
Blogs, community and other skills

Early this year I decided to up the ante a bit on my blogging. While I had started to take it more seriously the year before, I wanted to make a concerted effort this year. In the months running up to the end of 2014 the traffic on the blog had grown quite significantly from what it had previously been. This was at a point when I wasn’t putting out content all that regularly, so it came as a surprise and encouraged me to think about creating more. Anthony Burke over at NetworkInferno, a great blog to flick through if you get some downtime, wrote an article earlier this year called VMUG, Community and you (me), which completely summed up my reasons for keeping a blog. In that post Anthony talks about his VMUG contribution, his blog, his career and how other skills have developed, all thanks to taking an active part in the community.

For me, the blog is a means to share my thoughts and experiences and, probably most importantly, a way to cure professional isolation, much as it is for Anthony. I also see it as a way to provide assistance to someone else who may face similar challenges. I’ve been lucky enough to be dug out of some holes thanks to someone else taking the time to write up their experiences and fixes to problems, and I feel it’s only right that I reciprocate. Maintaining a blog and setting myself challenges to produce x number of blog posts does not come naturally to me. Writing doesn’t come naturally to me either; it’s something I’ve struggled with, but I’ve found that writing blog posts has been a great way of forcing myself to be more concise. Another upside, and this is invaluable really, is that it has helped me formulate my opinions and understanding of technology. Through researching topics to ensure that what I’m writing is accurate, I’ve gained a far more in-depth understanding of the core concepts of a number of technologies, and this has without doubt made me a better employee.

Read More


VMware Metro Storage Cluster Overview

VMware Metro Storage Cluster

VMware Metro Storage Cluster (vMSC) allows vCenter to stretch across two data centers in geographically dispersed locations. In normal circumstances, in vSphere 5.5 and below at least, vCenter would be deployed in Linked Mode so that two vCenters can be managed as one. With vMSC, however, it’s possible to have one vCenter manage all resources across two sites and leverage the underlying stretched storage and networking infrastructure. I’ve written previous posts on NetApp MetroCluster describing how a stretched storage cluster is spread across two disparate data centers. I’d also recommend reading a previous post on vMSC by Paul Meehan over on www.virtualizationsoftware.com. The idea behind this post is to provide the VMware view for the MetroCluster posts and to give a better idea of how MetroCluster storage links into virtualization environments.

The main benefit of a stretched cluster is that it enables workload and resource balancing across data centers. This helps companies reach near-zero RTOs and RPOs and ensure uptime of critical systems, as workloads can be migrated easily using vMotion and Storage vMotion. One thing to keep in mind regarding vMSC: it’s not really sold as a disaster recovery solution but rather as a disaster avoidance solution when linked with the underlying storage. Some of the other benefits of a stretched cluster are:

  • Workload mobility
  • Cross-site automated load balancing
  • Enhanced downtime avoidance
  • Disaster avoidance
  • System uptime and high availability

There are a number of storage vendors that provide the back-end storage required for a vMSC to work. I won’t go through the entire list but you can find out more on the VMware Compatibility Matrix site. The one I have experience with is NetApp MetroCluster, but I know of others from EMC and Hitachi at least. So what components make up a vMSC? It comes down to an extended layer 2 network across data centers, so that vMotions can take place with ease, and a resilient storage platform connected to ESXi via VMFS or NFS datastores. VMware vCenter itself does need some configuration changes, but nothing outside the scope of what a regular VMware admin can implement. A view of what a vMSC looks like is below; the networking and storage components have been simplified.

fabric metro cluster diagram
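To give a flavour of the configuration changes involved, below is a rough sketch based on vSphere 5.x-era vMSC guidance. Treat these as examples to verify against the vMSC white paper for your vSphere version rather than a checklist, as the setting names and defaults have moved around between releases:

    # vSphere HA advanced option: lets HA restart VMs that were powered
    # off in response to a storage (PDL) failure
    das.maskCleanShutdownEnabled = True

    # 5.0 U1/5.1: added to /etc/vmware/settings on each host so VMs are
    # killed on a Permanent Device Loss and HA can restart them elsewhere.
    # In 5.5 the equivalent is the advanced setting VMkernel.Boot.terminateVMOnPDL.
    Disk.terminateVMOnPDLDefault = True

    # 5.5: clean up devices that have entered a PDL state (default is already 1)
    esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 1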

 

Read More

NetApp MetroCluster Overview – Part 8 – Further Reading


Here are some links to further reading that will help you get a far deeper understanding of MetroCluster:

High Availability and MetroCluster Configuration Guide

MetroCluster Best Practices for Implementation

Configuring a stretch MetroCluster system with SAS disk shelves

Installing FC-to-SAS bridges and SAS disk shelves

A Continuous-Availability Solution for VMware vSphere and NetApp

MetroCluster Plug-in 1.0 for vSphere

MetroCluster on clustered Data ONTAP was not covered as part of this series, but if you’re looking at cDOT MetroCluster the documents below may be useful:

Data ONTAP 8.3

MetroCluster Management and Disaster Recovery Guide

MetroCluster Installation Express Guide

MetroCluster Installation and Configuration Guide


NetApp MetroCluster Overview – Part 7 – MetroCluster Tools

 

There aren’t many tools available specifically for MetroCluster, but I’ve added the ones I found below. If anyone knows of any others please let me know and I’ll update this post.

FMC_DC
FMC_DC can be downloaded from here -> http://mysupport.netapp.com/NOW/download/tools/FMC_DC/. It will require a NetApp NOW account.

Fabric MetroCluster Data Collector

FMC_DC is the Fabric MetroCluster Data Collector, which can be configured to gather information on all components (controllers, switches, bridges, etc.) of the MetroCluster infrastructure. Once the components have been added, a health check can be run. The health check appears as a card in the application and shows whether each component is healthy or needs further investigation.

I’d recommend having a look through this document to get started with FMC_DC:

http://community.netapp.com/t5/Developer-Network-Articles-and-Resources/FMC-DC-Starter-Guide/ta-p/86351

While FMC_DC doesn’t provide any management features, it does provide peace of mind that all components are configured so that a failover can succeed. If you’re doing a DR test I’d definitely recommend using it.

Read More


NetApp MetroCluster Overview – Part 6 – Best Practices and Recommendations

 

Below are some of the things to look out for with MetroCluster; they can be considered best practices and recommendations.

Disable change_fsid

One very important configuration change to make on MetroCluster controllers is to disable the change_fsid option immediately. If it is not disabled, all volumes and LUNs will be renamed during a failover, making it impossible for the volumes and LUNs to be referenced. This is really critical for LUNs.

To avoid the FSID change in the case of a site takeover, you can set the change_fsid option to off (the default is on). Setting this option to off has the following results if a site takeover is initiated by the cf forcetakeover -d command:

  • Data ONTAP refrains from changing the FSIDs of volumes and aggregates.
  • Users can continue to access their volumes after site takeover without remounting.
  • LUNs remain online.

If you don’t disable the change_fsid option in MetroCluster configurations, the following happens when the cf forcetakeover -d command is run:

  • Data ONTAP changes the file system IDs (FSIDs) of volumes and aggregates because ownership changes.
  • Because of the FSID change, clients must remount their volumes if a takeover occurs.
  • If using Logical Units (LUNs), the LUNs must also be brought back online after the takeover.
To disable it, run:

    options cf.takeover.change_fsid off
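If you want to check the current value first, running the option name without an argument prints the current setting:

    options cf.takeover.change_fsid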

MetroCluster RC file
Read More


NetApp MetroCluster Overview – Part 5 – Failure Scenarios for MetroCluster

 

Failover/Failure Scenarios for MetroCluster

I’m not going to re-invent the wheel here. These failure scenarios are all pretty self-explanatory and can be found in TR-3788.pdf. There are far more scenarios in that document, but here I’ll cover some of the most common types.

Scenario: Loss of power to disk shelf

MetroCluster Failure Disk Shelf

Expected behaviour: The relevant disks go offline and the plex is broken. There’s no disruption to data availability for hosts running HA (VMware High Availability) or FT (Fault Tolerance), and no change is detected by the ESXi server. When the shelf is powered back on, the plexes will resync automatically.

Impact on data availability: None
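As a quick sanity check once the shelf is powered back on, a couple of 7-Mode commands on the affected controller (aggr0 here is just an example aggregate name) will show the plex resync state:

    aggr status -v aggr0   # both plexes should be online and the aggregate mirrored
    sysconfig -r           # RAID/plex layout, including resync progress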

 

Scenario: Loss of one link in one disk loop

MetroCluster Failure Inter-Switch Link

Expected behaviour: A notification appears on the controller advising that disks are only accessible via one switch. There’s no disruption to data availability for hosts running HA or FT, and no change is detected by the ESXi server. When the connection is restored, an alert on the controller will confirm connectivity across two switches.

Impact on data availability: None
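On the fabric side, if Brocade switches are in use, the standard Fabric OS commands below (an illustrative sketch) are a handy way to confirm the ISL has come back cleanly:

    switchshow   # port states; the ISL port should be back online as an E_Port
    islshow      # lists the ISLs along with the remote switch, speed and bandwidth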

 

Scenario: Failure and Failback of Storage Controller

Read More


NetApp MetroCluster Overview – Part 4 – Cabling of Fabric MetroCluster

 

Cabling of Fabric MetroCluster

The cabling of a MetroCluster is key. Outside of some licensing, the cabling is really the only difference between a MetroCluster and a Mirrored HA pair. Yes, failover and failback are a bit more complex, but from a setup point of view the main difference is the cabling. There’s a large number of cables, and the configuration should all be mapped out before you begin putting equipment into your racks. I would heartily recommend reading the MetroCluster and High Availability Guide before starting, to understand your cabling requirements. Below is not a how-to on connecting everything, just an overview with a brief explanation. The NetApp document above is very detailed and should answer any questions you may have.

A simplified view of a Fabric MetroCluster is as follows:

Fabric Switch

I found this workflow in the NetApp documentation, which is quite useful as a guideline for how the bridges should be cabled.

Cable connection workflow
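Once everything is cabled up, a few 7-Mode commands (an illustrative sketch rather than an exhaustive checklist) are useful for verifying that both paths to the disks are seen:

    fcadmin config         # FC adapters are in the expected initiator/target state
    storage show disk -p   # each disk should list both a primary and a secondary path
    sysconfig -r           # RAID layout; confirm disks landed in the right pool (pool0/pool1)
    disk show -v           # disk ownership across the two controllers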

Read More


NetApp MetroCluster Overview – Part 3 – Fabric-Attached MetroCluster

 

What is Fabric-Attached MetroCluster?

A Fabric-Attached MetroCluster configuration can be implemented for distances greater than 500 metres. It connects the two storage nodes using four Brocade or Cisco Fibre Channel switches in a dual-fabric configuration for redundancy. Each site has two Fibre Channel switches, each of which is connected through an inter-switch link to a partner switch at the other site.

The inter-switch links are fibre connections that extend the storage fabric path, providing a greater distance between nodes than other HA pair solutions. By using four switches instead of two, redundancy is in place to avoid single points of failure in the switches and their connections.

The advantages of a fabric-attached MetroCluster configuration over a stretch MetroCluster configuration include the following:

  • Increased disaster protection via nodes being in separate geographical locations
  • Disk shelves and nodes are not connected directly to each other, but are connected to a fabric with multiple data routes ensuring no single point of failure.

The disadvantage is that there’s more cabling and there are more components involved in the way of fibre switches.
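A side note on those ISLs: long-distance links generally need extra buffer credits, and on Brocade switches the Fabric OS commands below (shown as a sketch; the exact long-distance configuration syntax varies by Fabric OS version) help confirm what a port has been allocated:

    portbuffershow   # buffer credit allocation per port, critical on long-distance ISLs
    portcfgshow      # per-port configuration, including any long-distance (LD/LS) mode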

Fabric MetroCluster requirements

Read More


NetApp MetroCluster Overview – Part 2 – Stretch MetroCluster

 

What is a Stretch MetroCluster

To understand Stretch MetroCluster you first need to understand how a HA pair operates. A Stretch MetroCluster is basically the next level up from a HA pair, or a Mirrored HA pair to be more specific.

Standard HA Pair

In a HA pair, which is a very common NetApp controller deployment, the cluster can handle the outage of a physical link or of an entire controller and still provide access to the underlying storage without impacting data access for end users. Each controller in the HA pair either shares the same set of disks or owns its own distinct set, but either way, in the event of a controller failure all reads and writes are sent to the remaining controller, which can still access the failed controller’s disks. There is a HA interconnect between the controllers that’s used both for keep-alive monitoring and for mirroring of NVRAM. The HA pair provides fault tolerance and allows non-disruptive upgrades, as a takeover and giveback can be performed for planned migration of reads/writes to the second controller in the HA pair.
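In 7-Mode the takeover and giveback mentioned above are driven by the cf commands. A minimal illustrative sequence for a planned takeover looks like this:

    cf status     # check the HA state and confirm a takeover is possible
    cf takeover   # this node takes over the partner's identity and storage
    # (perform the planned work on the partner node, then once it's back up:)
    cf giveback   # hand the partner's resources back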

Next up is a Mirrored HA pair. Mirrored HA pairs maintain two copies of all data in the form of plexes. These are continually updated synchronously using SyncMirror, which provides protection in the event of disk failures. You can also set the mirroring to be asynchronous if there is a need for that. The major drawback to Mirrored HA pairs is that they don’t provide failover to the partner node if an entire node, including its storage, is lost. The controller pair also needs to be within the 5 metre SAS limit. This is where Stretch MetroCluster comes in.
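To make the plex idea concrete, here’s a minimal 7-Mode sketch (the aggregate name is just an example) of mirroring an existing aggregate with SyncMirror and checking the result:

    aggr mirror aggr1      # add a second plex to aggr1 using disks from the remote pool
    aggr status -v aggr1   # shows both plexes; the aggregate should report mirrored once synced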

Read More


NetApp MetroCluster Overview – Part 1 – What is MetroCluster?

During the past couple of years I’ve been working on FlexPod solutions, and more recently I’ve been exposed to NetApp FlexPod MetroCluster. This has led me to do quite a bit of research and reading about MetroCluster solutions, and I thought I’d share some of that knowledge. I wanted to put together a post to help anyone else who needs to get a better understanding of MetroCluster infrastructure. It got a bit out of hand, so to make it easier to read I’ve split it into a number of parts.

 

MetroCluster is a term that is often heard but, I believe, rarely understood. It adds some extra complexity into every aspect of the infrastructure, but with my own technical bias I love that, as it gives me something else to learn and play with.

Multiple vendors now provide a MetroCluster or metro-availability solution, but my focus is on NetApp, and in particular 7-Mode MetroCluster. Any reference I make to MetroCluster from here on is to NetApp MetroCluster. If your clients or company value disaster avoidance, business continuity, fault tolerance and overall infrastructure resilience, then you really need to look at a MetroCluster solution. You will also need to have some deep pockets.

As the engineer supporting such solutions and performing disaster recovery tests, I can attest to the power of a MetroCluster solution to attain zero downtime and data resilience. Even in one instance where I almost brought it to its knees, it still soldiered on.

What is a MetroCluster?

Read More