
How To: VMware vCenter 5.0 to 5.5 Update 2 Upgrade – Part 6


Step 20: Upgrade the ESXi hosts using Update Manager

20.1: The first step is to create a new baseline containing the ESXi image. To do this, go to Update Manager from the home page of the vSphere Client

vCenter Upgrade Update Manager

20.2: Click on the ESXi Images tab, as you'll need to upload the image before configuring a new baseline. Select Import ESXi Image

Update Manager Import ISO Image

20.3: Select the ESXi image that was downloaded earlier and click Next
Update Manager Import ISO Image Select Image


How To: VMware vCenter 5.0 to 5.5 Update 2 Upgrade – Part 5


Step 19: Post Installation tasks

Issue 1 – SSO access for admins

19.1: Give permissions to admin users for access to SSO. Log into the web client as the administrator account.

vSphere Web Client

19.2: Select Administration and then expand Single Sign-On. Select Users and Groups and then select the Groups tab. From here you can select Administrators

vSphere Web Client SSO Setup

19.3: Select Add member

vSphere Web Client SSO Add Member

19.4: Select the required domain from the drop down menu

vSphere Web Client SSO Add Group


How To: VMware vCenter 5.0 to 5.5 Update 2 Upgrade – Part 4


Step 13: Upgrade SRM

13.1: Upgrade the SRM server software first and, once that has completed, update the SRA. Select the SRM installer and run it.

Update SRM 5.5

13.2: Click Ok on the language settings

Update SRM 5.5 Step 2

Update SRM 5.5 Step 3
Update SRM 5.5 Step 4

13.4: Go to C:\Windows\SysWOW64, run odbcad32 and check which server and database the connector is pointed to. You can then run the normal 64-bit ODBC Data Source Administrator from Administrative Tools and add a new connection under System DSN

Click next to continue

Update SRM 5.5 Step 5
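The reason for the two ODBC tools in step 13.4 is that 32-bit and 64-bit DSNs live in different registry branches on 64-bit Windows. The sketch below is a hypothetical illustration of those paths (it doesn't touch the registry itself); the branch names are standard Windows locations, not anything specific to SRM.

```python
# Illustrative only: where 64-bit Windows keeps System DSNs. The 32-bit
# odbcad32 in SysWOW64 reads the WOW6432Node branch, which is why it can
# show a different DSN list than the 64-bit tool in Administrative Tools.
def system_dsn_registry_path(bits):
    """Return the registry branch holding System DSNs for 32/64-bit ODBC."""
    if bits == 32:
        return r"HKLM\SOFTWARE\WOW6432Node\ODBC\ODBC.INI"
    if bits == 64:
        return r"HKLM\SOFTWARE\ODBC\ODBC.INI"
    raise ValueError("bits must be 32 or 64")
```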

13.5: Click Next

Update SRM 5.5 Step 6

How To: VMware vCenter 5.0 to 5.5 Update 2 Upgrade – Part 3


Step 10: Upgrade vCenter Inventory Service on Primary

10.1: Select vCenter Inventory Service and click Install

vCenter Inventory Service installation

10.2: Leave the default language settings and click Ok

vCenter Inventory Service installation step 2

10.3: Click Next on the initial screen

vCenter Inventory Service installation Step 3

10.4: Accept the EULA and click Next

vCenter Inventory Service installation Step 4

10.5: Select to keep the existing data and click Next

vCenter Inventory Service installation Step 5


How To: VMware vCenter 5.0 to 5.5 Update 2 Upgrade – Part 2


Step 7 – Unlink vCenter Server

7.1: Go to Start -> Programs -> VMware -> vCenter Server Linked Mode Configuration

vCenter Upgrade Break Linked Mode

7.2: When the configurator opens click on Next

vCenter Upgrade Break Linked Mode Step 2

7.3: Select Modify linked mode configuration and click Next

vCenter Upgrade Break Linked Mode Step 3

7.4: Leave Isolate this vCenter Server instance from linked mode group selected and click Next

vCenter Upgrade Break Linked Mode Step 4

7.5: Click Continue to remove the server from linked-mode

vCenter Upgrade Break Linked Mode Step 5
vCenter Upgrade Break Linked Mode Step 5 part 2


How To: VMware vCenter 5.0 to 5.5 Update 2 Upgrade – Part 1

Following on from a previous piece of work to convert vCenter from a physical to a virtual machine, I then had to upgrade vCenter from 5.0 to 5.5 Update 2 so that the drivers for Trend Micro Deep Security Manager would work on the ESXi hosts. We first tried a workaround of installing the ESXi 5.5 filter drivers for Trend on the 5.0 hosts, but it caused some PSODs on our Dev servers and VMware recommended upgrading the environment instead. It was on my to-do list for later in the year anyway, so it was good to get the upgrade out of the way. I documented the steps as I went and, while once again I didn't want to create a multi-part blog post, the sheer number of steps made it a requirement. I've broken the posts into a 6-part series covering the areas below:

Step 1 – Planning

1.1: Check Compatibility

The first thing to check is that all the components of your environment are compatible with the version of vSphere you want to upgrade to. The first step in this process is to gather the version details of all the installations and plug-ins you have and use the VMware Compatibility Guide – http://www.vmware.com/resources/compatibility/search.php – to verify that all the components listed are compatible, or at least find out which versions of your products are compatible and seek out information on the upgrade process for each of them. For example, in the matrix below we will be upgrading SRM from 5.0.1 to 5.5.1 to be at the latest version supported on vCenter 5.5 Update 2, and likewise for the IBM plug-ins and the SRA required for SRM.

Product Current Version Compatible Version
ESXi Host 5.0.0 5.5 Update 2
vCenter 5.0.0 5.5 Update 2
SRM 5.0.1 5.5.1
IBM SRA 2.1.0 2.2.0
Update Manager 5.0.0 5.5 Update 2
IBM TSM TDP 1.1 7.1
IBM Storage Mgmt Console 2.6.0 3.2.2 (supported on 5.5)
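The matrix above can also be captured as data so the plan can be sanity-checked in one place. This is a minimal sketch: the product names and versions are copied straight from the table, and nothing here queries a live environment.

```python
# Upgrade matrix from the compatibility check: product -> (current, target).
UPGRADE_MATRIX = {
    "ESXi Host": ("5.0.0", "5.5 Update 2"),
    "vCenter": ("5.0.0", "5.5 Update 2"),
    "SRM": ("5.0.1", "5.5.1"),
    "IBM SRA": ("2.1.0", "2.2.0"),
    "Update Manager": ("5.0.0", "5.5 Update 2"),
    "IBM TSM TDP": ("1.1", "7.1"),
    "IBM Storage Mgmt Console": ("2.6.0", "3.2.2"),
}

def pending_upgrades(matrix):
    """Return the products whose current version differs from the target."""
    return [name for name, (current, target) in matrix.items()
            if current != target]
```

Here every product still needs an upgrade, so `pending_upgrades` returns all seven.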

There is one other document to be aware of when planning the upgrade: the upgrade sequence matrix, which ensures the products are updated in the correct order. It can be found here – http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2057795
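As a rough illustration of why the sequence matters, a planned task list can be checked against the sanctioned order. The ordering below is a simplified reading of KB 2057795 for vSphere 5.5 (always confirm against the KB itself); the helper just verifies that no component is upgraded before its turn.

```python
# Simplified upgrade order based on my reading of KB 2057795 -- treat this
# as an assumption and verify against the KB before relying on it.
UPGRADE_ORDER = [
    "SSO",
    "Web Client",
    "Inventory Service",
    "vCenter Server",
    "Update Manager",
    "SRM",
    "ESXi hosts",
]

def in_sequence(planned):
    """True if the planned tasks appear in the sanctioned relative order."""
    positions = [UPGRADE_ORDER.index(task) for task in planned]
    return positions == sorted(positions)
```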

1.2: Download the vCenter 5.5.0 Update 2d software

Go to the following website – https://my.vmware.com/group/vmware/details?productId=353&downloadGroup=VC55U2D

Select the relevant version of vCenter and click on Download Now

vCenter Upgrade Planning Step 1

From here you’ll be prompted to log into your my.vmware.com account. Log in and accept the EULA

vCenter Upgrade Planning Step 2

The download will begin. To get the Custom ISOs for Cisco blades for this version go to: https://my.vmware.com/group/vmware/details?downloadGroup=ESXI55U2&productId=353#custom_iso and click Go To Downloads


How To: P2V of vCenter Server

I was recently tasked with upgrading a legacy vCenter environment to cater for an upgrade to Trend Deep Security Manager. As I was reviewing the environment I noticed that one of the vCenter servers was a physical server running on an IBM HS22 blade. This server was part of a linked-mode vCenter pair and, as the second vCenter was virtualized, it caught me by surprise that this one wasn't. Before beginning the work to upgrade vCenter from 5.0 to 5.5 and all its components, I decided to virtualize the physical vCenter server to make management easier down the road and to eliminate the reliance on physical hardware outside of the ESXi hosts themselves.

As all ESXi hosts were being managed by the vCenter I was trying to convert, I had to remove one host from the production cluster and isolate it so that it could be managed independently and used as the destination for the P2V in the vCenter Standalone Converter.

 

PREPARATION:

Step 1: Download vCenter Standalone Converter 5.5 from VMware site

1.1: Go to https://my.vmware.com/web/vmware/info/slug/infrastructure_operations_management/vmware_vcenter_converter_standalone/5_5 and download the installation file.

Step 2: Isolate an ESXi host to use as the destination of the conversion

2.1: Put the ESXi host in maintenance mode. Then right-click and Disconnect from vCenter. It will appear in italics and with a red X through it.

vCenter P2V step 1

2.2: Log on directly to the ESXi host using the root account

vCenter Server P2V Step 2

NetApp 7-Mode MetroCluster Disaster Recovery – Part 3

This is the last part of the 3-part post about the MetroCluster failover procedure. This section covers the giveback process and also a note about a final review. The other sections of this blog post can be found here:

  1. NetApp 7-Mode MetroCluster Disaster Recovery – Part 1
  2. NetApp 7-Mode MetroCluster Disaster Recovery – Part 2

Test Case 4 – Virtual Infrastructure Health Check

This test case covers a virtual infrastructure system check, not only to get an insight into the current status of the system but to also compare against the outcomes from test case 1.

4.1 – Log into vCenter using the desktop client or web client. Expand the virtual data center and verify all SiteB ESXi hosts are online

4.2 – Log onto NetApp OnCommand System Manager. Select the primary storage controller and open the application

4.3 – Expand SiteA/SiteB and expand primary storage controller, select Storage and Volumes. Volumes should all appear with online status

4.4 – Log into SolarWinds. Check the events from the last 2 hours and take note of any devices in the Node List which are currently red



NetApp 7-Mode MetroCluster Disaster Recovery – Part 2

This is part 2 of the 3-part post about the MetroCluster failover procedure. This section covers the physical kit shutdown and the failover process. The other sections of this blog post can be found here:

  1. NetApp 7-Mode MetroCluster Disaster Recovery – Part 1
  2. NetApp 7-Mode MetroCluster Disaster Recovery – Part 3

The planning and environment checks have taken place and now it's execution day. I'll go through how the test cases were followed during the testing itself. Please note that Site A (SiteA) is the site where the shutdown takes place; Site B (SiteB) is the failover site for the purpose of this test.

Test Case 1 – Virtual Infrastructure Health Check

This is a health check of all the major components before beginning the execution of the physical shutdown

1.1 – Log into Cisco UCS Manager on both sites using an admin account.

1.2 – Select the Servers tab and expand Servers -> Service Profiles -> root -> Sub-Organizations -> <SiteName>. A list of the blades installed in the relevant environment will appear here

1.3 – Verify the overall status. All blades should appear with an Ok status. Carry on to the next step

1.4 – Log into vCenter using the desktop client or web client. Select the vCenter server name at the top of the tree, select Alarms in the right-hand pane and select Triggered Alarms. No alarms should appear

1.5 – Verify all ESX hosts are online and not in maintenance mode

1.7 – Log onto NetApp OnCommand System Manager. Select the SiteA controller and open the application

1.8 – Expand SiteA/SiteB and expand both controllers, select Storage and Volumes. Verify that all volumes are online

1.9 – Launch Fabric MetroCluster Data Collector (FMC_DC) and verify that the configured node is Ok. The pre-configured FMC_DC object should return green – this means all links are healthy and a takeover can be initiated

DR failover Metrocluster FMDC
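The manual checks in test case 1 can be rolled up into a single pass/fail summary before pulling the trigger on the shutdown. This is a minimal sketch: the check names and statuses are illustrative stand-ins for the results gathered from UCS Manager, vCenter, System Manager and FMC_DC above.

```python
# Hypothetical pre-shutdown summary: each key is one of the manual checks
# from test case 1, each value is whether it came back healthy.
def failed_checks(results):
    """Return the checks that did not come back healthy, sorted by name."""
    return sorted(name for name, ok in results.items() if not ok)

checks = {
    "ucs_blades_ok": True,      # 1.1-1.3: all blades show Ok status
    "vcenter_no_alarms": True,  # 1.4: no triggered alarms
    "hosts_online": True,       # 1.5: no host in maintenance mode
    "volumes_online": True,     # 1.7-1.8: all volumes online
    "fmc_dc_green": False,      # 1.9: e.g. an interconnect link is down
}
```

Only proceed with the physical shutdown when the failed list is empty.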


NetApp 7-Mode MetroCluster Disaster Recovery – Part 1

Recently I had the honour of performing a NetApp 7-Mode MetroCluster DR test. After my previous outing, which can be read in its full gory detail in another blog post, I was suitably apprehensive about performing the test again. Following the last test I worked with NetApp Support to find the root cause of the DR failure. The final synopsis was that the Service Processor was still online while the DR site was down, which caused hardware-assisted takeover to kick in automatically. This meant a takeover was already running when the 'cf forcetakeover -d' command was issued. If the Service Processor stays online for even a fraction of a second longer than its controller, it will initiate a takeover. Local NetApp engineers confirmed this was the case thanks to another customer suffering a similar issue; they performed multiple tests with the Service Processor both connected and disconnected, and only the tests with the Service Processor disconnected were successful. However, it wasn't just the Service Processor. The DR procedure I followed was not suitable for the test. WARNING: DO NOT USE TR-3788 FROM NETAPP AS THE GUIDELINE FOR FULL SITE DR TESTING. You'll be in a world of pain if you do.
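The lesson above can be distilled into a simple pre-flight rule: never issue 'cf forcetakeover -d' while the disaster site's Service Processor is still reachable or a takeover is already in flight. The sketch below is a hypothetical encoding of that rule; the status inputs are illustrative booleans, not real ONTAP output.

```python
# Hedged pre-flight check for a forced takeover. In practice sp_online
# would come from checking the DR site's Service Processor connectivity,
# and takeover_in_progress from 'cf status' on the surviving controller.
def safe_to_forcetakeover(sp_online, takeover_in_progress):
    """Only force a takeover if the SP is dark and none is already running."""
    return not sp_online and not takeover_in_progress
```

With the Service Processor still online (the failure mode described above), the check refuses the forced takeover.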

I had intended this to be just one blog post, but it escalated quickly and had to be broken up. The first part covers an overview of the steps followed and the health checks carried out in advance. Part 2 covers the physical kit shutdown and the failover process. Part 3 goes into detail on the giveback process and some things that were noted during the DR test. To access the other parts of the post quickly, use the links below.

  1. NetApp 7-Mode MetroCluster Disaster Recovery – Part 2
  2. NetApp 7-Mode MetroCluster Disaster Recovery – Part 3
