VMware announced over the weekend that major security vulnerabilities have been identified in vCenter and ESXi 5.0, 5.1 and 5.5, as well as version 6.0 (6.0 Update 1 is not affected, and only the JMX RMI remote code execution issue affects vSphere 6.0). Three vulnerabilities have been identified in total, each affecting different versions.
ESXi OpenSLP Remote Code Execution
- Allows unauthenticated users to execute code remotely on an ESXi host
vCenter Server JMX RMI Remote Code Execution
- Allows an unauthenticated remote attacker who can connect to the service to execute arbitrary code on the vCenter Server
vCenter Server vpxd denial-of-service vulnerability
- Allows a remote user to cause a denial of service on the vpxd service through unsanitized heartbeat messages
The news was broken on both the VMware and The Register sites and I’d recommend reading more on both. The Register also gives some great background on how the issues were originally identified. The full advisory details, including links to the CVE references, can be viewed on the VMware Security Advisories site under VMSA-2015-0007.
If you are running vSphere 5.0 the recommendation is to upgrade to 5.0 Update 3e. For vSphere 5.1, upgrade to 5.1 Update 3. For vSphere 6.0 the recommendation is to patch with Update 1. vSphere 5.5, however, has some issues. To fix the denial-of-service and OpenSLP issues, the advice is to upgrade to vSphere 5.5 Update 2. To resolve the JMX RMI issue, though, VMware have confirmed that the fix is vSphere 5.5 Update 3, which was released in early September. But a new bug has been identified in Update 3 regarding snapshots: if a snapshot is deleted in vCenter it causes the VM to crash. Considering that the majority of snapshot-based backup solutions utilise VMware snapshots, this means that all VMs would reboot each night. Given that uptime is always a business and IT priority, it’s really not a feasible solution.
My advice would be to at least upgrade to vSphere 5.5 Update 2 if you can. Upgrade to vSphere 6.0 Update 1 if possible, but that may require considerable research and interoperability checks and may not be on your roadmap just yet. Do not install ESXi 5.5 Update 3 if your backup software depends on VMware snapshots.
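Whichever path you take, it’s worth confirming the build number before and after patching. These are the standard version checks on an ESXi host (run over SSH); they’re not taken from the advisory itself:

```shell
# Print the ESXi product version and build number
vmware -vl

# The same information via esxcli
esxcli system version get
```

Compare the build number returned against the build listed in VMSA-2015-0007 for your version to confirm the patch actually took.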
What are VMware Validated Designs?
VMware announced at VMworld earlier this year that they have been working on implementing VMware Validated Designs. This is a fantastic step by VMware and shows a maturity that has come from years of being the number one virtualisation platform. Cisco has had validated designs for years and I refer to them regularly when deploying Cisco-related infrastructure. Through the implementation of validated designs, VMware is helping the community develop and implement consistent designs across infrastructures, which will provide a consistency and familiarity not currently present. When a new platform is being deployed, the elements to consider can include compute, storage, network, security, automation and operations. These are not just static reference architectures; the validated designs are continuously updated.
This video gives a bit more of an explanation of what VMware Validated Designs are. The designs have been split into pods: Management, Edge and Compute. Management is made up of vCenter Server, vRealize Operations Manager, vRealize Log Insight and VMware Horizon. Network and security are provided by VMware NSX, and storage is provided by VSAN. The Edge pod provides additional NSX support to allow external access to compute workloads. The Compute pod does the heavy lifting, hosting the actual workloads.
Following on from installing vROps a few months back, I finally made the jump and installed the Blue Medora management packs for both Cisco UCS and NetApp to get greater visibility into my virtual environment and the underlying physical infrastructure. I’m really looking forward to seeing what these management packs have to offer. While I’m not going to cover the dashboards provided by the management packs in this post, it is something I plan on revisiting once they’ve been in use for a while and I’ve done a bit more playing around with them. The reason I’m posting this deployment process is that, despite Blue Medora having a decent installation guide, it’s not always 100% clear, so I’ve written this up to hopefully guide a few others through the process a bit more easily.
Cisco UCS Management Pack Deployment
Before you begin this deployment you can download trial versions from Blue Medora, and if you want a permanent installation you can purchase licenses from them.
1: In vRealize Operations Manager go to Administration -> Solutions
A while back I upgraded my vCenter and vSphere environment to 5.5 Update 2. As part of this upgrade VMware Tools was upgraded on most servers, except, that is, on vCenter itself. This wasn’t a major issue, but other issues began to arise when alerts came in for disk consolidation problems. On investigating, most KB articles pointed towards upgrading VMware Tools to fix the problem. So that’s what I tried. When running the VMware Tools installation on the vCenter VM I got an error that VMwareTools64.msi was not a valid installation package and to find the correct package to install. I tried a number of things to get this to work, but it just would not run VMwareTools64.msi. I also couldn’t update the VM through Update Manager.
The first step was to get the correct VMware Tools version as a standalone ISO. Since I performed the upgrade, VMware have released a new version of VMware Tools, version 10, and that’s the only one that can be downloaded from the support site. The version I was looking for is 9.4.5, and I didn’t want to install version 10 without first deploying it to the test environment. This led me to Vladan’s article, Manual Download of VMware Tools from VMware Website. Thanks to this article I was quickly able to get the VMware Tools package I needed: go to http://packages.vmware.com/tools and select the VMware Tools version you need to download. The ISO was added to the ISO datastore and mounted to the VM.
Following this I tried a number of different VMware KB articles but the one I finally found to work was KB1012693. This involved opening a command prompt, changing directory to the CD drive where VMware Tools was mounted and running the command:
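From memory of KB1012693, the fix boils down to running the Tools setup executable with the clean (/c) switch; the drive letter here is an assumption for wherever the ISO is mounted:

```shell
:: From a command prompt on the vCenter VM.
:: D: is assumed to be the drive where the VMware Tools ISO is mounted.
D:
:: Clear the broken MSI registration left by the previous VMware Tools install
setup64.exe /c
```

After the cleanup completes, the normal Tools installer should run without the invalid-package error.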
Once that completed, I re-ran the VMware Tools installation and it completed successfully. Following the server reboot, VMware Tools shows as up to date in vCenter.
I had the honour, and I use that term sarcastically, of having some backups fail recently following a TSM upgrade. While it’s not clear why the newer version of TSM failed, my guess is that how TSM sends API and other calls has changed, and that’s why the error came up. The new TSM version makes API calls based on a specific version of vSphere. As the environment was upgraded from vSphere 4 to 5 and onwards, the original license key edition was at the top of the license chain, and this is what was being interrogated by the APIs, so it failed to capture a valid backup.
What we were seeing was the backup software connecting and taking the snapshot, as shown in the vCenter GUI, but the transmission aborting with the following error:
08/12/2015 17:32:39.321 : vmvddksdk.cpp (1168): VixDiskLib: Error occurred when obtaining NFC ticket for: [DATASTORE_NAME] VM_NAME/VM_NAME.vmdk. Error 16064 at 3707.
08/12/2015 17:32:39.321 : vmvddksdk.cpp (1024): vddksdkPrintVixError(): VM name 'VM_NAME'.
08/12/2015 17:32:39.321 : vmvddksdk.cpp (1054): ANS9365E VMware vStorage API error for virtual machine 'VM_NAME'.
TSM function name : VixDiskLib_Open
TSM file : vmvddksdk.cpp (1669)
API return code : 16064
API error message : The host is not licensed for this feature
While it did not describe the exact issue, I did find a VMware KB article which mentions removing the license from the vCenter MOB (Managed Object Browser). The details, however, were not clear. Thankfully the community came to the rescue and I found the real solution in GSpark’s response in a Community thread. The overview was there but not the intimate detail, which is why I’ve documented the process here.
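For anyone skimming, a rough sketch of the MOB approach follows; the object and method names are from memory, so verify them against the community thread before invoking anything:

```shell
# Sketch only - the MOB is browsed over HTTPS in a web browser, not run as a script.
# 1. Browse to https://<vcenter-fqdn>/mob and log in with an administrator account
# 2. Navigate: content -> licenseManager
# 3. Invoke the RemoveLicense method against the stale (old-edition) license key
# 4. Re-run the failed backup to confirm the API now resolves the current licensed edition
```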
I’ve been working on an issue over the past couple of days where a backup has constantly been failing. The problem was isolated to the fact that the VM had a warning that its disks required consolidation. Nothing major, or so I thought. I had a look at the datastore where the VM resides and it has 185 snapshot VMDK disks. Well, that can’t be right! So I did a bit of investigation and found a number of VMware KB articles around the problem. The basic option is to follow KB 2003638 and run a basic consolidation by going to Snapshot -> Consolidate.
You’ll then be prompted to select Yes/No as you’ll have to consolidate the Redo logs. Select Yes.
At this point it looked as if the consolidation was going to work, but at about 20% it failed. The next error showed that the file was locked.
There are a number of recommendations for removing the lock on the file. One is to vMotion/Storage vMotion the VM to another host. Unfortunately, as these are both standalone ESXi hosts with no vMotion network or capabilities, that couldn’t be done. Some people recommend rebooting the ESXi host to release the lock but, as above, there was no vMotion network, and these hosts run production manufacturing systems and cannot just be randomly rebooted; waiting for a downtime approval would take too long. The next step was to restart the management agents on the ESXi host. This was done by connecting to the ESXi host via SSH and running the following commands:
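On ESXi 5.x the standard way to restart the management agents from an SSH session is:

```shell
# Restart the ESXi host daemon (hostd)
/etc/init.d/hostd restart

# Restart the vCenter agent on the host (vpxa)
/etc/init.d/vpxa restart
```

Restarting these agents does not affect running VMs, but the host will briefly drop out of vCenter while vpxa reconnects.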
Other posts in this series:
Step 20: Upgrade the ESXi hosts using Update Manager
20.1: The first step to carry out is to create a new baseline with the ESXi image. To do this go to Update Manager from the home page on the vSphere client
20.2: Click on the ESXi Images tab as you’ll need to upload the image before configuring a new baseline. Select Import ESXi image
20.3: Select the ESXi image that was downloaded earlier and click Next
Other posts in this series:
Step 19: Post Installation tasks
Issue 1 – SSO access for admins
19.1: Give permissions to admin users for access to SSO. Log into the web client as the administrator account.
19.2: Select Administration and then expand Single Sign-On. Select Users and Groups and select the groups tab. From here you can select Administrators
19.3: Select Add member
19.4: Select the required domain from the drop down menu
Other posts in this series:
Step 13 : Upgrade SRM
13.1: Upgrade the SRM server software first; once that has completed, update the SRA. Select the SRM software and run it.
13.2: Click Ok on the language settings
13.4: Go to C:\Windows\SysWOW64 and run odbcad32 to check which server and database the connector points to. You can then run the normal 64-bit ODBC administrator from Administrative Tools and add a new connection under System DSN
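As a side note, the 32-bit and 64-bit ODBC administrators live in counter-intuitively named folders on 64-bit Windows, which catches people out; these are the standard paths:

```shell
:: 32-bit ODBC Administrator (SysWOW64 hosts 32-bit binaries on 64-bit Windows)
C:\Windows\SysWOW64\odbcad32.exe

:: 64-bit ODBC Administrator
C:\Windows\System32\odbcad32.exe
```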
Click Next to continue
13.5: Click Next
Other posts in this series:
Step 10: Upgrade vCenter Inventory Service on Primary
10.1: Select vCenter Inventory Service and click Install
10.2: Leave the default language settings and click Ok
10.3: Click Next on the initial screen
10.4: Accept the EULA and click Next
10.5: Select to keep the existing data and click Next