post

Cisco UCS – Unable to deploy Service Profile from Template – [FSM:Failed] Configuring service profile

I was recently deploying new blades within the UCS chassis but found that I was unable to. In one UCS domain there were no issues, but in the second UCS domain it failed with the error [FSM:Failed] Configuring service profile xxx (FSM:sam:dme:LsServerConfigure), along with a number of other minor warnings. The service profile would appear in the list but it was highlighted as having an issue and could not be assigned to any blades. After a bit of searching around I found an answer on the Cisco Communities forum.

FSM failed

I also created a tech support file and downloaded it to my desktop, extracted the compressed files and opened sam_techsupportinfo in Notepad. I did a search for errors and found that there was an issue resolving default identifiers from UCS Central.
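Searching the extracted file doesn't have to be done by hand; a minimal sketch of the same search from a shell, assuming sam_techsupportinfo has been extracted into the current directory:

```shell
#!/bin/sh
# print the first matching error lines (with line numbers) from the extracted tech support file
if [ -f sam_techsupportinfo ]; then
    grep -in "error" sam_techsupportinfo | head -20
fi
```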

UCS-SP-deploy-fail-1

The solution was to unregister the server from UCS Central and then deploy the Service Profile again. To unregister from UCS Central go to Admin Tab -> Communication Management -> UCS Central and select Unregister from UCS Central. Before unregistering, make sure that the Policy resolution controls are how you want them to be. In my case they were all set to local, so unregistering from UCS Central had no real impact. Many users will have UCS Central integration configured to work as it was designed and will use Global policies; unregistering from UCS Central can have a knock-on impact on how those policies are managed.

UCS-SP-deploy-fail-2

fsm-fail-UCS-Central-unregister

Once the unregister had completed I ran the service profile deployment from the template again and it worked this time. I believe the issue is down to a time sync issue between UCS and UCS Central. I’m currently working on a permanent workaround.

post

Cisco UCS – FSM:FAILED: Ethernet traffic flow monitoring configuration error

During a recent Cisco UCS upgrade I noticed an error for ethlanflowmon which was a critical alert. I hadn’t seen the problem before and it occurred right after I had upgraded the UCS Manager firmware as per the steps listed in a previous post I wrote about UCS Firmware Upgrades. Before proceeding to upgrade the Fabric Interconnects I wanted to clear all alerts where possible. The alert “FSM:FAILED: Ethernet traffic flow monitoring configuration error” on both switches was a cause for concern.

ethlanflowmon

On further investigation I found that this is a known bug when upgrading to versions 2.2(2) and above. I was upgrading from version 2.2(1d) to 2.2(3d). Despite being a critical alert, the issue does not impact any services. The new UCSM software is looking for new features on the FI that do not exist yet as it has not been upgraded. As soon as you upgrade the FIs this critical alert will clear. More information can be found on Cisco’s support page for the bug CSCul11595.

 

post

Cisco UCS – CIMC did not detect storage controller error

During a recent UCS firmware upgrade I had quite a few blades show up with the error “CIMC did not detect storage”. Within UCSM I could see that the blade had a critical alert. It initially started after I upgraded the UCS Manager firmware as documented in a previous post I wrote about UCS Firmware Upgrades. I did some searching around to find what may be causing the issue and the best answer I could find, from the Cisco community forums, was to disassociate the blade, decommission it and reseat it within the chassis. I later spoke to a Cisco engineer and he advised the same steps, but said it was also possible to do without reseating the blade. This also looks like it’s a problem when upgrading from 2.2(1d) to other versions of UCSM, but I haven’t been able to validate whether it’s only that version or if it also affects others.

The full error I saw was for code F1004: “Controller 1 on server 2/1 is inoperable. Reason: CIMC did not detect storage”.

cimc error

Within UCSM I could see there was an issue with the blade.

cimc error server blade

Before proceeding with the upgrade of the FIs, IOMs and blades themselves I wanted to clear any alerts within UCSM, particularly critical alerts. The steps I followed to bring the blade back online were to go to the blade and select Server Maintenance.

cimc server maintenance

Read More

post

UCS Director – Schedule Database Backup script

I had a problem a while ago where UCS Director crashed during a MetroCluster failover test. It was caused by a delay in the transfer of writable disks on the storage, which in turn caused the VM kernel to panic and set the disk to read-only. After that problem, and due to other restore issues within the infrastructure as well as not having a backup prior to the failover test, I was left with a dead UCS Director appliance. It was essentially completely buggered as the Postgres database had become corrupt. Cisco support were unable to resolve the problem and it took a lot of playing around with NetApp snapshots to pull back a somewhat clean copy of the appliance from before the failover test. Really messy, and I wouldn’t recommend it.

Since then I’ve been capturing weekly backups of the UCS Director database to an FTP server so I have a copy of the DB to restore should there be any problems with the appliance again. This script is not supported by Cisco, so please be aware of that before implementing it. To set up the backup, create a DB_BACKUP file in /usr/local/etc with the following:

#!/bin/sh
# Generate an FTP command script: upload_script server login password put localfile remotefile
upload_script(){
 echo "verbose"
 echo "open $1"
 sleep 2
 echo "user $2 $3"
 sleep 3
 shift 3
 echo "bin"
 echo "$@"
 sleep 10
 echo "quit"
}

doftpput(){
 # pipe the generated commands into ftp (-i no prompting, -n no auto-login, -p passive mode)
 upload_script "$1" "$2" "$3" put "$4" "$5" | /usr/bin/ftp -i -n -p
}

# stop the UCS Director services and take a database backup
/opt/infra/stopInfraAll.sh
/opt/infra/dbBackupRestore.sh backup
BKFILE=/tmp/database_backup.tar.gz
if [ ! -f "$BKFILE" ]
then
 echo "Backup failed."
 # restart the services before bailing out (use exit, not return, at script level)
 nohup /opt/infra/startInfraAll.sh &
 exit 1
fi
# timestamped name for the remote copy
export NEWFILE="cuic_backup_`date '+%m-%d-%Y-%H-%M-%S'`.tar.gz"
export FTPSERVER=xxx.xxx.xxx.xxx
export FTPLOGIN=< ftp user name >
export FTPPASS=<ftp password>
doftpput "$FTPSERVER" "$FTPLOGIN" "$FTPPASS" "$BKFILE" "$NEWFILE"
# restart the UCS Director services in the background
nohup /opt/infra/startInfraAll.sh &

exit 0

Next you’ll need to edit the cron jobs on the appliance. Use the crontab -e command to edit the schedule settings and enter:

1 2 * * 0 /usr/local/etc/DB_BACKUP > /dev/null 2>&1
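That cron entry runs the script at 02:01 every Sunday. As a quick sanity check before the first scheduled run, you can preview the timestamped remote filename on its own using the same date format string the script uses:

```shell
#!/bin/sh
# preview the remote filename the backup script will generate
NEWFILE="cuic_backup_$(date '+%m-%d-%Y-%H-%M-%S').tar.gz"
echo "$NEWFILE"
```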

 

And there you go, you now have a weekly scheduled backup of your UCS Director database.

DB backup patch

post

UCS Director – BareMetal Agent Installation Version 5.2, Upgrade to 5.3

UCS Director Baremetal Agent Installation:

Before commencing the installation of the Baremetal Agent appliance I would recommend that UCS Director is fully installed and available. If you need to install UCS Director as an initial installation there’s some great documentation on the Cisco site, but you can also check out the blog post by Jeremy Waldrop. It’s for an older version of UCS Director but the installation steps still apply to the current version. If you are upgrading from a previous version of UCS Director, you can check out a previous post I did on upgrading UCS Director from 5.1 to 5.3.

Useful Documents:

Cisco UCS Director Baremetal Agent Installation and Configuration Guide, Release 5.2

Cisco UCS Director Baremetal Agent Installation and Configuration Guide, Release 5.3

Download Software:

Go to the Cisco Download page for UCS Director and first select UCS Director 5.3. Download the Cisco UCS Director Baremetal Agent Patch 5.3.0.0.

UCSD Bare Metal Upgrade Download

Accept the license agreement

UCSD Bare Metal Upgrade Download license agreement

The download will begin

UCSD Bare Metal Upgrade Downloaded File

Next, go back to the main UCS Director download page and select UCS Director 5.2.

UCSD Bare Metal Upgrade OVF Deployment

Accept the license agreement

UCSD Bare Metal Upgrade license agreement

The download will begin

UCSD Bare Metal Upgrade Patch Download File

Read More

post

UCS Director – Upgrade Version 5.1 to 5.3

Cisco have recently released a new version of their orchestration product, UCS Director. The new release is version 5.3 and includes a raft of new features, the majority of which are around improved reports and APIC support. Another new feature is support for NetApp OnTap 8.3. My primary reason for performing the upgrade is to leverage the reports and the enhancements to workflow execution. It’s also been almost a year since the 5.1 installation was performed and I want to keep my systems as up to date as possible. I’m currently running UCS Director 5.1.0 and Baremetal Agent 5.0.

Some of the new features in UCSD 5.3 are:

  • Support for C880 M4 Server
  • Support for Versa Stack and IBM Storwize
  • Enhancements to EMC RecoverPoint
  • Enhancements to VMware vSphere Support (VSAN Support)
  • Enhancements to Application Controllers (Cisco APIC)
  • Enhancements to workflow execution
  • Enhancements to the script module
  • Enhancements to UCSD REST APIs
  • Enhancements to Managing NetApp Accounts (including support for OnTap 8.3)
  • Enhancements to Cost Models and Chargeback features
  • Changes to Report APIs

You can find more about the features in the release over on the Cisco UCS Director 5.3 Release Notes site.

There are two components to the release, UCS Director itself and the Baremetal Agent upgrade. The supported upgrade paths for both components are:

Cisco UCS Director

Current Release    Direct Upgrade    Supported Upgrade Path
Release 4.0.x.x    No                4.0 > 4.1 > 5.1 > 5.3
Release 4.1.x.x    No                4.1 > 5.1 > 5.3
Release 5.0.x.x    No                5.0 > 5.1 or 5.2 > 5.3
Release 5.1.x.x    Yes               5.1 > 5.3
Release 5.2.x.x    Yes               5.2 > 5.3

Read More

Leap Second Year – Impact on Cisco Equipment

Our network engineer sent out an email last week about a potential bug due to this year being a Leap Second Year. This wasn’t something I was aware of before, so I did a bit of searching into not only the impact of the bug but also what exactly a leap second is. As it turns out, due to rotational variations of the planet, atomic time can drift out of sync with the earth’s rotation. When the difference reaches 0.9 seconds the International Earth Rotation and Reference Systems Service (IERS) announces that a leap second will be added to the clock.

At midnight on June 30 this year the world atomic clock will have one second added to align it with variances in the earth’s rotation. This is not the first occurrence: there have been 26 of these additional seconds added to the atomic clock since 1972, the last of them in 2012. So what’s the big deal? Well, since the vast majority of computer systems use NTP to lock in their time settings, the additional second will cause the same second to occur twice, and this has the potential to cause some damage or downtime due to reboots. In 2012 some high profile companies such as Qantas, LinkedIn and Yelp suffered outages as their equipment rebooted because it wasn’t able to handle the leap second. Cisco has worked to put software/firmware updates or workarounds in place to help their customers resolve any potential impact. You can find more information about the Leap Second over on Cisco’s site.
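On a host running ntpd you can check whether your upstream NTP servers have armed the leap flag by querying the leap system variable. A sketch of what that looks like (output will vary by host and sync state):

```
$ ntpq -c "rv 0 leap"
leap=00
```

leap=00 means no leap event is pending; leap=01 means a second will be inserted at the end of the current day.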

As soon as I read the email I began to check out which systems are affected by this problem. The focus was obviously on the Cisco equipment within our Flexpod environment. This includes Nexus 7000, Nexus 5000, UCS Manager, UCS Fabric Interconnects, Cisco MDS switches and lastly Cisco UCM sitting on the infrastructure. I’ll go through each system, the symptoms, known affected systems and known firmware fixes. For more information on each component click on the header of the section and it’ll bring you directly to the Cisco bug search site.

Cisco Nexus 7000:

When the leap second update occurs, an N7K SUP1 could have the kernel hit what is known as a “livelock” condition under the following circumstances:

a. When the NTP server pushes the update to the N7K NTPd client, which in turn schedules the update to the kernel. Most NTP servers should have made this push 24 hours before June 30th.
b. When the NTP server actually updates the clock

Workaround:

On switches configured for NTP and running affected code, the following workaround can be used:
1) Remove the NTP/PTP configuration on the switch at least two days prior to the June 30, 2015 leap second event date.
2) Add the NTP/PTP configuration back on the switch after the leap second event date (July 1, 2015).
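The removal step can be sketched at the NX-OS CLI; the server address 10.0.0.1 below is a placeholder for your actual NTP server, and you should note the existing configuration first so it can be re-applied after July 1:

```
switch# show running-config | include ntp
switch# configure terminal
switch(config)# no ntp server 10.0.0.1
switch(config)# end
switch# copy running-config startup-config
```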

Known Affected Releases:

5.5(1)E2, 5.5(2), 6.0(4)

Known Fixed Releases:

5.2(6.16)S0, 5.2(7), 6.1(1)S28, 6.1(1.30)S0, 6.1(1.69), 6.2(0.217), 6.2(2)

Read More

post

UCS Central Upgrade Version 1.3 & Overview

While off on annual leave recently I had a few minutes to spare to look through Twitter and came across a tweet from Adam J Bergh (@ajbergh) about a remote code execution vulnerability in Cisco UCS Central. You can read more about the threat over on threatpost.com, but the synopsis is that “an exploit could allow the attacker to execute arbitrary commands on the underlying operating system with the privileges of the root user”. UCS Central version 1.2 and earlier are affected, so it’s time to upgrade, particularly since the vulnerability score is at the highest severity of 10. Before I go on I want to thank Adam for his tweet and for highlighting the issue in the first place.

Pre-Requisites:

There are different steps to perform during the upgrade depending on whether UCS Central is in standalone mode or part of a cluster. You can find more information about both methods in the UCS Central Install and Upgrade Guide. Some of the key things to keep in mind are the supported upgrade paths and the prerequisites before beginning the upgrade.

Important:

  • UCS Central 1.3 requires a minimum of 12 GB RAM and 40 GB of storage space (otherwise the upgrade will fail)
  • Use the ISO image for an upgrade to UCS Central
  • After the upgrade clear the browser cache before logging into the Cisco UCS Central GUI
  • Make sure UCS Manager is 2.1(2) or newer
  • Make sure to take a full state backup before starting the Upgrade Process

Upgrade Paths:

  • From 1.1(2a) to 1.3(1a)
  • From 1.2 to 1.3(1a)

Note: I’m running version 1.1(2a)

New Features:

Some of the new features in version 1.3 include:

  • HTML5 UI: New task based HTML5 user interface.
  • KVM Hypervisor Support: Ability to install Cisco UCS Central in KVM Hypervisor
  • Scheduled backup: Ability to schedule domain backup time. Provides you flexibility to schedule different backup times for different domain groups.
  • Domain specific ID pools: The domain specific ID pools are now available to global service profiles.
  • NFS shared storage: Support for NFS instead of RDM for the shared storage is required for Cisco UCS Central cluster installation for high availability.
  • vLAN consumption for Local Service Profiles: Ability to push vLANs to the UCS Manager instance through Cisco UCS Central CLI only without having to deploy a service profile that pulls the vLANs.
  • Support for Cisco M-Series Servers.
  • Connecting to SQL server that uses dynamic port.
  • Support for SQL 2014 database and Oracle 12c Database.

I’m really looking forward to seeing what the new HTML 5 UI is like. The initial screenshots I’ve seen are awesome. There’s a nice little introduction from Cisco over on their support site. Also, Jacob Van Ewyk has written a really informative article over on Cisco Communities with details about the UCS Central User Interface Reworked with UCS Central 1.3.

Upgrade Steps:

Read More

post

How To: Cisco UCS Firmware Upgrade

Edit: 22-Jan-2016

A recent comment highlighted that I was missing a step during Step 8: UPGRADE THE FABRIC INTERCONNECTS AND I/O MODULES. This was to manually change the primary status for the Fabric Interconnects so that the recently upgraded Fabric Interconnect becomes primary. This way your environment remains accessible while the second FI is being upgraded. I’ve updated this step to include the manual failover steps on the FI. It is a step I always follow when performing upgrades, but I can’t think of why I didn’t originally put it into the post.


Recently I was tasked with performing an upgrade to our UCS environment to support some new B200 M4 blades. The current firmware version only supported the B200 M3 blades. As part of the process I performed the below steps to complete the upgrade, split into a planning phase and an implementation/upgrade phase. Upgrading firmware of any system can lead to potential outages; things have definitely improved over the past decade, but it’s imperative that you plan correctly to make the implementation process go without any hitches. With UCS firmware there is a specific order to follow during the upgrade process:

  1. Upgrade UCS Manager
  2. Upgrade Fabric Interconnect B (subordinate) & Fabric B I/O modules
  3. Upgrade Fabric Interconnect A (primary) & Fabric A I/O modules
  4. Upgrade Blade firmware, placing each ESXi host into maintenance mode in turn

During the upgrade process, and particularly during the Fabric Interconnect and I/O module upgrades, you will see a swarm of alerts coming from UCSM. This is expected, as some of the links will be unavailable as part of the upgrade process, so wait until all the steps have been completed before raising an issue with support. As a caveat, however, if there is anything that really stands out as not being right, open a call with support and get it fixed before proceeding to the next step. During my upgrade process some of the blades showed critical alerts that did not clear, and those blades needed to be decommissioned and re-acknowledged to resolve it. This problem stood out and it wasn’t hard to recognise that it was more serious than the other alerts.

PLANNING PHASE:

You will need to check the release requirements from Cisco regarding the upgrade path from your current firmware version to the desired version and also the capabilities that the desired firmware version contains. The upgrade process takes some time so it’s best to review everything in advance and not have to do this on the day of the upgrade itself. For instance during this upgrade process I needed to ensure capability to support B200 M4 blades and VIC 1340 modules as they had been purchased as part of an order for a new project. The steps to carry out in the planning phase are:

Step 1: Verify the Upgrade requirements

Check the release version notes on Cisco’s website for the version you want to upgrade to. For this example we are upgrading to version 2.2

http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/release/notes/ucs_2_2_rn.html

Read More

Fix: Cisco UCSM – Default Keyring’s certificate is invalid

Recently, during some prep work for a UCS firmware upgrade, I noticed a major alert showing that the keyring certificate was invalid. At first I was a bit concerned, but since it didn’t affect my login to UCS Manager I assumed it wasn’t too serious. After a bit of searching around the internet I found the CLI Configuration Guide for UCS (page 6) on Cisco’s site, which shows the quick and easy fix to the problem.

planning-fault check major

Open an SSH session to the IP address/hostname of UCS Manager. It will connect to the primary Fabric Interconnect. Enter the commands in the order of steps below:

Step 1 UCS-A# scope security

Step 2 UCS-A /security # scope keyring default

Step 3 UCS-A /security/keyring # set regenerate yes

Step 4 UCS-A /security/keyring # commit-buffer

After the commit-buffer command has been issued, all GUI sessions will be disconnected and you will need to log in again. When you next log in you’ll be prompted to accept the new certificate. Once accepted, UCSM will open and the alert will be gone.

All fairly quick and painless!