
Cisco Champions at CLMEL

Cisco Live Melbourne has come and gone for another year, and this year was without a doubt the best of all the years I’ve attended so far. This was my 3rd year at CLMEL (#CLMEL) and it was an action-packed week. At previous events I spent most of my time in the breakout sessions, giving myself a migraine from the amount of information I tried to cram in. This year I went in community mode. Being a Cisco Champion I was lucky enough to take part in some special events, get some nice perks such as prime seats for the keynote, and interact with the other Cisco Champions. The number of Cisco Champions in Australia saw a significant increase in 2017, and the group is heavily weighted towards Melbourne, so CLMEL provided the ideal opportunity to meet new people.

CL-Mel-Champs

Last year there were no real Champion-specific events, so it was great to see some special Cisco Champions events organised this year to let the Champions meet up. Veritas, the event organisers, were on hand to assist with the Cisco Champion events throughout the week. A massive thank you to Freya for keeping things in check across those few days. A huge thank you also goes to Brandon Prebynski and Lauren Friedman of the Cisco Champions program for getting everything organised on the back end. The value they added to the program during Cisco Live this year cannot be overstated.

The first order of business on Day 1, Tuesday, was the Data Centre Innovation Day. This session provided an inside look at the upcoming technology roadmap for data centre tech. The Data Centre Innovation Day is invite-only and was organised for me by Lauren Friedman (massive thanks for that). I found the information on the upcoming roadmaps for the UCS Compute, UCS Central and UCS Director platforms fascinating. I can’t divulge anything as it was under NDA, but I can say some of it is pretty cool. One thing they did discuss which I can mention is the new interoperability matrix tool, which has been updated to make it easier to search compatibility requirements. I haven’t played around with it yet but will most likely be using it for my next planned upgrade.


Melbourne DCUG Superstar Session

The Cisco DCUG has been running for almost a year now and we’ve been very lucky with the support we’ve received from both Cisco and the IT community. Back in March (I know I’m well behind the times here due to other commitments) we were immensely privileged to have some top speakers present to the local DCUG.

Cisco Live’s opening day fell on the same day as our monthly DCUG meeting, so it made sense to try to get some of the heavy hitters over from the US to present for us. Cisco DCUG ran with superstars Lauren Malhoit and Remi Philippe. Lauren is well known within the IT community for her work on the In Tech We Trust podcast, but also through her work on ACI. She has a Pluralsight course on ACI if you’re interested in learning more about the Cisco technology, and she recently jumped into a new role at TechWiseTV. Lauren is also the author of a couple of books and an avid blogger for AdaptingIT.com and VirtualizationAdmin.com. She is a massive presence within the technology community and I was immensely excited when she agreed to present at the DCUG. Remi is a TME within Cisco’s INSBU with a heavy focus on Tetration, the data centre analytics platform. A massive shout-out goes to Rob Tappenden from Cisco ANZ for helping to organise such quality speakers and making the initial contact. A small shout-out (almost at whisper level) goes to Brett Johnson from vBrownBag for letting us know Lauren was making the trip out to Melbourne.

CLMEL-DUCG-superstars



Cisco Live Melbourne 2017


Cisco Live time has rolled around again for another year. I’ve been really looking forward to this since before the Christmas break and it’s kind of snuck up on me in the end. This year I’ll be taking part in the Data Centre Innovation Day which will provide the opportunity to interact with key Cisco executives and data centre experts on current and emerging challenges and trends.

Last year I spent quite a bit of time interacting with the guys in the World of Solutions and attending some full-on breakout sessions. This year I’ll once again be hitting up some breakout sessions, but I also plan on spending more time in the DevNet zone to get up to speed on scripting, Git, REST APIs and DevOps. DevNet was not very large last year, but I expect it to be bigger this year and the sessions even harder to get into. You cannot book these sessions in advance, so it’s first come, first served. If you can spare the time, though, it’s definitely worth your while going.

The sessions I plan to attend this year are focused on data centre technology, and I’m really keen to learn more about Tetration and container technology. I’m also looking at hybrid cloud integration. My main purpose outside of the technical brain dumps is networking: meeting and interacting with peers and promoting community engagement. It’s also an opportunity to focus on personal development, take some time out of the office to review where I’m at technically and what gaps exist, and begin making plans on what I’d like to focus on in the coming year. As a Cisco Champion for 2017 there are some special events and perks at Cisco Live, and having the opportunity to meet the other Cisco Champions is too good to miss. Our regular Cisco Data Centre User Group also takes place on the first night of Cisco Live, and we’ve been extremely fortunate to have fantastic presenters in Remi Philippe and Lauren Malhoit. If anyone happens to be in Melbourne on Tuesday the 7th, please feel free to come along to the Crafty Squire on Russell Street for a 6:30pm start.

cisco-live-mel-2017
This year I’ve taken the plunge to be part of a panel discussing “Build Your Personal Brand with Social Media”, part of the Cisco Champions program during Cisco Live. This will be my first time in front of such an audience and I’m both anxious and excited. If you happen to be at Cisco Live on Wednesday, drop by the Cisco Think Tank sessions at 2pm.

 


Fix: NetApp DataFabric Manager Certificate has expired

Following the upgrade of DFM from version 5.2.0 to 5.2.1 I started to see a warning in the OnCommand Management Console that the NetApp DataFabric Manager certificate had expired and that a new one needed to be created.

dfm-cert-failure

Surprisingly, the cert had expired ages ago but neither I nor anyone else had noticed. The first step in fixing the issue was to check the SSL service details to find the expiry date of the current certificate. To find this, open a command prompt and run:

dfm ssl service detail

If the certificate’s “not valid after” date is in the past (in my case Dec 9 2015), then a new one needs to be created.

dfm-check-cert
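As a side note, if you’d rather confirm the expiry from another machine, you can pull the certificate straight off the DFM web service with openssl. This is a minimal sketch and assumes the web UI is answering on port 443; adjust the hostname and port to suit your environment:

# Print the 'notAfter' (expiry) date of the certificate presented by the DFM server
echo | openssl s_client -connect dfm-server.example.com:443 2>/dev/null | openssl x509 -noout -enddate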

The steps to create a new certificate are:

dfm ssl server setup
KeySize: 2048
Country Name: AU (or whatever two-letter country code suits your needs)
State or Province: <insert your state name>
Locality Name: <insert your city>
Organization Name: <insert company name>
Common Name: <insert FQDN of your DFM server>
Email Address: <insert your address>

Once the cert has been created you’ll be prompted to restart the http services.

dfm-check-cert1

Once you restart the services you can acknowledge the alert in the OnCommand Management Console and the alert will be gone.
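Once the services are back up it’s also worth re-running the detail command to confirm the new expiry date is in effect (the exact field layout may differ between DFM versions):

dfm ssl service detail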

Fix: Cannot run upgrade script on host, ESXi 5.5 

During a recent upgrade I found that one of the ESXi hosts just would not update using Update Manager. The error I was seeing was “Cannot run upgrade script on host”.

After a bit of searching I found an article relating to the ESXi 5.1 to 5.5 upgrade, but the steps worked well to fix the issue I was seeing.

In order to fix the issue I performed the following steps:

Step 1: Disable HA for the cluster

Disable Cluster HA

Step 2: Go to vCenter Networking. Select the distributed vSwitch and then select the Hosts tab. From here, right-click on the host you need to reboot and select Remove from vSphere Distributed Switch.

Remove Distributed Switch

Click Yes to remove the host from the switch.

Confirm vDS Removal

Step 3: Remove the host from the cluster

Remove ESXi host from cluster

Step 4: Enter the host into maintenance mode and then choose to reboot.

Enter Maintenance Mode
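As a side note, if you already have an SSH session open, the host can also be put into maintenance mode from the ESXi shell rather than the GUI:

# Enter maintenance mode and confirm it took effect
esxcli system maintenanceMode set --enable true
esxcli system maintenanceMode get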
Step 5: Connect via SSH to the ESXi host and run the following commands to uninstall the FDM agent:

# Copy the HA (FDM) agent uninstall script to /tmp, make it executable, and run it
cp /opt/vmware/uninstallers/VMware-fdm-uninstall.sh /tmp
chmod +x /tmp/VMware-fdm-uninstall.sh
/tmp/VMware-fdm-uninstall.sh

SSH Host FDM Uninstaller
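Before rebooting, it’s worth confirming the agent is actually gone; the HA agent is delivered as a VIB named vmware-fdm, so a quick listing should come back empty:

# The vmware-fdm VIB should no longer appear once the uninstall has run
esxcli software vib list | grep -i fdm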
Step 6: Reboot the host

Reboot the host
Step 7: Add the ESXi host back to the cluster

rejoin host to cluster step 1

rejoin host to cluster step 2

rejoin host to cluster step 3

rejoin host to cluster step 4
Step 8: Re-add the host to the Distributed vSwitch. Go to Networking, select the distributed vSwitch, right-click and select Manage Hosts.

Manage vDS

Select the host

Select Host

Select the vmnics to use as uplinks managed by the switch

Manage vDS uplinks

Step 9: Turn vSphere HA back on for the cluster the host resides on.

Turn on vSphere HA

Step 10: Run the upgrade again from Update Manager and this time it will work.
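As an aside, if Update Manager still refuses to play ball, the same upgrade can usually be applied directly from the ESXi shell with an offline bundle. This is a different route than the one described above, and the datastore path and profile name here are placeholders for whatever bundle you’re using:

# List the image profiles contained in the offline bundle
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi-update-bundle.zip
# Apply the chosen profile (the host should already be in maintenance mode)
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-update-bundle.zip -p <profile-name>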


How To: Upgrade to ESXi 5.5 Update 3b on Cisco UCS

ESXi upgrade preparation

With Cisco UCS you really need to make sure that your ESXi hosts are running the correct driver versions. If you’re running NFS or FCoE storage into your ESXi hosts, either as datastores or RDM disks, then it’s critical that you have the right fnic and enic drivers. Even if you use the Cisco Custom image for ESXi upgrades, the enic and fnic drivers may not be correct according to the compatibility matrix. I’ve had this issue in the past: I saw intermittent NFS datastore outages on a dev ESXi host, and the resolution was to upgrade the enic driver, which handles Ethernet connectivity and therefore the NFS storage traffic.

The best place to go is VMware’s compatibility site for IO devices, which come under System/Servers. To find out which drivers you currently have, you will need to check the driver versions on the ESXi hosts. This can be done by following KB1027206. Using the values for the Vendor ID, Device ID, Sub-Vendor ID and Sub-Device ID, it’s possible to pinpoint the interoperability with your respective hardware. In my case I have both VIC 1340 and VIC 1240 cards in the mix, so I had to go through the process twice. Primarily you’ll be using the ‘ethtool -i’ command to find the driver version.

enic_driver_check_vmware_kb_steps
e.g. You can check the UCS VIC 1240 for FCoE CNAs on ESXi 5.5 Update 3 here

In this image you can see that the enic driver version I’m running, 2.1.2.71, doesn’t match the version that ships with the Cisco Custom ISO image. This shows that the enic driver will need to be upgraded as part of the process.

enic_driver_check_vmware
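For reference, the checks from KB1027206 boil down to a few shell commands on the host. A rough sketch below; vmnic0 is just an example, so run these against whichever adapters carry your storage traffic:

# Driver name and version for the adapter
ethtool -i vmnic0
# Vendor ID, Device ID, Sub-Vendor ID and Sub-Device ID for matching against the HCL
vmkchdev -l | grep vmnic0
# Installed enic/fnic driver VIBs
esxcli software vib list | grep -E 'enic|fnic'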



Fix: vCenter failure to upgrade – unable to configure log browser windows service

During a recent upgrade from vCenter Server 5.5 Update 2d to vCenter Server 5.5 Update 3b, the installer kept failing at the Web Client upgrade. After successfully upgrading Single Sign-On I proceeded with the upgrade of the vSphere Web Client and got the following error during the installation:

Error 29702 unable to configure log browser windows service please check vminst.log in system temporary folder for details

The update to 5.5 Update 3b caused the disk to fill up, which left the installation process unable to finish the upgrade. The SSO install worked, but the Web Client failed with error 29702. The primary issue was that over 40GB of space on the C: drive had been taken up by the SSO upgrade. I searched for fixes and found the following link, but before carrying out the task of removing the Java Components and re-installing them again I wanted to check the procedure with support.

The steps I followed to fix the issue were:

Step 1: Go to Control Panel, select VMware vCenter Server – Java Components and select uninstall

vmware java component unistall

Step 2: Click ok to confirm the uninstall

vmware java component unistall step 2

Step 3: Click Yes to confirm reboot

java component uninstall step 3

Step 4: Following the reboot you can begin the upgrade process once again, and this time it will succeed. Run the vCenter installer and from Custom Install select vCenter Single Sign-On. Click Next.

vcenter upgrade step 1

Step 5: Click Install

vcenter upgrade step 3

Step 6: The single sign-on components will begin to install, including components such as OpenSSL

vcenter upgrade step 3

One of the key components being installed is VMware JRE.

vcenter upgrade step 4 vmware JRE

Step 7: If you get prompted to close some applications select “Close the applications and attempt to restart them”. Click Ok.

vcenter upgrade step 5

Click ok to the prompt to close apps automatically

vcenter upgrade step 6

Step 8: Click Finish to complete the Single Sign-On upgrade

vcenter upgrade step 7

Step 9: Click on vCenter Web Client to begin the next stage of the upgrade

vmware upgrade step 8

Step 10: Click Yes to continue

vmware upgrade step 9

Step 11: Click Accept License agreement and click Next

vmware license agreement

Step 12: Click Install to begin the web client installation

vsphere web client install

Step 13: Click Finish to complete the installation

vsphere web client installation completion

Once you click Finish click Ok on the dialog to advise that the services will take a few minutes to restart

vsphere web client installation completion 1

Step 14: Select vCenter Inventory Service and click Install

vcenter inventory service upgrade step 1

Step 15: Click Yes for Inventory Service install

vcenter inventory service upgrade step 2

Step 16: Click Next to continue the installation process

vcenter inventory service upgrade step 3

Step 17: Click Accept License agreement and click Next

vcenter inventory service upgrade step 4

Step 18: Click Install for inventory service

vcenter inventory service upgrade step 5

Step 19: Click Finish on completion

vcenter inventory service upgrade step 6

Step 20: Install vCenter Server

vcenter server upgrade step 1

Step 21: Click Ok to continue

vcenter server upgrade step 2

Step 22: Click Next to continue

vcenter server upgrade step 3

Step 23: Click to accept the license and click Next

vcenter server upgrade step 4

Step 24: Enter the database user login credentials (in my case, VC_User)

vcenter server upgrade step 5

Step 25: Click Install at the Customer Experience Improvement Program screen

vcenter server upgrade step 6

Step 26: Click Finish to complete the installation

vcenter inventory service upgrade step 6


Fix: Cisco B200 M4 – FlexFlash – FFCH_Error_old_firmware_Running_error

During a recent upgrade of Cisco B200 M4 blades I got the following error:

FlexFlash FFCH_ERROR_OLD_FIRMWARE_RUNNING
flexflash-error

I really wasn’t sure what was causing the issue, but it turned out to be a known bug for M4 blades. More details can be found over on Cisco Bug Search (note: you’ll need a Cisco login to access the site). Basically, the issue affects B200 M4 blades upgraded to 2.2(4) or higher.

The workaround is actually quite easy and just requires a reset of the FlexFlash Controller. This can be done using the steps below:

Step 1: Select Equipment -> Chassis # -> Server # -> Inventory -> Storage -> Reset FlexFlash Controller

Flexflash-fix-steps

Step 2: Click Yes to reset the FlexFlash controller

reset-flexflash-controller

Step 3: Click Ok on reset notification

flexflash-controller-ok


Fix: Cisco UCS B200 M4 Activation Failed

During a recent upgrade I ran into a problem with the activation of a B200 M4 blade. This was following the infrastructure firmware upgrade; the next step was to upgrade the server firmware. However, before I upgraded the server firmware, the B200 M4 blades showed the following error:

Activation failed and Activate Status Set to Failed

This turned out to be due to the B200 M4 blades shipping with version 7.0 of the board controller firmware. On investigation with Cisco I found that it’s a known bug: CSCuu78484.

You can use the following commands to downgrade the board controller firmware. You can find more information on the Cisco forums, but the commands you need are below:

# scope server X/Y    (chassis X, blade Y)
# scope boardcontroller
# show image    (lists the available board controller versions)
# activate firmware <version>.0 force    (select a lower version than the current one)
# commit-buffer

What I found was that, since I was going to be upgrading the blade firmware version anyway, there was no point in dropping the server firmware back; instead I proceeded with the upgrade, which fixed the issue.

I spoke with TAC and they advised that the error could be ignored and I could proceed with the UCS upgrade. The full details of the upgrade can be found in another post.


How To: Cisco UCS Firmware Upgrade 2.2 to 3.1 with Auto-Install

Recently I had to upgrade our ESXi hosts from Update 2 to Update 3 due to security patch requirements. This requirement stretched across two separate physical environments: one running IBM blades and the other running Cisco UCS blade chassis in a FlexPod configuration. The two run on different vCenter platforms and have slightly different upgrade paths, as one is running VMware SRM and is in Linked Mode. I’m not going to discuss the IBM upgrades here, but I did need to upgrade the Infrastructure and Server firmware for Cisco UCSM.

Before you begin any upgrade process I highly recommend reading the release notes to make sure that a) an upgrade path exists from your current version, b) you’re aware of any known issues in the new version, and c) the features you want exist in the new version.

UCS Upgrade Prep Work

Check the UCS Release Guides

Check the release notes to make sure all the components and modules are supported. The release notes for UCS Manager can be found on Cisco’s site; the link is listed further below in the documents section.

Some of the things to check within the release notes are:
* Resolved Caveats

ucs-caveats-precheck

* UCS version upgrade path

ucs-infra-requirements-precheck

* UCS Infrastructure hardware compatibility

ucs-infra-requirements-precheck1

* Minimum software version for UCS blade servers

ucs-server-requirements-precheck1

Open a Pre-Emptive Support Call

I opened a call with Cisco TAC to investigate the discrepancy in the firmware versions. The advice was to downgrade the B200 M4 server firmware to 4.0(1). However, as I was planning on upgrading anyway, I confirmed that the best option was to go ahead with the planned 3.1 version. As part of this upgrade I will also upgrade all the ESXi hosts on that site the same day. There is a second UCS domain on another site that will be upgraded on a separate date.

ucs-pre-emptive-support-case
