post

Cisco Data Center User Group Melbourne First Meetup

Last night we hosted the first Cisco Data Center User Group meetup in Melbourne. It was a successful night with a great turnout and excellent interaction and networking between everyone who attended. Everyone was enthusiastic and willing to take part, which really made it a fantastic night.

The user group was formed with the intention of creating a space where IT professionals can come together in a relaxed environment to network, have a drink and learn about data center technology. We wanted an interactive and social atmosphere and, thanks to everyone who attended and took part, that's exactly what we achieved.

Cisco DCUG Melbourne Members photo

One of the things I liked most about the meetup was the attendance of people from other community groups: Craig Waters (@cswaters1) from the VMware VMUG community, Brett Johnson (@brettjohnson008) from the vBrownBag community, and one of the presenters, Will Robinson, from the NetAppATeam. The support from other communities is great and we really appreciate it.

The night itself began with an introduction from Derek Hennessy (@derekhennessy) and Chris Partsenidis (@cpartsenidis) on how the user group idea was formed. A shout out went to Lauren Friedman (@lauren) from Cisco for her help and support for getting the user group off the ground. We swiftly moved onto the first speaker of the night, Chris Gascoigne (@chrisgascoigne).

Introduction

Chris is a Technical Architect for Cisco ANZ with the Data Center team and has a focus on ACI, Nexus 9000, Automation/Orchestration and DevOps. Chris ran through a few slides on how network engineers can leverage tools such as Puppet, Ansible and Chef to implement the DevOps framework. He then ran through a demo of how to manage a Nexus 9000 switch from a bash shell and deploy Puppet configurations to a switch. Chris also emphasised the need to provide version control, code review and deployment into production. There were a number of questions from the audience as everyone tried to imagine using such tools within their own infrastructure environments. Unfortunately I don’t have a copy of Chris’ slidedeck to make available. A special mention goes out to Chris Partsenidis for performing the important task of being a microphone stand through Chris Gascoigne’s demo.
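I don't have Chris' deck, but Cisco publish an open-source ciscopuppet module that gives a flavour of the demo. A minimal manifest might look something like the sketch below; the VLAN ID and name are my own example, not taken from the presentation:

```puppet
# Declarative VLAN state on a Nexus 9000 via the ciscopuppet module.
# Applying this manifest creates (or corrects) VLAN 100 on the switch.
cisco_vlan { '100':
  ensure    => present,
  vlan_name => 'web_tier',
  shutdown  => false,
}
```

The appeal Chris described is exactly this: the VLAN becomes versionable text that can go through code review before it ever touches production.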

Following Chris' presentation we took a break to let everyone digest the content and the food, as well as order another drink before the next session. Will Robinson (@oznetnerd) is a Senior Engineer with a focus on networking and storage and a wealth of experience. Will also has a mighty home lab setup and he gave everyone a run-through of using GNS3 within it. He really hit home on rethinking the physical and logical implementations of networks and gave an example of a complex network he'd designed within GNS3. Everyone was really engaged in Will's presentation, and the Q&A afterwards was like a quick-fire buzzer round at a quiz. He even managed to jokingly make reference to a layer 8 issue for someone using GNS3.

GNS3 Connectivity

I've uploaded the slidedecks from the night, and in future we hope to capture the presentations on video and make them available as an archive after each event. All in all it was a great night and we believe we've now started to develop a new community. If you're interested in learning about technology, having a drink and some grub, and networking with other IT professionals, then we're really looking forward to seeing you at the next meetup on Tuesday July 5th.

P.S. Thanks to Chris for the photo of the attendees

post

Cisco Data Centre User Group – Melbourne

cisco next gen data center user group

Next month we’re starting a new user group for Cisco Data Center in Melbourne. This user group is being run by Cisco Champions, myself and Chris Partsenidis. Chris and I met up after the recent Cisco Live in Melbourne and got chatting about how there’s no real community around Cisco technology so we reached out to Lauren Friedman (@lauren). Lauren was super helpful and has supported the creation of Cisco Data Center User Group. This is something that Lauren is working on from a global perspective and we’re delighted to be laying the groundwork in Australia.

This user group is centered around Cisco Next-Generation Data Centers and is for anyone that uses Cisco technology or that of the extended ecosystem. Our meetup is a fantastic opportunity to get to know others in the community over some snacks and beers in a relaxed and social environment. While the group is supported by Cisco, don’t expect sales pitches. We’ll focus on enabling a local community for Cisco Data Center users to share experiences, network and to learn more about both technology and careers. We openly invite submissions for topics and presentations from any members.

Some of the topics we’re looking to cover in the coming months are:

  • Cisco HyperFlex
  • DevOps
  • Cisco Nexus Switching
  • Big Data Analytics
  • Data Center Storage
  • CCNA DC and beyond
  • Cisco ACI and Nexus 9000
  • Operations and Data Center Management
  • HomeLab setup
  • Exam Preparation and certification
  • Automation and Orchestration
  • We’re open to requests from the community for topics of interest

The user group will catch up on the first Tuesday of every month at The Crafty Squire at 127 Russell Street in the Melbourne CBD. We'll be located upstairs in Porter Place. Our first meeting will be on Tuesday June 7th and all meetings will run from 5:30 to 7:30PM.

Crafty Squire Porter Place

More details about the regular meetups can be found over at the Cisco Data Center User Group page on Meetup.com. The page will be updated regularly with meeting agendas and speakers. We look forward to seeing you there, so please don't be shy and come along to say hello. Welcome to the community.

post

vMotion to vNotions

vNotions Logo Large

Those that frequent the site regularly will have noticed quite a few changes recently. I've migrated the blog from wordpress.com to a hosted WordPress site, and the name has also changed from virtualnotions to vNotions. I wanted more control of the site and the ability to develop it into something else as it continues to grow. WordPress.com is excellent as a free resource but I wanted to be able to customise more.

I really wasn't sure what the best hosting solution would be as there are a number of options: managed, managed hosted, virtual private server (VPS) and also running WordPress in AWS. I turned to Twitter to see if anyone had any recommendations for hosting WordPress. The first reply came from Mike Andrews (@trekintech) and I have to thank him for the recommendation. I looked at a number of different providers and settled on DigitalOcean, the one Mike put forward. DigitalOcean have a strong community forum and supporting documentation, so it was very easy to get everything set up. Each VPS in DigitalOcean is called a droplet and it's very quick to deploy a new server instance.

I also stumbled across ServerPilot.io, which allows quick deployment of apps on DigitalOcean VPS instances. ServerPilot takes a lot of the hassle out of setting up new apps and, given that it also has a free option, it's very appealing. It deploys WordPress on Nginx, so it's considerably faster than a plain LAMP stack with Apache. For quick reference check out this guide for installing WordPress on Ubuntu and also this one on installing WordPress on DigitalOcean. There's also a good guide on setting up WordPress on DigitalOcean over at MyBloggingThing.

It was a straightforward process to set up a new instance of WordPress and migrate the content from the old wordpress.com site to the new vNotions.com site. Once the site was migrated and fully operational I enabled a CDN using CloudFlare to improve access speeds from dispersed geographical locations. All in all, it was a relatively painless process.
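For anyone curious what the Nginx side of such a setup looks like, a stripped-down server block for WordPress with PHP-FPM might resemble the following. The domain, web root and PHP socket path are placeholders, and ServerPilot generates its own (more complete) configuration:

```nginx
server {
    listen 80;
    server_name example.com;           # placeholder domain
    root /var/www/example/public;      # placeholder web root
    index index.php;

    # WordPress pretty permalinks: fall back to index.php
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # Hand PHP off to PHP-FPM over a local socket
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```

The `try_files` fallback is the piece that replaces Apache's `.htaccess` rewrite rules, which is most of why the Nginx stack feels lighter.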

Right now I’m tidying up the posts on the site to clear out any old posts that are no longer relevant. I’d like to thank Mike Andrews for his feedback that set the ball rolling. For anyone thinking of checking out DigitalOcean I’d definitely recommend jumping right in. The support team at DigitalOcean were also top class and replied very quickly to an issue I had (self-inflicted I might add). vNotions has vMotioned from VirtualNotions.

post

Cisco Live Session Review

I gave a recap of Cisco Live Melbourne in another post and had intended to provide a detailed look at each of the sessions I attended as part of it, but that became a bit long-winded, so I've broken the sessions out into separate posts, one per day.

cisco_live_mel_image

Day 1:

TECCOM-2001 – Cisco Unified Computing System

As someone working towards the CCNA and CCNP in Cisco Data Center, this extra technical seminar was invaluable and opened my eyes to a lot of areas that were unknown to me. It was an 8-hour, full-on overview of Cisco UCS, the components that comprise the solution and how it all works together. It wasn't a deep-dive session, however, so if you have a really good working knowledge of UCS and know what's under the covers then this session wouldn't really be for you. That said, I think there are always opportunities to learn something new.

Cisco-UCS-b-series-overview

The session was broken down into 6 parts.

  • UCS Overview
  • Networking
  • Storage Best Practices
  • UCS Operational Best Practices
  • UCS does Security Admin
  • UCS Performance Manager

Some of the main takeaways from the session were around the recent Gen 3 releases of UCS hardware, including the Fabric Interconnects and IOMs, along with the new features in the UCS Manager 3.1 code release. Some of the new features of UCSM and the hardware are listed below:

UCS Manager 3.1

  • Single code base (covers UCS mini, M-Series and UCS traditional)
  • HTML 5 GUI
  • End-to-end 40GbE and 16Gb FC with 3rd Gen FIs
  • M series cartridges with Intel Xeon E3 v4 Processors
  • UCS mini support for Second Chassis
  • New nVidia M6 and M60 GPUs
  • New PCIe Base Storage Accelerators

UCS Management Portfolio

Next Gen Fabric Interconnects:

FI6332:

  • 32 x 40GbE QSFP+
  • 2.56Tbps switching performance
  • 1RU & 4 fans

FI6332-16UP:

  • 24x40GbE QSFP+ & 16xUP Ports (1/10GbE or 4/8/16Gb FC)
  • 2.43Tbps switching performance

IOM 2304:

  • 8 x 40GbE server links & 4 x 40GbE QSFP+ uplinks
  • 960Gbps switching performance
  • Modular IOM for UCS 5108

Two other notes from this section of the technical session were that the FI6300 series requires UCS Manager 3.1(1) and that the M-Series is not yet supported on the FI6300s. There was also an overview of the UCS Mini upgrades, the Cloud Scale and Composable Infrastructure (Cisco C3260) and the M-Series. I've had no experience with the M-Series modular systems before and I need to do far more reading to understand them better.

The second part of the session covered MAC pinning and the differences between the IOMs and Mezz cards (for those that don't know, the IOMs are pass-through and the Mezz are PCIe cards). One aspect they covered which I hadn't heard about before was UDLD (Uni-Directional Link Detection), which monitors the physical connectivity of cables. UDLD is point-to-point and uses echoes from the FIs out to neighbouring switches to check availability. It's complementary to Spanning Tree and is also faster at link detection. UDLD can be set in two modes, default and aggressive. In default mode UDLD will notify and let Spanning Tree manage pulling the link down, while in aggressive mode UDLD will bring the link down itself.
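As a rough NX-OS sketch of the two modes described above (command names from memory, interface numbers are examples):

```
feature udld

! Default mode: UDLD reports the fault and lets Spanning Tree act
interface Ethernet1/1
  udld enable

! Aggressive mode: UDLD itself err-disables the link on failure
interface Ethernet1/2
  udld aggressive
```

Treat this as illustrative only; exact syntax varies by platform and release.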

The Storage Best Practices section looked at the two modes the FIs can be configured in and the capabilities of each setting. If you're familiar with UCS then there's a fair chance you'll know this already. The focus was on FC protocol access via the FIs and how the switching mode changes how the FIs handle traffic.

FC End-Host Mode (NPV mode):

  • Switch sees FI as server with loads of HBAs attached
  • Connects FI to northbound NPIV enabled FC switch (Cisco/Brocade)
  • FCIDs distributed from northbound switch
  • DomainIDs, FC switching, FC zoning responsibilities are on northbound switch

FC Switching Mode:

  • Connects to the northbound FC switch as a normal FC switch (Cisco only)
  • DomainIDs, FC Switching, FCNS handled locally
  • UCS Direct connect storage enabled
  • UCS local zoning feature possible

The session also touched on how the storage-heavy C3260 can be connected to the FIs as an appliance port. It's also possible via UCSM to create LUN policies for external/local storage access, which can be used to carve up the storage pool of the C3260 into usable storage. One thing I didn't know was that a LUN needs an ID of 0 or 1 for boot from SAN to work. It just won't work otherwise. Top tip right there. During the storage section there was some talk about Cisco's new HyperFlex platform, but most of the details were being withheld until the breakout session on Hyper-Converged Infrastructure later in the week.
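To make the boot-from-SAN tip concrete, here's what it looks like on a NetApp 7-Mode array; the volume, igroup and WWPN are invented for illustration, and the key detail is the explicit LUN ID of 0 on the map:

```
# Create a boot LUN and an FCP igroup for the server (example names/WWPN)
lun create -s 50g -t vmware /vol/esx_boot/esx01_boot
igroup create -f -t vmware esx01_fcp 50:0a:09:81:00:00:00:01

# Map with LUN ID 0 explicitly - boot from SAN needs ID 0 (or 1)
lun map /vol/esx_boot/esx01_boot esx01_fcp 0
```

If the array auto-assigns a higher LUN ID, the server's HBA simply won't find a boot device.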

The UCS Operational Best Practices section primarily covered how UCS objects are structured and how they play a part in pools and policies. For those already familiar with UCS there was nothing new here. However, one small tidbit I walked away with was around pool exhaustion and how UCS recursively looks up to the parent organisation until root, and even up to the global level if UCS Central is deployed or linked. One other note I took about sub-organisations was that they can go to a maximum of 5 levels deep. Most of the valuable information from this session was around the enhancements in the latest UCSM updates, broken down into improvements in firmware upgrade procedures, maintenance policies and monitoring. Most of these enhancements are listed here:

Firmware upgrade improvements:

  • Baseline policy for upgrade checks – it checks everything is OK after upgrade
  • Fabric evacuation – can be used to test fabric fail-over
  • Server firmware auto-sync
  • Fault suppression (great for upgrades)
  • Fabric High Availability checks
  • Automatic UCSM Backup during AutoInstall

Maintenance:

  • On Next boot policy added
  • Per Fabric Chassis acknowledge
  • Reset IOM to Fabric default
  • UCSM adapter redundant groups
  • Smart call home enhancements

Monitoring:

  • UCS Health Monitoring
  • I2C statistics and improvements
  • UCSM policy to monitor – FI/IOM
  • Locator LED for disks
  • DIMM blacklisting and error reporting (this is a great feature and will help immensely with troubleshooting)

Fabric evacuation can be used to test fabric fail-over before a firmware upgrade, to ensure NIC bonding works correctly and ESXi hosts fail over correctly to the second vNIC. There's also a new Health tab beside the FSM tab in UCSM.
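The recursive pool lookup from the operational best practices section can be sketched in a few lines of Python. This is my own illustration of the behaviour, not UCSM code; the class and method names are invented:

```python
# Sketch of UCSM-style pool resolution: when a sub-organisation's own
# pool is exhausted, walk up the parent chain towards root and take the
# first organisation that still has a free ID.

class Org:
    def __init__(self, name, pool=None, parent=None):
        self.name = name
        self.pool = list(pool or [])  # free IDs held at this org level
        self.parent = parent          # None for the root organisation

def allocate(org):
    """Return (org_name, id) from the nearest ancestor with a free ID."""
    node = org
    while node is not None:
        if node.pool:
            return node.name, node.pool.pop(0)
        node = node.parent  # recurse upwards towards root
    raise RuntimeError("pool exhausted all the way up to root")

root = Org("root", pool=["mac-0A"])
child = Org("prod", pool=[], parent=root)  # own pool is empty
print(allocate(child))  # → ('root', 'mac-0A'): falls back to root's pool
```

In real UCSM the chain can be up to 5 sub-organisation levels deep, and with UCS Central linked the search can continue past root to the global pool.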

The last two sections of the session, I have to admit, were not really for me. I don't know whether it was because it was late in the day, my mind was elsewhere or I was just generally tired, but I couldn't focus. The sections on security within UCSM and UCS Performance Manager may well have been interesting on another day but they just didn't do anything for me. The information was somewhat basic and I felt the UCS Performance Manager section was really more of a technical sales pitch. The session would have been better served by looking at higher-level, over-arching management tools such as UCS Director rather than a monitoring tool that the vast majority of people are not going to use anyway.

Overall though, this entire technical session was a great learning experience. The presenters were very approachable and I took the opportunity to quiz Chris Dunk in particular about the HyperFlex solution. While I may not attend another UCS technical session in future, I would definitely consider stumping up the extra cash for other technical seminars more relevant to me by then. There are a lot of options available.

After the sessions were completed I headed down to the World of Solutions opening and wandered around for a bit. As I entered I was offered an array of free drinks. Under other circumstances I would have jumped at the chance, but I'm currently on a 1-year alcohol sabbatical, so I instead floated around the food stand that had the fresh oysters. The World of Solutions was pumping. I didn't really get into any deep conversations but I did take note of which vendors were present and who I wanted to interrogate later in the week. I left well before the end of the reception so I could get home early. The next day was planned to be a big day anyway.

 


post

Cisco Live Recap

cisco_live_mel_image

Last week I had the opportunity to attend Cisco Live in Melbourne and it was awesome. This is the second year I've attended Cisco Live, but this year I was there as an Attendee so I had access to the breakout sessions. Previously I only had an Explorer Plus pass, which was good for keynote access, partner theatre sessions and the World of Solutions. While that was a fun experience, access to the breakout sessions was what I really wanted, and they didn't disappoint. I'm privileged in that my ticket to Cisco Live was covered by my employer, who sees the value in such events, and we were also able to leverage Cisco Learning Credits. If you wish to attend and have these credits available to you, this is a great return on investment and one I'd recommend over a regular 5-day training course.

This year Cisco Live was once again held at the Melbourne Convention Centre, a brilliant facility with a great layout that is large enough to cater for the ever-growing number of attendees and easy to access via public transport. The breakout sessions are full on, and a number of people had mentioned beforehand that going to Cisco Live is like drinking from a firehose. They weren't wrong. Cisco tees up the sessions and you try to cram as much as you can into your grey matter. I also chose to sign up for an extra-day technical seminar, an 8-hour session on Cisco UCS. There were a number of streams to choose from but my focus is on UCS. This was an added extra on top of the regular attendee ticket. During the remainder of the week I tried to cram in as many other breakout sessions as I could, catch a few of the partner sessions and have some downtime to network a bit.

clmel-convention-center


post

Pure Storage – FlashBlade

Pure Storage have just run their first community event, Pure //Accelerate, and by any measure it can be classed as a success. Pure made a number of announcements about new products in their portfolio, but the one that caused the most excitement was the new FlashBlade product. More on that in just a moment. Pure also announced a new baby flash array, the //M10, bringing flash storage to the masses at under $50k, and gave more details about their collaboration with Cisco on the UCS-based FlashStack. Either of these announcements by itself would be substantial enough for most storage vendors, but Pure stepped it up a notch with the FlashBlade.

Pure FlashBlade

 

I'm not going to go into the components of the FlashBlade; more information on that can be found in the Pure press release, a run-down by Alex Galbraith or Enrico Signoretti's blog post. Like just about every other storage geek out there, I got extremely excited when I heard the announcement, and imagining the use-cases for the FlashBlade pretty much made my head explode. After a bit of time letting it sink in over a coffee, and digesting the cost per GB on the FlashBlade, the scalability and the form factor, I decided that I hadn't over-reacted in my excitement.

post

How To: End-to-End SnapProtect Storage Policy Creation

The example I'm going to give here is for an environment that is already configured, but with storage controllers that are not yet set up as NAS iDataAgents for backup and have no volumes configured on them. In this environment the controller I'm enabling backups on is the secondary storage tier, which is already a SnapVault destination, so it's in the Array Manager for SnapProtect. This environment requires new NetApp aggregates to be added as resource pools, as new volumes and data have been assigned to those aggregates. The process involves working with SnapProtect, NetApp Management Console (DFM) and the NetApp storage controllers.

Enable backups on a controller within SnapProtect:

Enable Accounts

Before you begin, log onto the controller and ensure the login account you require has access to it. To do this, open an SSH session to the controller and use the following command:

#useradmin user list

If the account used by SnapProtect doesn't exist then you'll need to add it as an administrator. Below are screenshots from a controller where it has been added and one where it hasn't.

SnapProtect End to End Step 1

SnapProtect End to End Step 2

You can add the account via the command line or via System Manager. In this instance I added the account via System Manager: go to Configuration -> Local Users and Groups -> Users, click Create, enter the required details and click Create.

SnapProtect End to End Step 3

Once completed you can re-run the command from the CLI to ensure that the account appears.
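For reference, the equivalent 7-Mode CLI would look something like this; the account name is an example, not the one used in this environment:

```
# Create a local admin account for the SnapProtect service (example name);
# the controller prompts for a password interactively
useradmin user add snapprotect_svc -g Administrators

# Verify the new account now appears in the list
useradmin user list
```

Either route ends in the same place: the account shows up in `useradmin user list` as a member of the Administrators group.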

post

How To: Remove a volume from NAS backup in Simpana 10

I had an issue recently where I needed to remove a volume from a NAS client subclient so that I could move it elsewhere in our environment. Due to limitations in Simpana 10/SnapProtect v10 it's not possible to select the volume and delete the related snapshots to clean up the subclient; the backup jobs need to be deleted to remove the snapshots. As there are multiple volumes within the same subclient, deleting the jobs would delete the snapshots for all volumes, not just the one I needed to clean up. Given this, there is only one option: break the relationships in DFM and clean up the volume manually.

Step 1: Go to the OnCommand server and run dfpm dataset list

C:\Users\derek>dfpm dataset list

Id         Name                        Protection Policy           Provisioning Policy Application Policy          Storage Service


2927 CC-SnapProtect-XX_SC-45 SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-66

3072 CC-SnapProtect-XX_SC-46 SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-66

2935 CC-SnapProtect-XX_SC-39 SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-70

7011 CC-SnapProtect-XX_SC-42 SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-70

41601 CC-SnapProtect-XX_SC-72 SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-70

2811 CC-SnapProtect-XX_SC-7  SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-74

20149 CC-SnapProtect-XX_SC-68 SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-102

25497 CC-SnapProtect-XX_SC-83 SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-110

26575 CC-SnapProtect-XX_SC-102 SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-124

26094 CC-SnapProtect-XX_SC-98 SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-124

31037 CC-SnapProtect-XX_SC-122 SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-128

31650 CC-SnapProtect-XX_SC-41 SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-130

31654 CC-SnapProtect-XX_SC-72 SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-130

42833 CC-SnapProtect-XX_SC-59 SnapProtect Mirror, then back up                                                 CC-SnapProtect-XX_Copy-136

45647 CC-SnapProtect-XX_SC-128 SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-145

46010 CC-SnapProtect-XX_SC-127 SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-147

46084 CC-SnapProtect-XX_SC-43 SnapProtect Back up, then mirror                                                 CC-SnapProtect-XX_Copy-151

59566 CC-SnapProtect-XX_SC-4  SnapProtect Back up                                                         CC-SnapProtect-XX_Copy-159

 

Step 2: Pick the dataset you want to manage and display its relationships in DFM using the dfpm dataset list -m command

C:\Users\derek>dfpm dataset list -m 42833

Id         Node Name            Dataset Id Dataset Name         Member Type                                        Name

43325 Primary data              42833 CC-SnapProtect-XX_SC-59 qtree                                              primary_controller:/users/-

2244 Primary data              42833 CC-SnapProtect-XX_SC-59 qtree                                             primary_controller:/backup/-

3165 Primary data              42833 CC-SnapProtect-XX_SC-59 qtree                                             primary_controller:/ctx/-

42965 Mirror                    42833 CC-SnapProtect-XX_SC-59 volume                                             local_aux_controller:/SP_ctx

51886 Mirror                    42833 CC-SnapProtect-XX_SC-59 volume                                             local_aux_controller:/SP_users

42968 Mirror                    42833 CC-SnapProtect-XX_SC-59 volume                                             local_aux_controller:/SP_backup

52126 Backup                    42833 CC-SnapProtect-XX_SC-59 volume                                             remote_aux_controller:/SP_users

43084 Backup                    42833 CC-SnapProtect-XX_SC-59 volume                                             remote_aux_controller:/SP_ctx

43087 Backup                    42833 CC-SnapProtect-XX_SC-59 volume                                             remote_aux_controller:/SP_backup

 

Step 3: DFM Backup and CommServe DB Backup

dfm backup create backup_file_name

Run a DR backup in SnapProtect/Commvault

You can also capture a snapshot in vCenter, just to be sure.

 

Step 4: Run dfpm dataset relinquish <id of secondary resource you want to remove> to break the relationships in DFM.

dfpm dataset relinquish 43087

dfpm dataset relinquish 42968

 

Step 5: Then, edit the dataset and remove the secondary resource if it is still part of the dataset via the Management Console

 

Step 6: In SnapProtect remove the volume from the subclient so that it’s no longer backed up.

And that's that. You can now use the volume in other storage policies without it impacting the existing configuration.

post

How to: Present iSCSI storage from a NetApp vfiler (7-Mode)

As part of a recent data migration I had to enable a vfiler to allow iSCSI traffic, as a number of virtual machines in the environment require block storage for clustering reasons. The vfiler already presents NFS and CIFS. As this is a test environment I decided to put iSCSI on the same link as the NFS and CIFS traffic. I know this is not normal best practice, but given that the VLANs are already in place and that this is a test environment, I decided to use the same IP address range. The servers accessing the iSCSI LUNs don't have access to CIFS or to any NFS mounts already, so there should be no traffic cross-over. So, onto the steps to set it up:

Step 1: Allow iscsi protocol and RSH on vfiler (at vfiler0)

Check the status of the vfiler using the command

vfiler status -a tenant_vfiler
tenant_vfiler running
 ipspace: tenant_vfiler_NFS_CIFS
 IP address: 192.168.2.1 [a1a-107]
 IP address: 192.168.2.2 [a1a-107]
 Path: /vol/tenant_vfiler_vol0 [/etc]
 Path: /vol/nfs03
 Path: /vol/nfs04
 Path: /vol/nfs02
 Path: /vol/nfs01
 Path: /vol/cifs01
 Path: /vol/iso01
 Path: /vol/iscsi_test
 UUID: 93c62e36-4e76-11e4-8721-123478563412
 Protocols allowed: 7
Disallowed: proto=rsh
 Allowed: proto=ssh
 Allowed: proto=nfs
 Allowed: proto=cifs
Disallowed: proto=iscsi
 Allowed: proto=ftp
 Allowed: proto=http
 Protocols disallowed: 2

Next run the command:

vfiler allow tenant_vfiler proto=iscsi
vfiler allow tenant_vfiler proto=rsh

Step 2: Start the iSCSI protocol on the vfiler (at tenant_vfiler)

vfiler context tenant_vfiler
iscsi start

Step 3: Create a new volume at vfiler0

vfiler context vfiler0
vol create iscsi_test -s 20g

Step 4: Add the volume to tenant_vfiler and log into the vfiler to check the volume status

vfiler add tenant_vfiler /vol/iscsi_test
vfiler context tenant_vfiler
vol status

Step 5: Set priv advanced and modify the exports to the correct settings as below

To modify the exports, read the current exports file and write it back. Once done, run the exportfs -av command to push the changes out.

rdfile /vol/tenant_vfiler_vol0/etc/exports
/vol/nfs01 -sec=sys,rw=192.168.1.0/24,anon=0
/vol/nfs02 -sec=sys,rw=192.168.1.0/24,anon=0
/vol/nfs03 -sec=sys,rw=192.168.1.0/24,anon=0
/vol/nfs04 -sec=sys,rw=192.168.1.0/24,anon=0
/vol/iso01 -sec=sys,rw=192.168.1.0/24,anon=0
/vol/iscsi_test -sec=sys,rw=192.168.1.0/24,anon=0
vfiler run tenant_vfiler exportfs -av
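Since 7-Mode has no onboard editor, "read it and write it back" in practice means rdfile plus wrfile. A quick sketch (note that wrfile replaces the whole file, so paste the complete edited contents before ending input with Ctrl-C):

```
# Dump the current exports so you can copy and edit them
rdfile /vol/tenant_vfiler_vol0/etc/exports

# Overwrite the file: paste the full edited exports, then press Ctrl-C
wrfile /vol/tenant_vfiler_vol0/etc/exports

# Push the new exports out
vfiler run tenant_vfiler exportfs -av
```

If you only need to append a single line, `wrfile -a <file> <line>` avoids retyping the whole file.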

Step 6: Create a lun from the volume (iscsi_test)

vfiler run tenant_vfiler lun create -s 10g -t windows_2008 /vol/iscsi_test/iscsi_lun

Step 7: Change to the vfiler context and run lun show

lun_show

Step 8: Verify iSCSI network within VMware has been assigned to the VM
iSCSI network
Step 9: Enable the iSCSI initiator on the server and grab the IQN

iSCSI initiator iqn

Step 10: Create an igroup with the IQN of the server

igroup create -i -t windows ds_iscsi
igroup add ds_iscsi iqn.1991-05.com.microsoft:microsoft:server.domain.com

Step 11: Map the LUN to the igroup

map_lun_to_group

Step 12: Run lun show -m to check the mapping

lun_show_mapping

Step 13: Run a quick connect to the IP address of the controller

iscsi_quick_connect

And now your disk should appear in Disk Manager on the server. It's not too different from setting up a normal iSCSI connection, but RSH must be enabled, otherwise it can't tunnel the iSCSI request to the vfiler IQN target.

post

Cisco HyperFlex – Welcome to the HCI Party!

Cisco has finally decided to bring the vodka to spike the punch at the Hyper-Converged Infrastructure party, and it tastes pretty damn good. There have been rumours for a while now that Cisco was working with Springpath, and as a major third-round investor it's not surprising to hear about their entrance into the HCI arena. The Register's Chris Mellor reported on something bubbling up at Springpath back in early December. So what is the offspring of Cisco and Springpath called? Cisco HyperFlex!

hyperflex systems

The Play:

Hyper-converged systems so far have delivered on simplicity and scale, but there's been a massive gap: the lack of network integration in existing solutions. Yes, you can use fast top-of-rack switches, and in some cases customers use Cumulus on whitebox top-of-rack switches for software-defined networking, but networking is not a built-in feature of the two leading hyper-converged solutions, Nutanix and SimpliVity.

HyperFlex joins the comprehensive data center portfolio alongside UCS, MDS and Nexus. It means Cisco now has a play in traditional component-based infrastructure, converged infrastructure and now hyper-converged infrastructure, adding another string to its software-defined infrastructure bow. It will now have:

  • UCS – compute (service profiles, APIs etc.)
  • ACI – for software defined network
  • HyperFlex – software defined storage, compute and network

hyperflex systems overview

On initial release Cisco HyperFlex will support file storage and VMware. There are a number of other storage types, such as block and object, and other hypervisors on the roadmap. There's also going to be container support. Given that Springpath was hypervisor agnostic, I'd expect a quick ramp-up from Cisco and a fast feature-release cycle.

The Potential:

Like pretty much every other hyper-converged solution Cisco sees its expected use-cases to be:

  • VDI
  • Server virtualisation
  • Test and development
  • Large remote branch offices

UCS Manager is already familiar to many thousands of customers worldwide, and the server and network deployment settings in HyperFlex come from pre-configured Service Profiles, which are well and truly familiar to anyone that has worked with Cisco UCS. Given that customer base and the familiarity with existing management tools, there's massive potential for Cisco HyperFlex. There are some well-developed incumbents in the hyper-converged market, with Nutanix leading the way, and HyperFlex will allow Cisco to gain a foothold in that rapidly growing market.

The Deep-dive: