Earlier this week I was fortunate enough to be invited to an ACI Test Drive run by Firefly on behalf of Cisco. Recently I attended the Cisco Roadshow in Melbourne and was really interested in the speech by Dave Robbins around ACI. I’ve read quite a bit about ACI recently but wasn’t able to really picture in my head what it is, how it works and what benefits, if any, it can provide for me. Cisco have been really pushing ACI hard on the various media streams lately. There has also been quite a bit of discussion around the competition between Cisco with its ACI fabric and VMware’s NSX network virtualisation software. I’ve heard about NSX but haven’t had a chance as yet to play about with it. When the opportunity arose to join a test drive workshop on ACI it was too good to miss, so I jumped at the chance. My background is not in networking but in virtualisation, compute and storage, so I thought it would be a good opportunity to brush up on my networking skills at the same time. It’s definitely my weakest area, so I’ve made a commitment to myself to work on my networking knowledge and understanding as much as I can.

Cisco ACI Leaf-Spine architecture

What is ACI?

ACI is Cisco’s new vision for managing their data center networks into the future. ACI is an application-centric, software/policy-driven, leaf-spine architecture that abstracts the logical definition from the physical hardware to provide re-usable and extensible policies for quick deployment of network infrastructure. ACI extends the principles of Cisco UCS service profiles to the entire network fabric. The Nexus 9000 series released by Cisco earlier this year is at the core of the new ACI platform. There are a number of 9000 series switches in the family, which I’ll go into in more detail soon, but one item of note is that the Nexus 9000 switches can be run in standalone mode with NX-OS installed or (with two exceptions) in ACI mode, which requires the ACI ASIC to be installed in the switch. The line speed of the standalone version is phenomenal, and developers can run their own custom code on it. This opens up a great number of possibilities and puts it into direct competition with Arista, who already have this capability. The Nexus 9000 comes with a merchant Broadcom chip for standalone mode and adds the ACI chip for ACI fabric related operations.

What ACI is not

One thing that the ACI fabric is not is an orchestration tool. It provides RESTful APIs to allow APIC policies to be deployed across the fabric, but it does not provide the orchestration component itself. That was one of the misconceptions I had before the course which was quickly clarified. UCS Director or other orchestration tools such as Chef or Puppet can be used to orchestrate the policies. The APIC and the ACI fabric provide the automation, and UCS Director et al. provide the orchestration. In a way this is similar to NSX providing the management and vRealize Automation providing the orchestration from a VMware perspective.
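To make that distinction a bit more concrete, here’s a rough sketch (my own, not from the workshop) of the kind of REST call an orchestration tool would make against the APIC: authenticate, then push a policy object into the fabric. The APIC address, credentials and tenant name below are placeholders.

```python
# Minimal sketch: authenticating to the APIC REST API and pushing a policy
# object (a tenant). The hostname, credentials and tenant name are
# hypothetical; an orchestration tool like UCS Director, Chef or Puppet
# would drive the fabric through calls of this shape.
import requests

APIC = "https://apic.example.local"   # placeholder APIC address
session = requests.Session()

# Authenticate - the APIC returns a token cookie which the session reuses
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False)

# Create a tenant under the policy universe ("uni") in the object tree
tenant = {"fvTenant": {"attributes": {"name": "Demo-Tenant"}}}
resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant, verify=False)
print(resp.status_code)
```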

APICs – the core of the ACI Fabric!

At the heart of the ACI fabric are the APICs (Application Policy Infrastructure Controllers). These are physical Cisco C200 rack servers that connect either to a leaf switch or a fabric extender, and at least three are required for fail-over and performance.

ACI APIC leaf-spine Nexus 9000

 

The APIC is a policy-driven controller, similar in a way to UCS Manager within the compute layer. The policies provide a level of abstraction from the physical devices and allow for quick, repeatable and programmable deployment of application services within the network. The APICs hold the data for all the objects in the environment in a tree-structured database. APICs are not pass-through devices; they do not handle traffic directly but rather control the policies in the contracts that have been assigned to EPGs or even ports. Each contract that is required between EPGs (discussed more below) is stored in the database. APICs don’t sit in the data plane or even the control plane; the APIC is more of a management-plane device that directs a switch to enable capabilities on its own ASIC to carry out the policies. One concern I had: if a switch reboots, does it require the APICs to be online? No. The policies in the contracts have already been pushed out by the APIC and are stored in memory, so when the physical switch reboots it doesn’t require any further input from the APICs. In fact, if you have 3 APICs and they all go offline, the ACI fabric will continue to function.
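As a rough illustration of that tree-structured database, every object in the fabric can be read back over the same REST API, either by class or by its distinguished name in the tree. The sketch below reuses the authenticated session from the earlier example; the class names are real APIC classes, everything else is illustrative.

```python
# Minimal sketch (reusing the APIC URL and authenticated session from the
# earlier example): reading objects out of the APIC's tree-structured
# database by class name.
tenants = session.get(f"{APIC}/api/class/fvTenant.json", verify=False).json()
contracts = session.get(f"{APIC}/api/class/vzBrCP.json", verify=False).json()

# Each returned object carries its distinguished name (dn), i.e. its place
# in the tree, e.g. uni/tn-Demo-Tenant
for mo in tenants.get("imdata", []):
    print(mo["fvTenant"]["attributes"]["dn"])
```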

So what can a Nexus 9000 do?

Some of the key reasons to go for a Nexus 9000 are: Power, Price, Programmability, Performance and Ports. Everything within the ACI fabric runs on a spine-leaf framework, which means that every leaf is only ever two hops away from any other leaf. Any new leaves or spines that are added to the fabric are automatically discovered via LLDP. ECMP is used to balance traffic across the available paths. IS-IS is the routing protocol used within the Nexus 9000, and it can also be found in the Nexus 7000. VRFs are also used to allow multiple routing tables with overlapping IPs on the same switch. Within the ACI fabric, VXLAN is used to encapsulate layer 2 traffic so it can be carried across the fabric. A VTEP (VXLAN Tunnel End Point) in the hardware layer gathers packets coming from southbound traffic, strips the headers and adds the VXLAN header to route them through the internal fabric. Once at the destination leaf the VXLAN header is stripped and a new header is added to direct the packet to the destination. This means that it’s possible for physical servers to communicate with virtual servers across the ACI fabric, or for traffic to flow via VXLAN from a VMware host to a Hyper-V host running NVGRE. Another feature of the Nexus 9000 range in the ACI fabric is flowlets, which measure the response time of applications across the network. There is also a feature, turned off by default, that splits a packet into frames as it leaves the leaf and sprays these across the fabric to utilise all spines for faster processing, repackaging them at the destination leaf.
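To make the VXLAN piece a little more concrete, here is a purely illustrative sketch of the standard 8-byte VXLAN header (RFC 7348) that a VTEP prepends to the original Ethernet frame before wrapping it in UDP (port 4789) and IP for transport across the fabric. ACI actually uses an extended version of this header internally, so treat this as conceptual only.

```python
# Illustrative only: the standard 8-byte VXLAN header that a VTEP prepends
# to the original layer 2 frame; the result is then carried in UDP
# (destination port 4789) over the layer 3 fabric. ACI extends this header
# internally, so this is a conceptual sketch rather than ACI's exact format.
import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    flags = 0x08  # "I" bit set: a valid VXLAN Network Identifier is present
    # Layout: 8 bits flags, 24 bits reserved, 24-bit VNI, 8 bits reserved
    header = struct.pack("!B3x", flags) + struct.pack("!I", vni << 8)
    return header + inner_frame

# Wrap a dummy 64-byte frame in VNI 10100: 8-byte header + 64 bytes = 72
print(len(vxlan_encapsulate(b"\x00" * 64, vni=10100)))
```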

Cisco Nexus Series 

Some of the models in the Nexus 9000 family are:

  • Cisco Nexus 95xx – modular switches with 4, 8 or 16 slots (9504, 9508 and 9516)
  • Cisco Nexus X9700 line cards – for the ACI spine (contain the Broadcom merchant ASIC and the ASE (spine) chip)
  • Cisco Nexus X9500 line cards – for the ACI leaf (contain the Broadcom merchant ASIC and the ALE (leaf) chip)
  • Cisco Nexus 9300 – 1U fixed TOR and spine switches; the most common spine is the 36-port 9336
  • Cisco Nexus X9400 and X9600 line cards – not for ACI deployments; they are really only for high-performance traffic switching that can be programmed

Cisco Nexus 9000 model range

The modular 9000 series switches require line cards. The X9700 line cards can be used only for the ACI spine, but the X9500 line cards can be used for both ACI spine and leaf. My recommendation before dropping a PO at the feet of a reseller is to discuss your requirements and ensure you are getting the right kit at the outset.

How to control the Fabric

ACI requires a different way of thinking about traffic flows and how to manage your network. ACI is broken down into applications, flows, contracts and EPGs (End Point Groups). An EPG is a group of servers/VLANs of similar type or service requirements. For applications we need to know the data flow, the ports used by the applications and in which direction the traffic flows. The services that are required to run across those links, such as SSL offload, firewalls, IDS etc., also need to be known so that they can be factored into the contracts that take place between the EPGs. The contract defines how traffic and services are provided, such as inbound and outbound permits, denies, quality of service and service graphs. Within ACI all traffic is whitelisted, so no traffic flows are allowed unless explicitly specified. A contract contains a number of subjects, each of which is made up of a filter, an action and a label. There is no contract required for intra-EPG communication however.
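To give a feel for how those pieces hang together in the APIC object model, here is a hedged sketch of a filter permitting TCP/80 and a contract whose subject references it. The class names (vzFilter, vzEntry, vzBrCP, vzSubj, vzRsSubjFiltAtt) are the APIC’s object classes, but the tenant, contract and filter names are my own, and it again reuses the REST session from the earlier sketch.

```python
# Minimal sketch: a filter permitting TCP/80 and a contract whose subject
# references that filter. The tenant, filter and contract names are
# hypothetical; vzFilter/vzEntry/vzBrCP/vzSubj/vzRsSubjFiltAtt are APIC
# object model classes. Reuses the session/APIC variables from earlier.
http_filter = {
    "vzFilter": {
        "attributes": {"name": "allow-http"},
        "children": [
            {"vzEntry": {"attributes": {
                "name": "tcp-80", "etherT": "ip", "prot": "tcp",
                "dFromPort": "80", "dToPort": "80"}}},
        ],
    }
}

web_contract = {
    "vzBrCP": {
        "attributes": {"name": "web-to-app"},
        "children": [
            {"vzSubj": {
                "attributes": {"name": "http"},
                "children": [
                    {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "allow-http"}}},
                ],
            }},
        ],
    }
}

# Both objects live under a tenant in the object tree, e.g. uni/tn-Demo-Tenant
for obj in (http_filter, web_contract):
    session.post(f"{APIC}/api/mo/uni/tn-Demo-Tenant.json", json=obj, verify=False)
```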

Cassandra ACI integration

The best example of EPGs is a 3-tier web application, which has an external user accessing web content over port 80 from the EPG Web, which contains a number of web servers. There is a different contract between the EPG Web and EPG Apps to allow the web servers to communicate with the application servers. Likewise between the EPG Apps and EPG DB there is another contract to allow those servers to communicate. In most instances, particularly if the web front-end is heavily utilised, traffic from EPG Web to EPG Apps will need to run across a load balancer or possibly an IDS. If this is required, the load balancer and IDS can be packaged into a Service Graph which traffic can then be routed through. Once again this will require contracts between the EPGs and the Service Graph. The below diagram gives a clearer picture of what has been described above.

ACI EPG Management
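For what it’s worth, that three-tier wiring maps onto the object model as EPGs that provide or consume contracts. A hedged sketch, continuing the earlier examples (the names are made up; fvAp, fvAEPg, fvRsProv and fvRsCons are the APIC classes involved):

```python
# Minimal sketch: an application profile containing Web and App EPGs, where
# the App EPG provides the "web-to-app" contract and the Web EPG consumes
# it. Names are hypothetical; fvAp, fvAEPg, fvRsProv and fvRsCons are APIC
# classes. Continues the tenant and session from the earlier sketches.
app_profile = {
    "fvAp": {
        "attributes": {"name": "3-tier-app"},
        "children": [
            {"fvAEPg": {
                "attributes": {"name": "EPG-Web"},
                "children": [{"fvRsCons": {"attributes": {"tnVzBrCPName": "web-to-app"}}}],
            }},
            {"fvAEPg": {
                "attributes": {"name": "EPG-App"},
                "children": [{"fvRsProv": {"attributes": {"tnVzBrCPName": "web-to-app"}}}],
            }},
        ],
    }
}

session.post(f"{APIC}/api/mo/uni/tn-Demo-Tenant.json", json=app_profile, verify=False)
```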

There are some rules in relation to EPGs. Servers on the same subnet within the same EPG can communicate, servers in different subnets but the same EPG can communicate, and servers in different EPGs cannot communicate unless explicitly specified. This might need to be done to direct traffic via a bridging EPG or Service Graph. It should be noted also that both physical and virtual servers can be added to the same EPG. There is also a construct called a Bridge Domain, which can contain a number of EPGs and can be used to communicate with other Bridge Domains via VXLAN. To me at least, a Bridge Domain is a container that provides logical isolation between EPGs. Please don’t hold me to account on that definition; I’m sure it doesn’t appear in official documentation with that description. There are some more rules for EPGs in relation to Bridge Domains: an EPG can only sit in one Bridge Domain, each IP address can only sit in one EPG, and an EPG can have multiple contracts. A rough sketch of how that containment looks follows below.
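As a hedged illustration of that containment in the object model (fvBD is the Bridge Domain class and fvRsBd is the relation that places an EPG into it; the names are again my own):

```python
# Minimal sketch: a Bridge Domain and an EPG placed into it via the fvRsBd
# relation - each EPG points at exactly one Bridge Domain. Names are
# hypothetical; fvBD and fvRsBd are APIC classes. Continues earlier sketches.
bridge_domain = {"fvBD": {"attributes": {"name": "BD-Web"}}}

web_epg_in_bd = {
    "fvAEPg": {
        "attributes": {"name": "EPG-Web"},
        "children": [{"fvRsBd": {"attributes": {"tnFvBDName": "BD-Web"}}}],
    }
}

session.post(f"{APIC}/api/mo/uni/tn-Demo-Tenant.json", json=bridge_domain, verify=False)
session.post(f"{APIC}/api/mo/uni/tn-Demo-Tenant/ap-3-tier-app.json",
             json=web_epg_in_bd, verify=False)
```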

VMware Integration

Currently hypervisor integration for ACI is only available with VMware. When an EPG is defined, a port group on a distributed virtual switch is added to the relevant ESXi hosts. Any VMs that need access to the port group, no matter what VLAN or subnet they exist on, are added to the port group and the policies get applied from the APIC. When the AVS (Application Virtual Switch) is released during 2015 it will be interesting to see how it integrates into the virtual infrastructure and how it can potentially enhance the abilities of ACI.

ACI VMware Integration
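Under the hood, the port group appears because the EPG is attached to a VMM domain that represents the vCenter. Roughly, and continuing the hedged examples above (the domain name is a placeholder; fvRsDomAtt is the APIC relationship class that triggers the port group creation):

```python
# Minimal sketch: attaching the Web EPG to a VMware VMM domain so the APIC
# pushes a matching port group to the distributed virtual switch. The VMM
# domain name "vCenter-DC1" is hypothetical; fvRsDomAtt is the APIC
# relationship class. Continues the session and names from earlier sketches.
vmm_binding = {
    "fvRsDomAtt": {"attributes": {"tDn": "uni/vmmp-VMware/dom-vCenter-DC1"}}
}

session.post(
    f"{APIC}/api/mo/uni/tn-Demo-Tenant/ap-3-tier-app/epg-EPG-Web.json",
    json=vmm_binding,
    verify=False,
)
```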

Migration Path

The migration path to ACI is not an easy one. For application migration it is possible to create a “taboo” contract that opens any-any traffic to facilitate the migration of the app. It’s possible to create this “taboo” contract with logging so you can later lock down the destination ports. This would obviously be revisited afterwards, but it does allow for a quick migration method in case the apps team haven’t mapped out their application flows within the infrastructure. Before migrating anything into the ACI fabric it is recommended to work out all the application flows, ports and services for each application so that the relevant EPGs can be implemented without having to use the “taboo” contract. Let’s be honest, this is a massive undertaking. Most apps people have no idea how their apps and data interact with each other. I know myself that within our environment it would take forever to gather that data due to the wide range of apps supported and the general lack of knowledge from the apps team as to how the back end of their applications actually works. Cisco may have some tools internally that could help to map out the application flows, however these will most likely only be available to Advanced Services or Cisco partners. ACI is a whole different way of thinking about, and process for, how applications and networks are currently used. It’s recommended to at least do a PoC to ensure that the ACI fabric and APIC can meet your requirements. If it doesn’t, well, you still have the rest of the Nexus range and various topologies to choose from.

For the ACI fabric Cisco recommend using Nexus 9000 for the leaf and spine switches and, over time, replacing the Nexus 7000 and Nexus 5000 switches. One major problem from my perspective is that the Nexus 9000 doesn’t support OTV, which is a feature of the 7000 and a requirement for the NetApp Metrocluster that backs my Flexpod, and the FCoE protocol support on the Nexus 5000 is not available on the 9000 series switches. While this isn’t a deal breaker, it would be nice if the 9000 was at least on par with the 5000 on its feature-set.

Some other items to keep in mind

  • ACI currently supports up to 5 vCenters which will scale to 25 by the end of next year
  • Multi-tenancy with overlapping/duplicate VLANs is not supported
  • NSX only works on the Nexus 9000 in standalone mode, not in ACI mode
  • Clustering of APICs across sites is not supported (so a stretched VMware and storage cluster would require a separate set of APICs on each site, with export and import of APIC XML tree objects to ensure both sites have the same configuration)
  • PIM is not supported but is on the roadmap
  • AVS (Application Virtual Switch) does not provide layer 3 routing and is not due for release until mid-2015
  • Policy changes cannot be scheduled natively, but they can be scripted if you know your way around some programming tools
  • Support for vSphere 6.0 in mid-2015 – but long distance vMotion may cause the APICs problems, however this may be ironed out by the Insieme development team before then
  • These features are expected in Q1 2016 – multi-pod, multi-site, remote leaf and investment protection (AVS & Remote TOR)

Does ACI work for me?

The short answer is that right now ACI doesn’t work for my needs. The environment I manage is a Flexpod with Cisco Nexus 7000 & 5000, running on a NetApp Metrocluster shared across two sites on dark fibre. Metrocluster requires OTV to function correctly. We are running 7-mode, and while I haven’t checked the cDOT documentation I believe the same requirement exists. That rules out the Nexus 9000 as the cores, but they are still an option as a replacement for the Nexus 5000, although those were only purchased last year so a replacement is unlikely any time soon. The most likely scenario is that if expansion is required a Nexus 9000 could be a potential option.

Another reason why ACI, and in particular APIC, is not currently suitable is that there is no clustering of APICs across sites. In my scenario I have one VMM domain (vCenter) across two sites. If I vMotion a VM across to the second site, OTV is used to facilitate this. In order to make sure that the APIC controllers on the second site have the same tree structure in the database, it would require exporting tree objects from one database and importing them on the second site. Which, let’s face it, is not really a viable solution in the long term.

There are other reasons why ACI might not be an option, for now. One of the key aspects of ACI is application mapping so that the correct EPGs and Service Graphs can be created to allow swift deployment of networking infrastructure and to enable re-usability among the EPGs, contracts and objects. This is a massive task. I’m currently working through a similar task in my environment where I map out what applications exist on the servers, what ports and services they are using, who the application and business owners are, and what the application dependencies look like. It’s a back-burner task but it will be a significant help when ACI becomes a mature product and we are in a position to avail of it.

The migration plan is highly disruptive: new line cards with the ACI ASIC need to be added, which requires the entire fabric to be shut down, and if contracts are not set up correctly between EPGs then all traffic on the network will be blocked when the switches come back online. I’m sure Cisco or a partner would assist with this, or a PoC/test environment could even be used to pre-test the EPGs. It’s not a show stopper but it is something to factor in.

Final View

Cisco ACI is not software-defined networking à la NSX. It’s policy-driven network management abstracted from the physical layer via a software controller, which is something NSX is unable to do. I think that NSX and APIC are not direct competitors. They are playing the same game, but one is in the English Premiership and the other is playing in the Australian A-League. They even speak the same language, but they are miles apart.

Nexus 9000 with ACI and the APIC controllers is a wholesale change to networking infrastructure. I think ACI and APIC were released early by Cisco; they won’t really be production ready until Q1 2016. Despite all my caveats around how ACI doesn’t work for me and the current lack of features (roadmap features excluded here), I really do think that ACI and APIC are a fantastic leap by Cisco in the right direction. It was mentioned during the course that right now it’s possible to deploy VMs and applications in minutes, but networking configuration and deployment sucks up days if not weeks of configuration and implementation time. ACI will greatly reduce that time and mean businesses can be more agile in their application and infrastructure management and deployments. APIC is essentially going to be UCS for the networking world, and that isn’t a bad thing. Having used UCS exclusively for over a year now I can say it’s rock solid and just about bomb-proof. At the moment ACI and APIC are not as solid, but with the strong roadmap that Cisco has planned out I think that once it all matures and we start to head into 2016 ACI will really gain traction in the market. I suppose the main problem for Cisco is that in the meantime NSX is going to be going at it like a runaway train to capture as much of that market as it can. Cisco are putting their weight behind ACI & APIC as the way forward. So much so that they announced back in early September that Flexpod will not support NSX and there are no plans in the pipeline to allow Flexpod to run NSX, but it will run ACI. I would prefer to have the option of either rather than be locked to one possible solution, but by the time I need to look at doing an infrastructure refresh I’m sure the networking landscape will have changed. Only time will tell.

 
