As part of a recent data migration I had to enable a vfiler to allow iSCSI traffic, as a number of virtual machines in the environment require block storage for clustering reasons. The vfiler already presents NFS and CIFS. As this is a test environment I decided to put iSCSI on the same link as the NFS and CIFS traffic. I know this is not normal best practice, but given that the VLANs are already in place and that this is a test environment, I decided to use the same IP address range. The servers accessing the iSCSI LUNs don't have access to CIFS or to any NFS mounts, so there should be no traffic cross-over. So, on to the steps to set it up:

Step 1: Allow the iSCSI protocol and RSH on the vfiler (at vfiler0)

Check the status of the vfiler using the following command:

vfiler status -a tenant_vfiler
tenant_vfiler running
 ipspace: tenant_vfiler_NFS_CIFS
 IP address: 192.168.2.1 [a1a-107]
 IP address: 192.168.2.2 [a1a-107]
 Path: /vol/tenant_vfiler_vol0 [/etc]
 Path: /vol/nfs03
 Path: /vol/nfs04
 Path: /vol/nfs02
 Path: /vol/nfs01
 Path: /vol/cifs01
 Path: /vol/iso01
 Path: /vol/iscsi_test
 UUID: 93c62e36-4e76-11e4-8721-123478563412
 Protocols allowed: 5
 Allowed: proto=ssh
 Allowed: proto=nfs
 Allowed: proto=cifs
 Allowed: proto=ftp
 Allowed: proto=http
 Protocols disallowed: 2
 Disallowed: proto=rsh
 Disallowed: proto=iscsi

Next run the following commands to allow both protocols:

vfiler allow tenant_vfiler proto=iscsi
vfiler allow tenant_vfiler proto=rsh
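
To confirm the change took, re-running the status command should now show rsh and iscsi as Allowed rather than Disallowed:

vfiler status -a tenant_vfiler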

Step 2: Start the iSCSI protocol on the vfiler (at tenant_vfiler)

vfiler context tenant_vfiler
iscsi start
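
While still in the vfiler context it's worth checking that the service actually started and grabbing the vfiler's target IQN – you'll want it later when connecting from the server. Both are standard 7-Mode commands:

iscsi status
iscsi nodename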

Step 3: Create a new volume at vfiler0

vfiler context vfiler0
vol create iscsi_test <aggregate> 20g
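
The aggregate name above is a placeholder – substitute whichever aggregate should hold the volume. A quick check that the volume exists and is the size you expect before handing it over:

vol status iscsi_test
df -g /vol/iscsi_test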

Step 4: Migrate the volume to tenant_vfiler and log into the vfiler to check the volume status

vfiler add tenant_vfiler /vol/iscsi_test
vfiler context tenant_vfiler
vol status
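
If you'd rather not change context, the same check can be tunnelled through from vfiler0:

vfiler run tenant_vfiler vol status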

Step 5: Set priv advanced and modify the exports to the correct settings as below

To modify the exports, read the current /etc/exports with rdfile and write the updated file back (a wrfile example follows the listing below). Once done, run exportfs -av to push the changes out.

rdfile /vol/tenant_vfiler_vol0/etc/exports
/vol/nfs01 -sec=sys,rw=192.168.1.0/24,anon=0
/vol/nfs02 -sec=sys,rw=192.168.1.0/24,anon=0
/vol/nfs03 -sec=sys,rw=192.168.1.0/24,anon=0
/vol/nfs04 -sec=sys,rw=192.168.1.0/24,anon=0
/vol/iso01 -sec=sys,rw=192.168.1.0/24,anon=0
/vol/iscsi_test -sec=sys,rw=192.168.1.0/24,anon=0
vfiler run tenant_vfiler exportfs -av
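
For the edit itself, wrfile works from the console (this is why the step starts in priv advanced – rdfile and wrfile are advanced-mode commands). Something along these lines should append the new export line and push it out, assuming the paths from the listing above:

wrfile -a /vol/tenant_vfiler_vol0/etc/exports "/vol/iscsi_test -sec=sys,rw=192.168.1.0/24,anon=0"
vfiler run tenant_vfiler exportfs -av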

Step 6: Create a LUN in the volume (iscsi_test)

vfiler run tenant_vfiler lun create -s 10g -t windows_2008 /vol/iscsi_test/iscsi_lun

Step 7: Change to the vfiler context and run lun show

vfiler context tenant_vfiler
lun show

Step 8: Verify the iSCSI network within VMware has been assigned to the VM
Step 9: Enable the iSCSI Initiator on the server – grab the IQN

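On Windows 2008 the IQN is shown in the iSCSI Initiator properties dialog; if you prefer the command line, running iscsicli with no arguments should print the initiator's node name (the iqn.1991-05.com.microsoft:... string) before dropping into its interactive prompt. That string is what the igroup in the next step needs:

iscsicli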

Step 10: Create an igroup with the iqn of the server

igroup create -i -t windows ds_iscsi
igroup add ds_iscsi iqn.1991-05.com.microsoft:microsoft:server.domain.com
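
To confirm the initiator landed in the group:

igroup show ds_iscsi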

Step 11: Map the LUN to the igroup

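The command for this is the standard lun map, using the LUN from step 6 and the igroup from step 10; leaving the LUN ID off lets ONTAP pick the next free one:

lun map /vol/iscsi_test/iscsi_lun ds_iscsi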

Step 12: Run lun show -m to check the mapping

lun show -m

Step 13: Run a Quick Connect to the IP address of the controller

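Quick Connect in the Windows iSCSI Initiator does the discovery and login in one go against one of the vfiler's addresses (192.168.2.1 or .2 from the status output in step 1). If you'd rather script it, something like the following with iscsicli should be roughly equivalent – the target IQN placeholder being whatever iscsi nodename reported on the vfiler:

iscsicli QAddTargetPortal 192.168.2.1
iscsicli ListTargets
iscsicli QLoginTarget <target_iqn>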

And now your disk should appear in Disk Management on the server. It's not too different from setting up a normal iSCSI connection, but RSH must be enabled on the vfiler, otherwise the iSCSI requests can't be tunnelled through to the vfiler's IQN target.
