SUMMARY - Configuring StorageTek 2510 iSCSI for best ZFS

From: Chris Hoogendyk <hoogendyk_at_bio.umass.edu>
Date: Tue Sep 22 2009 - 13:11:19 EDT
Hmm. I thought I owed a summary, but I can't seem to find that I ever 
posted the question. I guess I just hammered my head against the wall 
for enough days that I was sure I must have posted one. Anyway, I think 
it's worth posting a summary, because there were missing links in the 
documentation, and, finally, a critical blog I found helped me figure 
out what to do.

It's hard to do a useful micro-summary, but basically: using only the 
command-line interface of Common Array Manager (CAM), I was able to 
configure the 2510 so that each drive is exported as its own LUN. Then 
I could assign drives to individual hosts (I have two connected 
directly to the array), see them in `format`, and use zpool to group 
them into a raidz.


-- *Physical setup* --

This configuration is a StorageTek 2510 with SAS drives and two T5220s. 
The 2510 has one controller, and each T5220 is connected directly to one 
of the iSCSI ports on that controller. The controller on the 2510 has a 
management port and two data ports. There are stickers on the back near 
the ports with the MAC addresses printed on them.


-- *Management port* --

Originally, I wanted command-line-only management directly over the 
data lines (in-band management). Since I was having trouble figuring 
out how to set this thing up at all, I decided to just go ahead and 
expose the management port on the 2510. I set up the MAC address of the 
management port in our DHCP server, plugged the management port into a 
switch on our private network, and tried to ping it. No ping, and no 
evidence from the DHCP server that it had requested or been given an 
address. Hmm. Turns out it uses bootp. So, I added an appropriate entry 
in /etc/bootptab on the server that handles that and power cycled the 
2510. Ping. Then I scanned it with nmap, which failed to identify the 
device and said all ports were closed. OK. Cool. On to the next step.
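
For reference, the bootptab entry looked something like this (the 
hostname, hardware address, and IP shown here are placeholders, not the 
real values):

```
# hypothetical /etc/bootptab entry -- substitute the MAC from the
# sticker on the back of the 2510 and your own management-network IP
edsel:ht=ether:ha=00144F123456:ip=172.xx.xx.xx:sm=255.255.255.0
```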


-- *Install CAM* --

Now I needed to install CAM, and I thought I should look for the 
latest. Turns out 6.2 and more recent releases say they need a firmware 
upgrade of the 2510 before they can talk to it, and the software for 
that firmware upgrade seemed to be GUI only. My server has no GUI. 
Really. Minimal network install, with whatever packages I needed beyond 
that installed individually. No GUI. So, CAM 6.1.1.10 seemed to be the 
most recent release before 6.2. I downloaded that and checked the 
readme. It had a command-line-only install option: `./RunMe.bin -c`. I 
did that and selected a custom install, command line only, with 
firmware. Somewhat oddly, it claimed it was installing Java 2 Standard 
Edition.

There were some differences between where the documentation said I 
would find the software and where I actually found it. The critical 
application is `sscs`, found in /opt/SUNWstkcam/bin/. Since I was on 
the machine that was to manage and access the array, I didn't need to 
do the `sscs login ....`, and I could get basic information on the fly 
from `sscs --help`.


-- *Documentation* --

Ultimately, I found three sources that gave me what I needed.

(1) The Sun StorageTek 2500 Series Array Installation Guide gave me the 
overview of how to set up the array: in particular, the section on 
Configuring iSCSI Tasks, Table A-2 (iSCSI Configuration Steps), and the 
section with the example for Solaris. It fell short in that, at a 
crucial step, it simply says, "Follow CAM Documentation to . . . ." 
But the CAM documentation is very cursory on iSCSI, assuming, I guess, 
that you will get it from the GUI. There is a table listing iSCSI 
commands with no explanation, overview, or examples of how to use them.

(2) The man page for sscs was a key reference. However, it is 123 pages 
long and doesn't really give you an overview of what you need to do or 
how the different subcommands interact, and it's a bit hard to go 
scrolling back and forth through it. So, I printed it to a PDF. Then I 
could search the PDF for "NAME", the section title preceding every 
command, which gave me a sort of table of contents I could scan 
through. I still encountered at least one command whose actual syntax 
turned out to be very different from what the man page listed.

(3) I found a blog that gave me some critical conceptual overview and 
details for what I was trying to do. That is referenced further down 
when I get to that.


-- *Registering & Configuring the 2510* --

Following page 95 and forward in the Installation Guide.

[Note: I've slightly obscured the IP address in the following]

# cd /opt/SUNWstkcam/bin

# ./sscs add -i 172.xx.xx.xx registeredarray

   Name      Type Network Address Serial Number             
   --------- ---- --------------- --------------------------
   unlabeled 2510 172.xx.xx.xx    SUN.540-7198-01.0812BE3D2D

# ./sscs modify -N edsel array unlabeled

[After setting up a new J4200, this seemed an appropriate name for the 2510]

# ./sscs list firmware

Analyzing array edsel,(172.xx.xx.xx),200400a0b8490699

Controller: Some FRUs not at baseline.
Name                 Model     Current     Baseline   
Tray.85.Controller.A LCSM100_I 06.70.42.10 06.70.54.10

Disk: Some FRUs not at baseline.
Name             Model            Current Baseline
Tray.85.Drive.01 HUS1530SBSUN300G SA02    SA04    
Tray.85.Drive.02 HUS1530SBSUN300G SA02    SA04    
Tray.85.Drive.03 HUS1530SBSUN300G SA02    SA04    
Tray.85.Drive.04 HUS1530SBSUN300G SA02    SA04    
Tray.85.Drive.05 HUS1530SBSUN300G SA02    SA04 

System/NVSRAM: All FRUs at baseline
Name  Model Current          Baseline        
edsel 2510  N1532-670843-901 N1532-670843-901
  
# ./sscs modify -a edsel firmware

WARNING:  This command will load new firmware if needed and may impact
array management and data availability.
Do you wish to continue? [y,n] : y
Started firmware upgrade job Install:task28

# ./sscs list -a edsel jobs

Job ID: Install:task28  Status: Done

# ./sscs list firmware

Analyzing array edsel,(172.xx.xx.xx),200400a0b8490699

Controller: All FRUs at baseline
Name                 Model     Current     Baseline   
Tray.85.Controller.A LCSM100_I 06.70.54.10 06.70.54.10

Disk: Some FRUs not at baseline.
Name             Model            Current Baseline
Tray.85.Drive.01 HUS1530SBSUN300G SA02    SA04    
Tray.85.Drive.02 HUS1530SBSUN300G SA02    SA04    
Tray.85.Drive.03 HUS1530SBSUN300G SA02    SA04    
Tray.85.Drive.04 HUS1530SBSUN300G SA02    SA04    
Tray.85.Drive.05 HUS1530SBSUN300G SA02    SA04    

System/NVSRAM: All FRUs at baseline
Name  Model Current          Baseline        
edsel 2510  N1532-670843-901 N1532-670843-901

[So, it updated the firmware on the controller, but not on the drives. 
It took a bit of fiddling around to find the correct syntax and details 
to do that.]

# ./sscs modify -a edsel -t disk firmware

WARNING:  This command will load new firmware if needed and may impact
array management and data availability.
Do you wish to continue? [y,n] : y
Started firmware upgrade job Install:task32

# ./sscs list -a edsel jobs

Job ID: Install:task32  Status: Done
Job ID: Install:task28  Status: Done

# ./sscs list firmware

Analyzing array edsel,(172.xx.xx.xx),200400a0b8490699

Controller: All FRUs at baseline
Name                 Model     Current     Baseline   
Tray.85.Controller.A LCSM100_I 06.70.54.10 06.70.54.10

Disk: All FRUs at baseline
Name             Model            Current Baseline
Tray.85.Drive.01 HUS1530SBSUN300G SA04    SA04    
Tray.85.Drive.02 HUS1530SBSUN300G SA04    SA04    
Tray.85.Drive.03 HUS1530SBSUN300G SA04    SA04    
Tray.85.Drive.04 HUS1530SBSUN300G SA04    SA04    
Tray.85.Drive.05 HUS1530SBSUN300G SA04    SA04    

System/NVSRAM: All FRUs at baseline
Name  Model Current          Baseline        
edsel 2510  N1532-670843-901 N1532-670843-901

[All set]


-- *Configure data ports on server* --

OK, so now I need to set up data connections between the T5220 and the 
2510. They are connected directly with a plain ethernet cable (not a 
crossover). I get the IP addresses for the 2510 data ports from sscs, 
and then configure the T5220 with ifconfig.

# ./sscs list -a edsel iscsi-port

[output truncated]

  Controller:           A

  iSCSI Port:           A/1
  Listening Port:       3260
   IP Address:          192.168.130.101
   Gateway:             0.0.0.0
   Netmask:             255.255.255.0

  iSCSI Port:           A/2
  Listening Port:       3260
   IP Address:          192.168.131.101
   Gateway:             0.0.0.0
   Netmask:             255.255.255.0

# ifconfig -a

[just to check]

# ifconfig e1000g2 plumb

# ifconfig e1000g2 192.168.130.88 netmask 255.255.255.0 up

# ping 192.168.130.101

192.168.130.101 is alive!

# cat > /etc/hostname.e1000g2
192.168.130.88
# cat >> /etc/hosts
192.168.130.88   imladris.scsi.mor.nsm
192.168.130.101   edsel.scsi.mor.nsm
# cat >> /etc/netmasks
192.168.130.0   255.255.255.0

In the above, the ifconfig got it running, but the additions to the 
/etc files make sure it comes back up if/when the system is ever 
rebooted. The fully qualified names are made up to match, in essence, 
the way we do our private networks. The traffic won't go out on any 
network, even a private one, since the machines are directly connected, 
but anyway: Morrill (mor) is our building, and NSM is our College 
within the University.


-- *Set up iSCSI initiators* --

This basically follows the steps in the Installation Guide, in 
particular the Solaris example on p. 101 in my hard copy. Be sure to 
check software versions and patches as instructed. I didn't have to 
mess with any of that, because I was on Solaris 10 u7 (5/09) with the 
latest recommended and security patches. Anyway, I need to find the IQN 
(iSCSI Qualified Name) for the initiator on the T5220 and for the data 
ports on the 2510, and set up connections between them.

# iscsiadm list initiator-node

Initiator node name: iqn.1986-03.com.sun:01:00212813a10c.49188968

[rest of output cut]

# iscsiadm modify discovery --static enable

# iscsiadm add discovery-address 192.168.130.101:3260

# iscsiadm list discovery-address

Discovery Address: 192.168.130.101:3260
        Target name: 
iqn.1986-03.com.sun:2510.600a0b80004906990000000048c13342
                Target address: 192.168.130.101:3260, 1
        Target name: 
iqn.1986-03.com.sun:2510.600a0b80004906990000000048c13342
                Target address: 192.168.131.101:3260, 1

# iscsiadm add static-config iqn.1986-03.com.sun:2510.600a0b80004906990000000048c13342,192.168.130.101

# iscsiadm list static-config

Static Configuration Target: 
iqn.1986-03.com.sun:2510.600a0b80004906990000000048c13342,192.168.130.101:3260

# format
[intention is to check what is there now, before setting up drives on 
the 2510]

Searching for disks...done

c6t8d0: configured with capacity of 16.00MB


AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <LSILOGIC-LogicalVolume-3000 cyl 65533 alt 2 hd 16 sec 
273>  raidboot
          /pci@0/pci@0/pci@2/scsi@0/sd@0,0
       1. c1t3d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@0/pci@0/pci@2/scsi@0/sd@3,0
       2. c6t8d0 <SUN-UniversalXport-0670 cyl 8 alt 2 hd 64 sec 64>
          
/iscsi/disk@0000iqn.1986-03.com.sun%3A2510.600a0b80004906990000000048c13342FFFF,31
Specify disk (enter its number): 2
selecting c6t8d0
[disk formatted]
Disk not labeled.  Label it now? no

partition> p
Current partition table (default):
Total disk cylinders available: 8 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders     Size            Blocks
  0       root    wm       0            0         (0/0/0)     0
  1       swap    wu       0            0         (0/0/0)     0
  2     backup    wu       0 - 7       16.00MB    (8/0/0) 32768
  3 unassigned    wm       0            0         (0/0/0)     0
  4 unassigned    wm       0            0         (0/0/0)     0
  5 unassigned    wm       0            0         (0/0/0)     0
  6        usr    wm       0 - 7       16.00MB    (8/0/0) 32768
  7 unassigned    wm       0            0         (0/0/0)     0

partition> q

[hmm. ??? What is that 16MB Thingy ???]


-- *Set up hosts & disks* --

OK. Getting there. Now I have to configure the hosts and disks on the 
2510 so that the disks will be linked to the host where I want to mount 
and format them (the T5220). This is where the Installation Guide drops 
the ball, just saying, "Follow CAM documentation to: ...." The CAM 
documentation, in turn, doesn't give a conceptual overview, assumes you 
are using the GUI (which automates a bunch of stuff), and doesn't say 
much about iSCSI. This is where I found "Sun StorageTek 2540 / ZFS 
Performance Summary" by Bob Friesenhahn 
(http://www.simplesystems.org/users/bfriesen/zfs-discuss/2540-zfs-performance.pdf), 
which describes setting up the 2540 to present each drive as a separate 
unit. That gave me enough conceptual overview and detail to then mine 
my PDF print of the sscs man page and work out the commands to do it.

The basic overview: create a separate storage pool for each disk, 
create one volume filling each pool, then map those volumes to a host 
and set them up in ZFS. He was dealing with Fibre Channel and 
multipathing, but that doesn't matter; the CAM concepts are the same.

First, just some housekeeping, mining the PDF of the man page (and some 
trial and error) . . .

# ./sscs list -a edsel date

  Date: Tue Sep 15 12:29:58 2009

# ./sscs modify -a edsel date 1755

# ./sscs list -a edsel date

  Date: Tue Sep 15 17:55:16 2009

# ./sscs list -a edsel os-type

SOLARIS_MPXIO - Solaris (with Traffic Manager)

[rest of output clipped]

# ./sscs create -a edsel host marlin

# ./sscs create -a edsel host snapper

# ./sscs create -a edsel -h marlin -o solaris -i iqn.1986-03.com.sun:01:00212813a10c.49188968 initiator marlin

[creating a couple of hosts and an initiator for the one host, naming 
the host and the initiator with the same name]

# ./sscs list -a edsel disk

Tray: 85    Disk: t85d05
  Capacity:       279.396 GB
  Type:           SAS
  Speed (RPM):    15000
  Status:         Optimal
  State:          Enabled
  Role:           Unassigned
  Virtual Disk:   -
  Firmware:       SA04
  Serial number:  000809C69Y7C        J8W69Y7C
  WWN:            50:00:CC:A0:05:43:DC:38

[clipping a bunch of output]

Tray: 85    Disk: t85d01
  Capacity:       279.396 GB
  Type:           SAS
  Speed (RPM):    15000
  Status:         Optimal
  State:          Enabled
  Role:           Unassigned
  Virtual Disk:   -
  Firmware:       SA04
  Serial number:  000809C090LC        J8W090LC
  WWN:            50:00:CC:A0:05:38:E5:70


# ./sscs list -a edsel profile

Profile: Default

# ./sscs list -a edsel profile Default

Profile: Default
  Profile In Use:            yes
  Factory Profile:           yes
  Description:               Pre-configured Default profile
  RAID Level:                5
  Segment Size:              512 KB
  Read Ahead:                on
  Optimal Number of Drives:  variable
  Disk Type:                 Fibre Channel
  Dedicated Hot Spare:       no
  Pool:                      Default
  Pool:                      disk1

[hmm. That just won't do. I wonder why they put that on the 2510 as the 
default profile? So, create a new profile.]

# ./sscs create -a edsel -r 0 -s 128K -h on -n 1 -k SAS -H no -d "One SAS disk" profile RAW_SAS

[That took a fair bit of trial and error, but it worked. Now create a 
bunch of pools using that profile.]

# ./sscs create -a edsel -p RAW_SAS pool Disk-01

# ./sscs create -a edsel -p RAW_SAS pool Disk-02

# ./sscs create -a edsel -p RAW_SAS pool Disk-03

[etc.]

# ./sscs list -a edsel pool

Pool: Disk-01  Profile: RAW_SAS  Configured Capacity: 0.000 MB
Pool: Disk-02  Profile: RAW_SAS  Configured Capacity: 0.000 MB
Pool: Disk-03  Profile: RAW_SAS  Configured Capacity: 0.000 MB

[clipped]

# ./sscs list -a edsel pool Disk-01

Pool: Disk-01
  Description:          null
  Profile:              RAW_SAS
  Total Capacity:       0.000 MB
  Configured Capacity:  0.000 MB
  Available Capacity:   278.896 GB

[cool. Now create the corresponding volumes.]

# ./sscs create -a edsel -p Disk-01 -s 278.896gb -d t85d01 volume Disk-01

# ./sscs list -a edsel pool Disk-01

Pool: Disk-01
  Description:          null
  Profile:              RAW_SAS
  Total Capacity:       278.895 GB
  Configured Capacity:  278.895 GB
  Available Capacity:   278.896 GB
  Volume:               Disk-01

# ./sscs list -a edsel volume Disk-01

Volume: Disk-01
  Type:                            Standard
  WWN:                             
60:0A:0B:80:00:49:06:99:00:00:02:26:4A:B0:EF:52
  Pool:                            Disk-01
  Profile:                         RAW_SAS
  Virtual Disk:                    1
  Size:                            278.895 GB
  State:                           Free
  Status:                          Online
  Action:                          Initializing...
  Condition:                       Optimal
  Read Only:                       No
  Controller:                      A
  Preferred Controller:            A
  Modification Priority:           High
  RAID Level:                      0
  Segment Size:                    128 KB
  Read Cache:                      Enabled
  Write Cache:                     Enabled
  Write Cache with Replication:    Disabled
  Write Cache without Batteries:   Disabled
  Write Cache Active:              True
  Flush Cache After:               10 s
  Disk Scrubbing:                  Enabled
  Disk Scrubbing with Redundancy:  Disabled

# ./sscs create -a edsel -p Disk-02 -s 278.896gb -d t85d02 volume Disk-02

# ./sscs create -a edsel -p Disk-03 -s 278.896gb -d t85d03 volume Disk-03

[and so on]

[Finally, map the volumes to the host.]

# ./sscs map -a edsel -P readwrite -h marlin volume Disk-01

# ./sscs map -a edsel -P readwrite -h marlin volume Disk-02

[etc.]
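
Since the pool/volume/map step repeats identically for every disk, it 
can be scripted. Here's a dry-run sketch (the array, profile, host, 
disk, and size names are taken from the steps above; the echo is a 
guard, so it only prints the commands -- drop it to actually run them):

```shell
#!/bin/sh
# Dry-run sketch: print the sscs commands that create one pool, one
# full-size volume, and one readwrite mapping per disk.
# Remove the "echo" in front of each command to execute for real.
SSCS=/opt/SUNWstkcam/bin/sscs
for n in 01 02 03 04 05; do
    echo "$SSCS create -a edsel -p RAW_SAS pool Disk-$n"
    echo "$SSCS create -a edsel -p Disk-$n -s 278.896gb -d t85d$n volume Disk-$n"
    echo "$SSCS map -a edsel -P readwrite -h marlin volume Disk-$n"
done
```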


-- *Create iSCSI device link & format drives* --

That's basically it for setting up the 2510. Now, from the T5220's 
perspective, I have to set up the iSCSI device links. This is straight 
from the Installation Guide.

# devfsadm -i iscsi

# format

Searching for disks...done

c6t8d0: configured with capacity of 16.00MB
c7t600A0B8000490699000002264AB0EF52d0: configured with capacity of 275.99GB
c7t600A0B8000490699000002284AB0EFB6d0: configured with capacity of 275.99GB
[clipped]

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <LSILOGIC-LogicalVolume-3000 cyl 65533 alt 2 hd 16 sec 
273>  raidboot
          /pci@0/pci@0/pci@2/scsi@0/sd@0,0
       1. c1t3d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@0/pci@0/pci@2/scsi@0/sd@3,0
       2. c6t8d0 <SUN-UniversalXport-0670 cyl 8 alt 2 hd 64 sec 64>
          
/iscsi/disk@0000iqn.1986-03.com.sun%3A2510.600a0b80004906990000000048c13342FFFF,31
       3. c7t600A0B8000490699000002264AB0EF52d0 <SUN-LCSM100_I-0670 cyl 
65533 alt 2 hd 128 sec 69>
          /scsi_vhci/ssd@g600a0b8000490699000002264ab0ef52
       4. c7t600A0B8000490699000002284AB0EFB6d0 <SUN-LCSM100_I-0670 cyl 
65533 alt 2 hd 128 sec 69>
          /scsi_vhci/ssd@g600a0b8000490699000002284ab0efb6
[clipped]

Specify disk (enter its number): 3
selecting c7t600A0B8000490699000002264AB0EF52d0
[disk formatted]
Disk not labeled.  Label it now? no

 > quit


-- *ZFS* --

So, I'm basically home free now, and I can configure ZFS using the 
information from the format command. Something like, . . .

# zpool create mypool raidz c7t600A0B8000490699000002264AB0EF52d0 [and so on]
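
Since the 2510's LUNs all show up under WWN-based c7t600A0B... names, 
the full command could be assembled with something like the following 
dry-run sketch (the device-name pattern and the pool name are my 
assumptions from the format output above; the echo is a guard -- remove 
it to actually create the pool):

```shell
#!/bin/sh
# Sketch: collect the 2510's iSCSI LUNs by their WWN-based device
# names under /dev/dsk and build one raidz vdev from all of them.
# Remove the "echo" to actually create the pool.
DISKS=$(ls /dev/dsk 2>/dev/null | sed -n 's/^\(c7t600A0B[0-9A-F]*d0\)s2$/\1/p' | sort -u)
echo zpool create mypool raidz $DISKS
```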


I also tried setting up email notification, but I don't know if it works.

# ./sscs add -e hoogendyk@marlin.mor.net.nsm -m minor notification local_email

[which doesn't match the man page, but rather is based on the error 
messages I got when I entered it according to the man page]


-- *In Closing* --

I hope this extended howto turns out to be helpful to someone else. I 
had a fair bit of trouble finding the information that made this 
possible and working out the details. It should have been easier than 
that. The Sun sales people said, "sure, you can do that." But then, 
since we aren't doing things according to their book, it's hard to 
actually get support and guidance. It's sort of like when you want to 
do a Solaris install minimized for security purposes. They actually 
have a security white paper (somewhat dated now) recommending that; 
however, it isn't SOP, so you can't get support when you try it. Even 
in a room full of Solaris people and several Sun engineering types (at 
an OpenSolaris Users Group meeting), they just shrug and say, "do a 
standard install."

Anyway, if your purpose is to use ZFS and you really just want a JBOD, 
I would recommend the J4200 and related products over the StorageTek 
25xx. I think my multipathed dual-SAS J4200 is much faster than my 
non-multipathed single-GigE 2510, and it was much easier to set up 
(though I haven't actually done I/O tests yet). We had originally 
thought we were going to share drives off the 2510 between a couple of 
servers, and maybe even mount them read-only on our backup server, but 
that's not really possible. Even though it can sit on a network, the 
drives are basically owned and formatted by a single server. It's 
probably easier and more featureful to just have a SAS JBOD, maybe with 
expansion trays, hanging off a single T5xxx server using ZFS, and then 
share that out if other servers want access. Sort of build your own 
7000 series.

Comments appreciated.

If I get comments, and it seems warranted, I'll post a summary-2.


-- 
---------------

Chris Hoogendyk

-
   O__  ---- Systems Administrator
  c/ /'_ --- Biology & Geology Departments
 (*) \(*) -- 140 Morrill Science Center
~~~~~~~~~~ - University of Massachusetts, Amherst 

<hoogendyk@bio.umass.edu>

--------------- 

Erdvs 4
_______________________________________________
sunmanagers mailing list
sunmanagers@sunmanagers.org
http://www.sunmanagers.org/mailman/listinfo/sunmanagers