Many thanks to all those who replied with suggestions.

There was a suggestion to do this:

- 'cfgadm -la -o show_SCSI_LUN'
  You will see the removed LUNs listed as 'unusable', e.g.:
    c2::50060e80102375e1,34  disk  connected  configured  unusable

- 'cfgadm -o unusable_SCSI_LUN -c unconfigure c2::50060e80102375e1'
  This removes the unusable LUNs from the cfgadm output.

- 'devfsadm -C -c disk'
  This cleans up /dev/dsk.

Then run powermt check, cldev clear, and cldev refresh.

-------

However, cfgadm doesn't appear to be supported for my hardware:

  cfgadm: Configuration administration not supported

Another suggestion was to just do:

  cldev clear
  cldev check
  cldev refresh
  cldev populate

(The last command is what removed the old LUNs from the cluster.)

------

However, in the state of my machines, 'cldev clear' didn't report any
errors; it just said it was updating devices on the other nodes. One or
more of cldev check, refresh, and/or populate would then complain about
'Inquiry on device xxxxxx failed'.

I do have a workaround, although it's not as slick as I'm sure it could
be. If I reboot the cluster node, then 'cldev clear' seems to be all I
need: no powermt commands and none of the other cldev commands. The
difference is that after a reboot, 'cldev clear' says "Unable to remove
driver instance '32' - No such device or address" for each device. It
may complain, but it does the job: it updates the device list on all
nodes, and that node is then clean. I have to do this on every cluster
node, but at least I appear to have a way to clean house. (A
consolidated sketch of this workaround appears after the quoted message
below.)

Again, thank you to all who offered suggestions.

Tom Lieuallen
Oregon State University

On 2/13/12 9:13 AM, Tom Lieuallen wrote:
> We have an old Sun Cluster 3.2 installation that we're trying to
> migrate away from. It is composed of four nodes (T5120, T5220, and
> T5140) running Solaris 10. They are connected to an EMC CX3-40, and we
> use PowerPath (5.2 SP1) for multipathing.
>
> My question is how to properly remove LUNs from the cluster. As we
> migrate disk space off the Sun Cluster, we remove the LUNs from the
> host group on the EMC, so the cluster no longer has access to them. I
> have then tried various combinations of powermt and cldev commands to
> remove the old paths. It seems that powermt does try to remove dead
> paths, but they keep coming back. My suspicion is that the cluster is
> counteracting PowerPath's attempts to remove them. And if I use cldev
> to clean up paths, they might still be seen on some node via PowerPath
> (although as dead paths).
>
> I suppose I could just ride this out with dead paths and failed
> (non-existent) drives, but sooner or later I'm afraid this will
> degrade the functionality of anything remaining on the Sun Cluster.
>
> Any advice on the proper way of doing this?
>
> The specific commands I have used are:
>
> powermt check (asks you to clear any paths that are invalid)
> It does clear them on that node (in PowerPath), but they return.
>
> cldev clear
> cldev refresh
>
> I'll summarize.
>
> Thank you,
>
> Tom Lieuallen
> Oregon State University
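The reboot-then-clear workaround above, collected into a minimal
per-node sketch. This assumes Sun Cluster 3.2 on Solaris 10 with cldev
in /usr/cluster/bin; the ordering and the optional follow-up commands
are one plausible arrangement, not a verified procedure, and it should
be run on one node at a time so the cluster keeps quorum.

    #!/bin/sh
    # Minimal sketch of the per-node cleanup, assuming Sun Cluster 3.2
    # on Solaris 10; cldev ships in /usr/cluster/bin.
    PATH=/usr/cluster/bin:/usr/sbin:/usr/bin
    export PATH

    # 1. Reboot this node first (e.g. 'init 6') and let it rejoin the
    #    cluster, so stale device instances are gone from the running
    #    kernel.

    # 2. Clear the stale DID entries. Per the summary above, expect one
    #    "Unable to remove driver instance 'NN' - No such device or
    #    address" message per removed LUN; the device list is still
    #    updated on all nodes.
    cldev clear

    # 3. Optional (an assumption, not part of the reported workaround):
    #    rebuild /dev/dsk entries and re-scan the device list.
    # devfsadm -C -c disk
    # cldev refresh

    # Repeat on each remaining cluster node.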