Well, I was completely wrong with my first summary!
Thanks go to
for confirming that I am not the only person who is considering booting
from an A1000.
Special thanks go to Cristian Villalon from Sun here in Canberra for helping
me find the solution to my problem.
Originally I had reported the error
"Fatal SCSI error at script address 258 Unexpected disconnect"
when attempting to boot Solaris from the A1000. I thought it was the :a
added to the boot-device by the Solaris installation program, but it turned
out to be a coincidence that the error went away when I managed to boot
without the :a.
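For anyone wanting to check whether the installer appended a :a slice suffix to boot-device, the OBP variable can be read (and written) from Solaris with eeprom. The sketch below is only an illustration; the device path is a made-up example, not my actual hardware path:

```shell
#!/bin/sh
# On the real system you would read the current setting with:
#   eeprom boot-device
# Hypothetical example value, with the :a slice suffix the installer appends:
bootdev='/pci@1f,0/pci@1/scsi@4/sd@5,0:a'

# Shell parameter expansion strips a trailing :a if present:
echo "${bootdev%:a}"

# It could then be written back (on the real system) with:
#   eeprom boot-device="${bootdev%:a}"
```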
It turns out that this error is common; the description I have from Sun is:
"What is happening in this case is that the A1000 being booted off of has
the only disk that is going to respond to the host this early in the boot
process. It doesn't matter if you had any LUNs created or not on the HW
RAID Module. You will see these messages when you boot off of this PCI host
and it will not cause any problems with your configuration"
So we can safely ignore the error.
My second problem was that the root file system appeared to be corrupted on
the 4th, 5th or 6th reboot after installing the OS. The symptom was the
error "/dev/dsk/c1t5d0s0 is not this fstype" when attempting to remount the
root file system read-write during the boot process. At this stage the
system was left in single-user mode with the root file system mounted
read-only. A truss of the mount command ("mount -m -o remount") showed a
return code of EINVAL. Examining the superblock of the file system with the
file system debugger showed it had the same magic number as all of the
other file systems, and booting from another disk allowed the file system
to be mounted (and fsck-ed), but attempting to boot from the A1000 still
produced the same result.
The solution that Cristian provided was a Sun document titled "Preparing the
RAID Module as a Boot Device". The extract below summarises the steps
required. If you are going to attempt this, make sure you read the whole
document first; there is some other information you will probably need. If
someone would like to host the original document on a web or ftp site,
please contact me.
1. Backup all data on your HW RAID Module before beginning procedure.
2. Install LUN 0 on your HW RAID Device. If this is a new installation, you
might want to make sure that your default LUN 0 from the factory is the size
that you want before proceeding.
3. Boot cdrom or install Solaris through JumpStart onto LUN 0 on your HW
RAID device. Let the Solaris installation program set your eeprom to boot
off your RAID Module. After OS installation, let it reboot off your RAID
Module. The OS install includes any and all patches for RM6 6.1.1 Update 1.
4. Install RM6 6.1.1 Update 1.
5. Edit the /usr/lib/osa/rmparams file and make the variable
Rdac_SupportDisabled TRUE.
6. Boot -r.
7. Edit the rmparams file again and make Rdac_SupportDisabled FALSE.
8. Run the command /etc/init.d/rdacctrl config.
9. Edit the /etc/system file and add the following entry:
The rdnexus and rdriver numbers are based on an entry in the
/kernel/drv/rdriver.conf file. For example:
name="rdriver" module=1 lun=0 target=4 parent="/pseudo/rdnexus@0"
Look at the "target" number for the rdriver number. For systems with more
than one RAID device, the correct module should be the first instance of
lun=0, target=4 from the bottom of the file. In that line, you should see
the correct rdnexus@<n> number.
10. Boot -r.
You are all set to boot off your HW RAID Device.
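The edit/boot/edit sequence in steps 5 through 9 can be sketched in shell. This is only an illustration under two assumptions I have not verified against every RM6 release: that rmparams stores the setting as a single Rdac_SupportDisabled=TRUE|FALSE line, and that the rdriver.conf line quoted in step 9 is representative. Check your own files before running anything like this; the Solaris-specific commands are left commented out.

```shell
#!/bin/sh
# Steps 5-8 as commands (commented out; run them on the real system only).
# Assumed rmparams format: Rdac_SupportDisabled=FALSE on one line.
#   sed 's/^Rdac_SupportDisabled=.*/Rdac_SupportDisabled=TRUE/'  /usr/lib/osa/rmparams   # step 5
#   reboot -- -r                                                                         # step 6
#   sed 's/^Rdac_SupportDisabled=.*/Rdac_SupportDisabled=FALSE/' /usr/lib/osa/rmparams   # step 7
#   /etc/init.d/rdacctrl config                                                          # step 8

# Step 9 helper (hypothetical name): pick the first lun=0, target=4 entry
# counting from the bottom of rdriver.conf, i.e. the last matching line in
# the file. The printed line carries the rdnexus@<n> number to use in the
# /etc/system entry.
find_rdnexus_line() {
    grep 'lun=0' "$1" | grep 'target=4' | tail -1
}
```

With the example line from step 9 present in /kernel/drv/rdriver.conf, find_rdnexus_line would print that name="rdriver" ... parent="/pseudo/rdnexus@0" line, from which you read off the rdnexus number.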
I did all of this and it now works just fine.
This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:13:19 CDT