I originally posted...
> How do I run installboot from a Solaris 1.1.1 cdrom?
>
> Background:
>
> Have 2 servers running SunOS 4.1.3_U1 (aka. Solaris 1.1.1) which use
> periodic rdist to keep disk contents mostly the same.
> One lost the system disk. It was replaced.
>
> Booted Solaris 2.5 cdrom on the system with newly replaced disk.
> With that boot was able to open a shell to work with and
> format new disk
> label new disk
> foreach partition
> newfs the partitions of new disk
> mount partition on /a
> rsh other-server dump 0f - <partition> | (cd /a; ufsrestore rf -)
>
> Neither the installboot command on the Solaris 2.5 cdrom nor the one
> from the restored disk would place a boot block on the new system disk,
> because they refused to accept full paths in their arguments. I had to
> use full paths because the new system disk was still mounted at /a.
>
> When I try using the Solaris 1.1.1 cdrom, I "boot cdrom" and exit at the
> option to exit or install the miniroot. At that point I do not find any
> installboot command on the cdrom. There must be a way to run
> installboot with the pre-Solaris 2.x OSes. What am I missing?
>
Thanks to
ebumfr@ebu.ericsson.se
chrisc@Chris.Org
steveb@pcs1.co.uk
hkatz@panix.com
poffen@San-Jose.ate.slb.com
stevee@sbei.com
and especially brb@ike.safb.af.mil (most clear step-by-step instructions)
who all pointed out that I needed to actually install and use
the miniroot.
That was the answer to my problem. Since I also had other learning
experiences in this process, the following is from my notes (names
and addresses changed). I hope this might help someone else.
Thanks also to caruso@cinm.com and mike.holly@west.sun.com who also
gave me some phone support while I was in the best of the learning
experiences.
After replacing the root (system) disk on a server, it is possible to
use the Solaris 1.1.1 cdrom to boot the system with the new disk and
to pipe the files over the net from another system.
Boot the cdrom (Solaris 1.1.1 in this case)
boot cdrom
install miniroot (installs to swap partition, /dev/sd0b)
partition the new drive
label the new drive
boot the miniroot
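A sketch of the console session for the steps above, from memory rather
than a verbatim transcript. The exact PROM syntax varies by machine and
PROM revision ("b sd(0,0,1)" is the old-PROM form for booting the
miniroot out of the swap partition; newer PROMs use "boot" at the "ok"
prompt), and the format(8S) menu entries shown are assumptions:

```
> boot cdrom                  (or "ok boot cdrom" on newer PROMs)
  ... choose to install the miniroot; it is copied to /dev/sd0b ...
# format                      (from the shell)
format> partition             (set up the slices, e.g. a, b, e, g, h)
partition> label              (write the new label to the drive)
partition> quit
format> quit
> b sd(0,0,1)                 (boot the miniroot from swap)
```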
Install a Unix file system on each partition of the new disk that
will actually be mounted and visible to the users. I caused myself
problems by installing a filesystem on the whole-disk overlay
partition (/dev/sd0c) I was using. Do not do that. Do not install
a file system on the swap partition or any unused partition either.
In the end I was checking very carefully and ran fsck on each partition.
# newfs /dev/rsd0a
# fsck /dev/rsd0a
# newfs /dev/rsd0e
# fsck /dev/rsd0e
... (I used a, e, g, h)
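The per-partition newfs/fsck step above can be sketched as a Bourne-shell
loop. This is a dry run: echo prints each command instead of executing it,
since newfs on the wrong device is destructive. The partition letters
a, e, g, h are the ones from this note; substitute your own layout.

```shell
# Dry-run sketch of the per-partition newfs/fsck loop.
# echo prints each command instead of executing it; remove the
# echos (carefully!) to run for real.
for p in a e g h
do
    echo "newfs /dev/rsd0$p"
    echo "fsck /dev/rsd0$p"
done
```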
In order to use the network to get files from another system, it is
necessary to create the files needed for networking and bring the
interface up. In this case:
# cat >> /etc/hosts
130.xxx.x.75 abcd30
130.xxx.x.80 abcd35
^d
# cat > /etc/defaultrouter
130.xxx.x.254
^d
# ifconfig le0 130.xxx.x.75 netmask 255.255.255.0 \
broadcast 130.xxx.x.255 (I actually used one line)
Then the files were transferred over using rsh, dump and restore.
I ran into some problems with "rsh hostname ..." and in the end
found "rsh -n hostname ..." worked more reliably. Also, on the
restore I had tried the "r" option and ended up using "x"
instead. The /a mount point is already available; a /mnt is
apparently more traditional and can be mkdir'd first.
Also note that the raw partition is used on the remote side.
foreach partition
# mount /dev/sd0a /a
# rsh -n abcd35 dump 0f - /dev/rsd0a | (cd /a; restore xf -)
# umount /a
# fsck /dev/rsd0a
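The steps above, repeated for each partition, can be sketched as a
Bourne-shell loop. This is a dry run (echo prints the commands rather
than running them); abcd35 is the other server from the /etc/hosts
example, and the partition letters a, e, g, h are the ones from this
note.

```shell
# Dry-run sketch of the per-partition transfer: mount, pull the
# dump over the net, unmount, then fsck. echo prints the commands
# instead of running them; drop the echos to run for real.
for p in a e g h
do
    echo "mount /dev/sd0$p /a"
    echo "rsh -n abcd35 dump 0f - /dev/rsd0$p | (cd /a; restore xf -)"
    echo "umount /a"
    echo "fsck /dev/rsd0$p"
done
```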
Note that I ended up fsck'ing each partition as it was done, because
I was having some strange errors before I started using "-n" on rsh,
and I just wanted to make certain all was ok. On one partition, the
largest, I got some errors which as far as I can tell were simply
bogus: "disk full" errors, "missing ." errors and "missing .."
errors. I went on all sorts of side trips until Mike Holly convinced
me to just try ignoring them; I had been going back and starting all
over again. Mike also taught me that I can simply re-"newfs" a
partition if I think/know it to be corrupted.
After the files were transferred, the disk had to be made bootable.
# mount /dev/sd0a /a
# cd /usr/kvm/mdec
# installboot -vlt /a/boot bootsd /dev/rsd0a
Note that installboot runs from its own directory. The /a/boot
is the location of the new boot block. The bootsd argument
designates the disk-type file from the /usr/kvm/mdec directory;
this parameter will not take a full path argument, which forces
you to be in /usr/kvm/mdec. The last argument also will not take
a full path, and it must be the *raw* disk device for the / partition.
Note also that it is possible to use any (recent) SunOS cdrom to boot
the system and do the work I needed to get done. I had originally
started with a Solaris 2.5 cdrom because I knew how to exit out of
the install to a shell. During that iteration of learning I was
really close to the solution, except that the failure to use "-n" on
the rsh with the pipe from dump to ufsrestore was probably
responsible for just enough corruption to keep the final product
from booting. It seems to me that ufsrestore actually does the file
system type conversion on the fly, but then again I cannot prove it
from my experience.
-Robert Lopez
This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:10:53 CDT