Well here it is. The upshot is that you *can* rebuild a system onto another
machine with different hardware! :-)
Thanks go to:
goos <goos@xs4all.nl>
robsonk@ebrd.com
Karl_Sprules@ACML.COM
Seth Rothenberg <SROTHENB@montefiore.org>
James Ranks <ranks@avnasis.jccbi.gov>
Pyne, Jeffrey <Jeffrey.Pyne@schwab.com>
Rae, Scott <Scott.Rae@ScottishPower.plc.uk>
for the replies. Special thanks must go to Ken Robson (robsonk@ebrd.com),
since despite my consistently asking how he did this, he kept giving
me more suggestions on how to resolve it. Many thanks.
In the situation in question we wanted to move an E250/pci/scsi system to
an Ultra10/pci/ide, and also an Ultra 1/sbus/(internal disk; SEA?) to the
Ultra 10. As you would expect, the devices are all different, and hence you
can't just restore one on top of the other and reboot.
The basic sequence of commands we settled on was:
1) reboot new system from cdrom in single user mode ('boot cdrom -s')
2) mount and restore the original file systems. This is done through a
sequence of format, newfs, mount and ufsrestore commands. I was happy with
this bit, so I'm not going to go through it in detail. If you need to,
check the man pages.
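(As a rough sketch of this step for a single slice -- the device and tape
names here are illustrative assumptions, not the exact devices from our
systems; repeat for each filesystem being restored:

```shell
# Illustrative only: c0t0d0s0 and /dev/rmt/0 are assumed names.
newfs /dev/rdsk/c0t0d0s0        # make a fresh ufs filesystem on the slice
mount /dev/dsk/c0t0d0s0 /a      # mount it where the restore will land
cd /a
ufsrestore rf /dev/rmt/0        # restore the level-0 dump from tape
rm restoresymtable              # remove ufsrestore's checkpoint file
```
)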
3) The new root filesystem is mounted on /a. So,
cd /a/etc
cp -p path_to_inst path_to_inst.old
cp -p vfstab vfstab.old
cp -p name_to_major name_to_major.old
(I did this just in case I needed the originals on disk.)
4) cd /a/dev
mv dsk dsk.old
mv rdsk rdsk.old
(This seemed the easiest way to ensure I had the right disk entries, but
left any other devices alone.)
5) cd /dev
tar cf - ./dsk | (cd /a/dev; tar xvfp -)
tar cf - ./rdsk | (cd /a/dev; tar xvfp -)
(in effect, copy the /dev/dsk and /dev/rdsk entries to the new root)
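(The tar pipeline above is a general copy-a-subtree idiom, not specific to
/dev. Here is a self-contained demonstration of the same pattern against
throwaway directories -- the 'dsk' name and file names just mirror the
example above:

```shell
#!/bin/sh
# Demonstrate the tar-pipe copy from step 5, but against temporary
# directories rather than /dev.
src=`mktemp -d`
dst=`mktemp -d`
mkdir "$src/dsk"
touch "$src/dsk/c0t0d0s0" "$src/dsk/c0t0d0s7"   # stand-ins for device links

# Copy ./dsk from $src into $dst, preserving modes:
(cd "$src" && tar cf - ./dsk) | (cd "$dst" && tar xpf -)

ls "$dst/dsk"
```
)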
6) cd /a/devices
drvconfig -r . -p ../etc/path_to_inst
cd /a
disks -r /a
ports -r /a
devlinks -r /a
tapes -r /a
audlinks -r /a
(I didn't do the last 2 but Jeffrey Pyne suggested them.)
7) cp /etc/name_to_major /a/etc/name_to_major
(Okay, this was the bit that threw me. The disks up to this point look
fine since you can see them when booting from the cd. However, upon
rebooting the system falls through to the boot prom since the actual
device node numbers (in the file) don't tally with the devices on the
disk (in /devices).)
8) Modify /a/etc/vfstab as needed.
9) Create any required soft links in /a/platform and /a/usr/platform for the
type of system.
(If you don't do this then upon rebooting you are prompted
for the kernel/unix file location. I tried this but the system kept
falling through to the boot prom (possibly due to other requirements).)
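(For step 9 on an Ultra 10 the links would look something like the below.
The platform name SUNW,Ultra-5_10 is an assumption -- it is what 'uname -i'
reports on that box, so check it on your own hardware first:

```shell
# Assumed platform name for an Ultra 10; verify with 'uname -i'.
cd /a/platform
ln -s sun4u SUNW,Ultra-5_10     # kernel lives under /platform/sun4u
cd /a/usr/platform
ln -s sun4u SUNW,Ultra-5_10     # platform-specific libraries etc.
```
)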
10) Unmount the disks; sync; sync; halt
boot -rs (or 'boot -rsv' for more verbose output)
(I used the -s option since I wanted to see that all was okay. The
system should now, at least, boot up from the disk(s).)
11) Anything else. In our case this included:
1) cp -p /etc/hostname.le0 /etc/hostname.hme0
(we were moving from an le0 card to hme0)
ifconfig hme0 plumb
ifconfig hme0 up
2) Rebooting stated that 'bdconfig' needed running. We didn't do this.
3) The Ultra 1 had an sbus ATM card which was not present in the
Ultra10. So it whined a bit about the interface not being present.
That's it! Needless to say, all I have really done is get to the state where
the system reboots. Whether the networking software we use still works is
for the networking guys to sort out!! :-)
I should finally add that this has been a bit of a learning curve, and the
reason we haven't done this before is because our current disaster recovery
plans typically simply require moving an 'account' from one machine to
another. E.g. for the mailhub we move the 'exim' software; for the www it's
apache/squid. No hardware is involved, nor any restoring of entire disks.
But since the networking stuff is so integrated into the system we have to
move the whole lot.
Thanks again,
John.
On 20-Jan-00 at 11:33:05 John Horne wrote:
> We are trying to sort out some disaster recovery for 3 Sun systems which
> run network monitoring software. As such the software is somewhat
> 'integrated' with Solaris - i.e. it's not located in one directory or
> even one partition. As such we would like to be able to carry out a full
> backup of one system and then restore that entire system onto a 'spare'
> system in the event of system failure. We do not want to have to install
> Solaris and then the software - but to simply restore everything.
>
> The problem is that we have a PCI ultra 10 with an internal IDE disk as
> the spare system. The other three are an E250 with an internal SCSI disk;
> an Ultra 1 with internal disks (2 of them); and another Ultra 10 the same
> spec as 'spare'.
>
> Now it seems that this isn't going to work because the machines are of
> different hardware architectures - pci and sbus, and some have ide disks
> and others scsi. So restoring the entire system is no problem, but upon
> booting the /dev and /devices directories now contain links to the wrong
> devices (or rather the wrong type). In some cases the system won't boot -
> it can't find the kernel because it's looking (say) for a scsi disk which
> it doesn't have. In other cases, it boots but doesn't find the /usr
> partition.
>
> Anyone know of any way around all of this? Our thinking is that either we
> may have to get all 3 systems at the same spec, or we have to reinstall
> solaris and then the software (or at least solaris, and then try and pick
> out the bits of network software we need!).
>
--------------------------------------------------------------------------
John Horne, University of Plymouth, UK Tel: +44 (0)1752 233914
E-mail: jhorne@plymouth.ac.uk
Finger for PGP key: john@jhorne.csd.plymouth.ac.uk
This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:14:02 CDT