This is the original problem:
------------------------------8<------------------------------
I have made a very simple script to back up an SS4 to an HP tape
drive. The drive is the basic DDS-2 model without compression, so I
thought of a way to back up about 3 gigabytes onto a single 2 GB tape.
The script works, but it takes 6 hours to back up 3 gigs :(
What am I missing here? Maybe setting a particular block size? Or is
it simply that gzip is too slow for this kind of job?
A summary will follow, as usual. TIA.
#!/bin/sh
#
# This is the log file:
HISTORICO=/var/cs/historico
# This is the list of files to exclude from backup, mainly devices and swap:
NOCS=/var/cs/no_cs
TAR=/usr/sbin/tar
GZIP=/opt/gnu/bin/gzip
DD=/usr/bin/dd
echo > $HISTORICO
date >> $HISTORICO
${TAR} cvfX - ${NOCS} / | ${GZIP} -c | ${DD} of=/dev/rmt/0 2>> $HISTORICO
date >> $HISTORICO
/bin/mt offline
cat $HISTORICO|mail adminsis
------------------------------8<------------------------------
Credits go to (in chronological order):
Chris Marble <cmarble@orion.ac.hmc.edu>
Jim Harmon <jharmon@telecnnct.com>
Ed Poorbaugh <poorbaugh@norcross.mcs.slb.com>
bismark@alta.Jpl.Nasa.Gov (Bismark Espinoza)
Casper Dik <casper@holland.Sun.COM>
Rich Kulawiec <rsk@itw.com>
foster@bial1.ucsd.edu
Solutions:
The main problem was that the drive didn't receive data fast
enough to stream, so it wasted a lot of time rewinding.
The first thing I tried was Chris' suggestion of adding --fast
to gzip. This alone saved one hour.
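For reference, the only change was the flag on the gzip stage (same
variables and device as in the script above):

  # gzip --fast (same as -1): least compression, much less CPU per block
  ${TAR} cvfX - ${NOCS} / | ${GZIP} --fast -c | \
      ${DD} of=/dev/rmt/0 2>> $HISTORICO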
Another suggestion was to use dump instead of dd. I'm not sure
if this is one of the differences between SunOS 4 and Solaris, but in
Solaris 2.5.1, dd seems to be the way to do this.
I was also corrected about DDS drives... I stated incorrectly
that my DDS-2 drive didn't compress. What I actually have is a DDS-1
drive; all the other DDS drives do compress. To get them to compress on
the fly, you just have to use the device name that includes an "h"
(e.g. /dev/rmt/0h).
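For anyone with a drive that does compress in firmware, a minimal sketch
(same variables as my script, and assuming 0h is the right device name
for that drive) is to drop the gzip stage and write straight to the
compressing device:

  # Let the drive compress in firmware instead of gzip
  # (the "h" compressing device name is an assumption for this setup):
  ${TAR} cvfX - ${NOCS} / | ${DD} of=/dev/rmt/0h 2>> $HISTORICO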
Another thing that several respondents pointed out is that the
pipeline was probably taking up a lot of memory and other resources. This
is one good point for GNU tar, which packs, compresses and writes all in
one process.
Casper hit the nail on the head with the output block size:
dd was handing the tape pieces of 512 bytes, while a good block size has
proven to be 32k. This one made a BIG difference in backup time.
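Concretely, that is just a matter of adding obs= to dd. A sketch of the
corrected pipeline, with the same variables and device as my original
script:

  # Collect dd's output into 32k blocks instead of the 512-byte default:
  ${TAR} cvfX - ${NOCS} / | ${GZIP} --fast -c | \
      ${DD} obs=32k of=/dev/rmt/0 2>> $HISTORICO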
One respondent was against using anything except ufsdump with
a tape drive that does compression in firmware. Sorry, such a drive costs
money and gzip does not, and in practice gzip is about as reliable.
Finally, Foster sent me a nice little script using GNU tar.
Using GNU tar with the default (very small) output block size and the
--gzip switch was already faster than the first approach, and I hope that
after some tuning it will be faster than anything else. In any case,
remember that tapes like to stream on a constant flow of reasonably sized
(32k, maybe 64k) data blocks.
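I am not reproducing Foster's script here, but the heart of the GNU tar
approach is a single command roughly like the one below. The gtar path is
my guess for this machine, and the blocking factor is the tuning knob just
mentioned (64 records of 512 bytes = 32k per write):

  # GNU tar packs, compresses (--gzip) and writes the tape itself;
  # --blocking-factor=64 means 64 x 512-byte records = 32k blocks.
  /opt/gnu/bin/tar --create --gzip --blocking-factor=64 \
      --exclude-from=${NOCS} --file=/dev/rmt/0 /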
Regards,
Alfredo Sola
System administrator