SUMMARY - II: ufsdump question

From: Todd Urie <>
Date: Tue Mar 19 2002 - 09:32:05 EST

Apparently my first summary on this subject was a bit premature.  Since I
sent that summary, I have received numerous additional responses.  Some of
the new responses were restatements of responses that I sent in my first
summary.  However, there were plenty of responses that added what some might
consider valuable information.  Therefore, here is the sum total of
everything that I received.  This isn't a TRUE summary because I have not
yet had time to go through the multitude of responses and compile everything
into a single summary.  I will be doing that shortly.  If anyone is
interested in the result, let me know and I'll forward it.

Once again, I would like to thank all those that replied and pay my
compliments to the people of this list in general.  It is absolutely great
that someone like myself, who knows a little about Unix (enough to be
dangerous :-) ) but is by no means an expert, can essentially have the
benefit of years of experience of others.  This list will, in effect, be
able to cut considerable time off of my learning curve.  It won't make me an
overnight expert, but it will hopefully keep me from making the same
mistakes that others have already made.  I'll have to be more original in my
mistakes.

Here is every one of the responses that I received, including those
published in the previous summary:


I have been using UFSDUMP on live systems for a few years now.  I do
not need to unmount anything; it all works perfectly well as is.  I have
done many ufsrestores also and have had no problems.

Peter Duncan
Both ufsdump and tar will have problems with files that change as they are
read. In most cases this is not important with log files, but it is very
important with db files.

John Julian
ASI, Ameritech, SBC
You don't have to unmount a filesystem to use ufsdump.

I think it's a case of weighing up the pros and the cons.
If you don't umount a fs before doing a dump, any open file will not
be backed up.
But, what are the chances that that open file will be accidentally deleted
by the user?
I've used only ufsdump since I started sys admin 10 years ago and never
bothered with umounts. Never had a problem restoring.

Steve Elliott
Please think about what you've said here.

Why would you expect 'tar' backups to be any more consistent on a mounted,
running filesystem than 'ufsdump' backups?  Both run the risk of generating
inconsistent backups on an active filesystem.  Most folks accept this
possibility when using 'ufsdump' by scheduling it to run at off-hours, when
a system can be expected to be in as near a quiescent state as possible.

If this isn't acceptable, you'll need to look into snapshot-type backups
sold by third parties.
Tim Evans, T.Rowe Price Investment Technologies
> I have been setting up some backup scripts for a company and have
> contemplated using 'ufsdump'.  However, the systems involved are production
> systems and unmounting file systems is out of the question.  Therefore, I
> had ruled out 'ufsdump' because the documentation says that it requires
> systems to be unmounted.

Untrue. It only requires a relatively "calm" filesystem during the dump.
It can fail if the FS is very active.

> I wanted to use 'ufsdump' because of the incremental backup capability that
> is built in.  Currently I am using the old standby 'tar' and will add
> incremental capability by addition of 'find -mtime'.

You may also look at It can utilize either
ufsdump or tar on backend,
and it supports incremental backups for both.
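
The incremental capability in question works through dump levels: level 0 is a
full dump, and level N picks up everything changed since the most recent
lower-level dump, as recorded in /etc/dumpdates by the -u flag.  A minimal
sketch; the filesystem and tape device are placeholder assumptions, and the
function only prints the commands it would run:

```shell
# Sketch of ufsdump's dump-level scheme; /export/home and /dev/rmt/0n are
# placeholders.  'u' records the dump in /etc/dumpdates so later
# incrementals know their baseline; this prints rather than runs.
dump_level() {
    echo "ufsdump ${1}uf /dev/rmt/0n /export/home"
}

dump_level 0   # full dump (e.g. weekly)
dump_level 1   # everything changed since the last lower-level dump (e.g. daily)
```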

Some people just don't care about what man-pages
say ;) So, if you really want to make sure you
have a consistent filesystem backup either unmount
or use a snapshot facility like Instant Image or
fssnap which comes with Solaris 8 01/01 and later.

Solaris 8 System Administration Supplement

if a file system is mostly inactive during the backup, you won't have a
problem using ufsdump on a mounted file system.  just don't run it on a file
system with an active database!

however, if you're using solaris 8,
you can take advantage of the UFS snapshot feature -- you take a snapshot of
the file system which takes up very little space and back up the snapshot.
there are some minor quirks here and there but it works pretty well.  man
fssnap for more info.  this feature may also be available in a later patch
release of solaris 7, but i'm not sure.

We use ufsdump on a number of servers, both live and development, and have
never unmounted the filesystems. I seem to remember asking Sun about this and they
said that it was more of a guideline than a rule. We've not had any problems
resulting from this and have restored data from the media in question.

If you're using Solaris 8 then you can apparently use ufsdump in conjunction
with a snapshot of the filesystem, I think... I say this because I don't have
Solaris 8 actually running!

Hope this helps...

the unmounting of filesystems is a total ideal world thing - i don't
think anyone who uses ufsdump does this - ufsdump merely suffers from all the
same problems that all other backups face on live filesystems (ie. files
growing, changing and being created or deleted during the duration of the
backup) - it's just that it works at a lower level and is designed to create
far larger archives.
Anyway, we use it here over night and have no problems with it (do some
backups with tar and have more problems with them) - just pick a time when
the filesystem is going to be fairly quiet and you'll be fine.
If there is no time when the filesystem is quiet you should consider looking
at the fs-snap thing in Solaris 8 (all the technical terms here ;) or using
ODS mirrors (splitting the mirror, attaching the unused half somewhere read
only, backing this up, reattaching the mirror) - beyond that you'll have to
start spending more dosh!
Hope that helps!
We clone our boot disks using ufsdump and don't unmount them first. ufsdump
seems to work fine even though the source is mounted.

Hello Todd

Currently we perform backups for large systems without unmounting anything.

It has worked fine for us; the only thing that can happen is that the files
which are open when you are performing the backup are not included.

If you find this ok or you have another answer please submit the SUMMARY

Gustavo A. Lozano
>include unmounting and remounting file systems being backed up.  Is this
>because the unmount / mount is done in another script or because you really
>don't need to worry about unmounting to use 'ufsdump'?

ufsdump doesn't understand the concept of mounting or unmounting; it thinks it
is reading a large file with the same format as a ufs filesystem - it just
happens that most people run it against /dev/rdsk/something, which is a device
file associated with a hard disk - which may be mounted.

What am I on about? well, the point is, you MUST NOT ufsdump mounted
filesystems; you may _think_ that it works, and it may _look_ like it works,
but it doesn't. The kernel may well be caching writes and not have committed
them to the filesystem, or may try to change the filesystem at the point where
ufsdump is trying to read it (at this point you'll see a load of bread()
errors). It all adds up to a horrible mess.

>I wanted to use 'ufsdump' because of the incremental backup capability that
>is built in.  Currently I am using the old standby 'tar' and will add
>incremental capability by addition of 'find -mtime'.

tar is not a backup tool;
neither is dd or cpio. I don't care what anyone else says.

If you *must* backup a mounted filesystem, then use Solstice Backup or
Netbackup, or some third party application. Failing that, consider mirroring
the filesystem with DiskSuite (it's free!), detach the mirror and run the
backup against the detached mirror using ufsdump. It's easier to do than it
sounds.
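
The detach-and-dump approach can be sketched roughly as follows; the
metadevice names (d10 as the mirror, d12 as a submirror) and the tape device
are placeholder assumptions, and the function only prints the commands it
would run:

```shell
# Split-mirror backup sketch (Solstice DiskSuite / ODS); names assumed.
mirror_backup() {
    echo "metadetach d10 d12"                        # detach submirror d12 from mirror d10
    echo "ufsdump 0uf /dev/rmt/0 /dev/md/rdsk/d12"   # dump the detached, now-quiet copy
    echo "metattach d10 d12"                         # reattach; DiskSuite resyncs d12
}
mirror_backup
```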

Just don't fall into the trap of running ufsdumps of your root fs every night
and thinking you're bulletproof - I'd love a ten pound note for every admin
I've spoken to with backups that they've been running for years and never
actually gone through a disaster recovery scenario with (i.e. actually tried
to use them before they *had* to...)

good luck.

       /_____/\ Justin Stringfellow
      /____ \\ \ Senior Support Engineer

ufsdump 'prefers' if you unmount the filesystem, or at least if it is
"inactive".

However, everyone I've spoken to on the topic basically suggests,

-> Only "active files" will have any problems (if you fail to unmount or even
backup a "live" filesystem..)

-> assuming you are dumping system slices, and NOT installing software at the
same time, this typically amounts to log files having some content missing

-> If you are doing "disaster recovery" dumps then ... you really don't care
if a few logfiles are missing a couple of lines, if the backup is enough to
get your system up and running!
your system up and running!

-> For existing free backup options, check: - maybe
something of use is there for you?

Hope this helps a bit,

Tim Chipman
>I have been setting up some backup scripts for a company and have
>contemplated using 'ufsdump'.  However, the systems involved are production
>systems and unmounting file systems is out of the question.  Therefore, I
>had ruled out 'ufsdump' because the documentation says that it requires
>systems to be unmounted.

Well, most of us out there run 'ufsdump' without unmounting filesystems,
because it's not practical. You are exposing yourself to backups with
missing files, or possibly corrupted ones. For instance, if a file was
deleted between when your backup creates a list of inodes and when it
actually reads the file, you will get an error. Similarly, if a file is
created after the inode list, or if the file is locked, it will not make it
onto your backup tape. Finally, if a file is being written by a process
while 'ufsdump' reads it, you will likely find it corrupted on your backup
tape.

However, most people can live with that. Backups are normally run when the
system is "quiescent", in the wee hours of the morning, when the only
processes running are maintenance administrative processes, which create
temporary files that don't need to be backed up.

Hope it helps.

Fabrice "Script It!" Guerini
Blue Martini Software, Inc.
I used ufsdump successfully on over 100 machines without unmounting any of
them. More important, I have restored full systems from these backups with no
problem ;-)

We use ufsdump and we do not unmount the file systems.  I have a 12 hour
window to do backups where there is little or no user activity, though.  I
don't remember, but I think ufsdump will skip open files.  Tested many times
and proven reliable.

Kevin Metzger

We use ufsdump / ufsrestore to do both incremental and full backups of our
production solaris systems - nary a problem noted.  I raised the same
questions when I started here (2 years ago) and was told by the "guy who has
been here forever" that it really doesn't matter - but any open files won't
get saved.  As long as no one is logged in and using files, you're okay.

The script was originally written in 1993 - so we've been using ufsdump for
a while now . . .

Brian Dunbar
ufsdump is the only tool that correctly dumps all file types one
finds in a Solaris environment.  I use it without unmounting file systems all
the time.  There is the risk of missed files, but I can live with that.
Ufsdump maps the directories first, then dumps files.  If a file is deleted
after the mapping pass, but before it gets written to tape, then you get
errors, but nothing terrible happens.

If you have file systems where creation/deletions are frequent, you may
have some unhappy campers when you do a restore after a disk crash.
The -ONLY- way to make sure you get 100% of everything in a 100%
consistent state is to do backups with the system booted single user,
regardless of what backup tool you use.

Using tar is not a good thing, IMHO.  There are many file types it cannot
handle correctly (fd files and sparse files come to mind), and as a result
you risk possibly significant data loss when a disk dies.

My 2 cents,
Ric Anderson (

Hi Todd,

It can be especially dangerous to use ufsdump when new files are
created while you're dumping the file system.  One way around it (that
has worked for me in the past), is to remount partitions read-only.
It means that for the duration of the backup, it is not possible to
change any information.  Web servers can't write to the logs, but can
still serve web pages.

For databases you cannot do this.

If something in the file system changes while you're using ufsdump, you
can end up with a corrupted file system on tape, that won't contain
information you think you've backed up.

If you are going to use tar, I'd suggest that you install the GNU tar,
and use the file reference time stamp, instead of find / -mtime.
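
The "file reference time stamp" idea can be made concrete with GNU tar: keep
a stamp file touched at each full backup, and archive only files modified
since the stamp's timestamp.  A runnable sketch, with demo paths only (the
directory layout is an assumption for illustration):

```shell
# Demo of incremental backup keyed off a stamp file's mtime (GNU tar).
demo=$(mktemp -d)
mkdir "$demo/data"
touch -d '2000-01-01' "$demo/data/oldfile"   # older than the stamp
touch -d '2010-01-01' "$demo/stamp"          # touched after the last full backup
touch "$demo/data/newfile"                   # changed since the stamp

# --newer-mtime compares modification times only (unlike -N, which also
# looks at inode change times); date -r prints the stamp file's mtime.
tar -cf "$demo/incr.tar" \
    --newer-mtime="$(date -r "$demo/stamp" '+%Y-%m-%d %H:%M:%S')" \
    -C "$demo/data" .
listing=$(tar -tf "$demo/incr.tar")
echo "$listing"
```

Only newfile (plus the containing directory entry) lands in the archive;
oldfile is skipped.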

Before we moved to arkeia, this is what we did:

bring down machine.  Perform ufsdump on every partition to tape.  Label
tapes as zero-level-ufsdump.  From that point onwards, use tar (every
day / week).  If there are databases that need to be backed up, include
a mechanism that will keep them from writing data while you're backing
up the database files, after you have completed the backup, make sure
that the databases are fully working again.

Use your zero-level-ufsdump on another machine to load OS and
applications.  Use your tar-ed tapes to get the information up to date,
and check to make sure that your backup strategy is working.

I cannot stress this enough: it's not enough to be able to read the
tapes.  You have to KNOW that you can get a service up and running from
the tapes.

When you've gone through that exercise, report to your upper management
how LONG it takes to get from a broken machine to a working rebuild
with different hardware.  This is how I got myself a tape robot, arkeia
software and spare parts. :-)



From: Mike Salehi


     I have been using ufsdump for a long time; some of the things they say
seem to have been true for UFS before FFS, and it has always been OK for me.

You don't _have_ to unmount to use ufsdump, but it is _recommended_.

I've used ufsdump to back up data while the system was live with no real
issues, but you do have to be careful when you're using volatile data which
must be consistent (e.g. databases).

John Riddoch
Unix Project Engineer
I use ufsdump for a number of clients and never unmount the filesystems.
I've never had a problem restoring.  Make sure the system is quiet during
the backup.  If you have a filesystem-based database and cannot shut it
down, export to disk and backup the export.

> ufsdump 'prefers' if you unmount the filesystem, or at least if it is
> "inactive".
> However, everyone I've spoken to on the topic basically suggests,
> -> Only "active files" will have any problems (if you fail to unmount or
> even
> backup a "live" filesystem..)

Actually, that somewhat misses the point.  Yes, "active" files are
always a challenge to archive, but ufsdump has a different subtlety from
tar and others.

Ufsdump saves inodes (and directories) in the first passes, then later
goes and saves data from the inodes.

That means that any directory changes (file deletions, file renames,
directory deletion), will change the data storage.

Most of these problems are pretty minor, but in rare situations you
could restore data from one file into another (usually due to inode
mv's).  The amount of data change between the first pass and the last
pass is the critical bit.

Because 'tar' and other in-band programs "walk" the filesystem at the
same time that the data is being stored, they are less susceptible to
such problems.  Of course you can do similar things to them.  I don't
know what would happen if you mv'ed a directory while tar was "inside" it.

In both situations, the more backups you have, the better.  That way a
single unusual event doesn't leave you without data.

Darren Dunham                                 
Unix System Administrator
Hi Todd

If you want to really restore in a disaster recovery situation, I would
strongly urge you to start using ufsdump. tar is fine for saving away
sections of the filesystem, but when you need to recover on a partition by
partition basis ufsdump is your tool.

As for unmounting a filesystem this is not necessary with the following
caveat. Open files will not be saved, hence the blanket recommendation to
unmount the filesystem. Most people either

1) use it during the quiet hours when no one is using the system (not always
possible with web servers etc).

2) Use a snapshot facility and then backup the snapshot.

3) If you are using Oracle or similar, perform an online backup within Oracle
to a separate area and then use ufsdump to back it up.

4) I have also seen, when using a mirrored system, the mirror taken offline
and remounted to allow a backup to be made, and then the mirror resynced
back into the system. Not really recommended for critical machines, but one
way of getting a 'quiet' filesystem.

As a minimum you should make sure you have a ufsdump of the system, e.g. /
and any other system partitions such as /usr if separate. This will make
recovery oh so much simpler than tar etc.
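
To illustrate why a dump of / pays off, here is a rough sketch of restoring a
root filesystem from a level 0 dump after booting from install media; the
device names and mount point are assumptions, and the function only prints
the commands it would run:

```shell
# Bare-metal restore sketch from a level 0 ufsdump of /; all names assumed.
restore_root() {
    echo "newfs /dev/rdsk/c0t0d0s0"             # recreate the filesystem
    echo "mount /dev/dsk/c0t0d0s0 /a"           # mount it on a scratch point
    echo "cd /a; ufsrestore rf /dev/rmt/0"      # restore the full dump in place
    echo 'installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0'
    echo "umount /a; fsck /dev/rdsk/c0t0d0s0"   # unmount and verify
}
restore_root
```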

Hope this helps.

Peter Stokes
Sorry - I didn't see your original post.  With Solaris 8 the fssnap is a
very slick option, as implied by one of your responses.  I wrote this
article for our customers; it might be of use to you:

On 2002-03-18 09:21:39 -0500, Todd Urie wrote:
> I have been setting up some backup scripts for a company and have
> contemplated using 'ufsdump'.  However, the systems involved are production
> systems and unmounting file systems is out of the question.  Therefore, I
> had ruled out 'ufsdump' because the documentation says that it requires
> systems to be unmounted.

The requirement is not enforced in any way, except that changes made to the
filesystem while the ufsdump is in progress may result in the backup being
incomplete and/or inaccurate. It is perfectly safe to dump a filesystem
while it is mounted read-only, and reasonably safe to dump a filesystem
while the system is idle.

If you're running Solaris 8 (or 9, I suppose) you could also use the new
command fssnap, and ufsdump a snapshot of the filesystem. This is meant to
be perfectly safe.
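
The snapshot sequence can be sketched like this; the filesystem, backing
store, snapshot device, and tape device are all placeholder assumptions, and
with DRYRUN left at 1 it only prints what it would run:

```shell
# fssnap-then-ufsdump sketch (Solaris 8+, run as root); names assumed.
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

snapshot_backup() {
    fs=/export/home              # filesystem to back up (assumed)
    bs=/var/tmp/snap_bs          # backing store for copy-on-write blocks
    # Create the snapshot; fssnap prints the snapshot device name.
    run fssnap -F ufs -o backing-store=$bs $fs
    # Dump the raw snapshot device instead of the live filesystem
    # (assuming this is the first snapshot, hence /dev/rfssnap/0).
    run ufsdump 0uf /dev/rmt/0 /dev/rfssnap/0
    # Remove the snapshot and its backing store when the dump completes.
    run fssnap -d $fs
    run rm -f $bs
}

snapshot_backup
```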

> I wanted to use 'ufsdump' because of the incremental backup capability that
> is built in.  Currently I am using the old standby 'tar' and will add
> incremental capability by addition of 'find -mtime'.

If you're going to use find, you may want to use cpio or pax rather than
tar.
Note that the inode change time (-ctime) may be a better choice than the
modification time (-mtime). I believe ufsdump effectively uses the ctime.
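
The mtime/ctime distinction is easy to demonstrate; this small runnable demo
(temporary files only) shows a change that -ctime catches but -mtime misses:

```shell
# touch -d sets mtime back in time, but the inode change time (ctime)
# becomes "now" as a side effect of touching the file at all.
d=$(mktemp -d)
touch -d '2000-01-01' "$d/f"      # mtime: 2000; ctime: now
mtime_hits=$(find "$d" -type f -mtime -1 | wc -l)   # modified in last day: none
ctime_hits=$(find "$d" -type f -ctime -1 | wc -l)   # inode changed in last day: one
echo "mtime-based finds: $mtime_hits, ctime-based finds: $ctime_hits"
```

So an incremental driven by -mtime would skip this file even though its
metadata just changed, which is why ufsdump's ctime-based behavior is safer.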
Hi Todd,

Yes, the documentation says that you need to unmount and mount the file
system to use ufsdump, but in our production environment we have been using
ufsdump for the last couple of years without any problem. We don't unmount
the file system to use ufsdump. Moreover, I have used that DLT tape to do
Disaster Recovery and it was absolutely successful. Make sure, if you are
taking a backup of any database, that you use the native hot backup and
export in addition to ufsdump.


Hi there-
In a perfect world you unmount the filesystems.
I use UFSDUMP on production servers and NEVER umount.
Make sure you put
in your script.  This will sync up the filesystem and
is usually "good enough".
I have restored several times and have never had a
problem with ufsdump.
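
The command itself was lost from the archived message above; a common choice
for "syncing up the filesystem" before a dump (an assumption on my part, not
the poster's original line) is sync, or lockfs -f on Solaris.  As a printed
sketch:

```shell
# Assumed reconstruction -- flush pending writes before the dump starts.
pre_dump_flush() {
    echo "sync"                    # schedule buffered writes out to disk
    echo "lockfs -f /export/home"  # flush the filesystem's pending transactions (Solaris)
}
pre_dump_flush
```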

Good Luck-
Eddie Rozier
I have been using ufsdump to do my backups without any problems.

The real problem (that may force you away from it) is that if the files
   being backed up are changed, the backup may be left in an inconsistent
   state (a file backed up as half new, half old).

I have more of a developer system (actually an "educational faculty" user
base), so I do my backup in the middle of the night when my users are not busy.

Michael Schulte

If it's a Solaris 8 system, look at fssnap to create a snapshot.

If not, you really need to unmount or you'll be saving files in an unknown
state (probably unusable after a restore).


Curtis Preston has a book about
Unix Backup and Recovery:

which had a whole chapter on the dangers
of running ufsdump on a mounted filesystem.
He concludes that the danger is very small.

It is a very good book, btw.
ufsdump is fine.

the Solaris Sysadmin Manuals do say repeatedly that you must make the
file systems inactive. this is a COA statement. they don't want to be in
the position of having something not recover and being blamed for it.

the O'Reilly Unix Backup & Recovery book by W. Curtis Preston covers
this in a bit more detail. They basically say that this is going to be a
potential problem with any backup utility. There is a chance that in the
course of a backup, something in a file or the filesystem will change,
and your backup will not properly reflect the state of the system. You
might recover a file that shouldn't be there, or you might find a file
to be corrupted. It can affect directories. However, the potential for
this is pretty low and people who dump live filesystems that are fairly
idle will generally get pretty good backups. You would experience a
similar success or failure using tar or cpio.

cpio and tar affect the inode time stamps, ufsdump does not.

(I've quoted some phrases of that out of Preston, pp.662-663)


Chris Hoogendyk


Todd Urie
sunmanagers mailing list
Received on Tue Mar 19 08:33:18 2002

This archive was generated by hypermail 2.1.8 : Thu Mar 03 2016 - 06:42:37 EST