Many people responded and the majority said this should work, and I
agree. Some even stated that they currently do this exact thing.
Several people suggested rsync, but I didn't want to introduce any more
software than necessary into this environment.
One person stated that this can't be done, but I think he was looking at
it in reverse, trying to dump an nfs filesystem to a local filesystem.
I'm going the other way. The perplexing part is that it pukes as soon
as it starts to write to the nfs partition, so a 2GB limit seems
unlikely, especially since we have dumped this to tape successfully
many times in the past 3 months.
I am attempting this as root (some people asked) and the nfs filesystem
is exported with root permissions for that machine so permissions are
not the issue. I can create files in this structure and have tar'd
files to this fs before with the same permissions.
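For reference, the export on the filer looks roughly like this (paraphrasing
from memory; the exact option syntax depends on the ONTAP version):

/vol/backup   -access=fileserver,root=fileserver

where fileserver is the E250 doing the dump.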
I knew about dumping to stdout and piping to the fs (several people
suggested that as well, thanks), and this does work to a point (again
making permissions and size limitations unlikely). The 20GB volume has
no problem with this syntax and has completed successfully, as well as
updating /etc/dumpdates. The 96GB volume did not fare so well; it seems
to have finished but ended with the following messages:
DUMP: 97.33% done, finished in 0:10
DUMP: 190566654 blocks (93050.12MB) on 1 volume at 4280 KB/sec
DUMP: DUMP IS DONE
changing volumes on pipe input
abort? [yn] n
changing volumes on pipe input
abort? [yn] y
dump core? [yn] n
It did not update the /etc/dumpdates file either; and the performance
leaves everything to be desired considering we're on a gigabit backbone
and these guys are plugged directly into a BigIron 8000, but that's
another issue. I am unsure what the above message means for my dump; if
anyone has light to shed on it I would appreciate it.
Just for the record here's what I was trying to use and what I ended up
using respectively:
/usr/sbin/ufsdump 0uf /netapp/backup /yuni/vol1
/usr/sbin/ufsdump 0uf - /yuni/vol1 | ( cd /netapp/backup; ufsrestore xf - )
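For the nightly incrementals the plan is to use the same pipe with a higher
dump level, something along these lines (untested here so far; the ufsrestore
'r' function is the one intended for layering incrementals on top of a full
restore):

/usr/sbin/ufsdump 1uf - /yuni/vol1 | ( cd /netapp/backup; ufsrestore rf - )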
I still don't have any idea why I get the "cannot open" error shown
below (original query), but someone stated that I wasn't dumping to a
dumpfile. Maybe so; normally you dump to a device file (a tape), and the
command I used that works was dumping to stdout. It could be that I can't
dump to a directory but need to go to a "file" instead. That makes some
sense to me, except there are people who said they are doing this
currently.
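If that theory is right, the fix would presumably be to point the f argument
at a regular file on the nfs mount rather than at the directory itself,
something like this (vol1.dump is just a made-up name; I haven't tested it):

/usr/sbin/ufsdump 0uf /netapp/backup/vol1.dump /yuni/vol1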
Thanks again to all who responded.
~JK
Jeff Kennedy wrote:
>
> Sorry, several people mentioned that I didn't supply the actual
> command. Here it is:
>
> /usr/sbin/ufsdump 0uf /netapp/backup /yuni/vol1
>
> Thanks.
>
> Jeff Kennedy wrote:
> >
> > I guess the first question is: will that work? I have an E250 with 2
> > A5000 arrays acting as a file server for one of our engineering groups.
> > I am trying to move all of the data contained in 2 volumes to a NetApp
> > server that is nfs mounted to this file server.
> >
> > Here's the layout:
> >
> > Filesystem kbytes used avail capacity Mounted on
> > /proc 0 0 0 0% /proc
> > /dev/dsk/c0t0d0s0 70282 51388 11866 82% /
> > /dev/dsk/c0t0d0s6 1030951 694041 275053 72% /usr
> > fd 0 0 0 0% /dev/fd
> > /dev/dsk/c0t0d0s1 30689 11050 16571 41% /var
> > swap 1874288 8 1874280 1% /tmp
> > /dev/vx/dsk/rootdg/vol01
> > 116020247 96466918 7951305 93% /export/home
> > /dev/vx/dsk/yuni/vol01
> > 33152361 19988239 9848886 67% /yuni/vol1
> > /dev/vx/dsk/yuni/vol02
> > 33152361 9 29837116 1% /yuni/vol2
> > /dev/vx/dsk/yuni/vol03
> > 33152361 9 29837116 1% /yuni/vol3
> > netapp:/vol/backup 152526276 98323640 54202636 65% /netapp/backup
> >
> > I know I can use tar to do this but it's not going to be a "copy it over
> > and they'll start using the new mount" type of thing. I need to run a
> > full copy initially and then run incremental copies every night
> > afterwards until their project is finished and they can move over
> > permanently. This will probably be a month or more. ufsdump would be
> > perfect for this but it isn't working. Here's what I get....
> >
> > DUMP: Level 0 dump on Fri Sep 08 07:13:13 2000
> > DUMP: Writing 63 Kilobyte records
> > DUMP: Date of this level 0 dump: Mon Sep 18 20:00:04 2000
> > DUMP: Date of last level 0 dump: the epoch
> > DUMP: Dumping /dev/vx/rdsk/yuni/vol01 (fileserver:/yuni/vol1) to
> > /netapp/backup.
> > DUMP: Mapping (Pass I) [regular files]
> > DUMP: Mapping (Pass II) [directories]
> > DUMP: Estimated 41769046 blocks (20395.04MB).
> > DUMP: NEEDS ATTENTION: Cannot open `fileserver:/netapp/backup'. Do
> > you want to retry the open?: ("yes" or "no") DUMP: The ENTIRE dump is
> > aborted.
> >
> > As you can see, the volume is a Veritas volume but I don't think that
> > has anything to do with it. I know that ufsdump references the raw
> > device, but you can also dump to a remote tape, although that references
> > a network host with a raw device...
> >
> > Any help or ideas would be appreciated.
--
Jeff Kennedy
UNIX Systems Administrator
AMCC
jkennedy@amcc.com