SUMMARY: Caching NFS

From: kevanh@lsl.co.uk
Date: Wed Apr 12 1995 - 09:52:47 CDT


REQUEST THAT I MADE
+++++++++++++++++++

We are trying to speed up our compile and link times. We have a LARGE
system written in "C" (about 1.5E6 lines). At present all the source code
and libraries are on NFS servers. A library file is shared between
programmers if all the "C" files it depends on are frozen; otherwise
each programmer has their own version of the library.

We have a lot of different Unix systems in use, but are mostly buying
SunOS 5.x systems when we buy new hardware.

We are thinking of putting large local disks on each machine:

  We could write a set of scripts to copy (and keep up to date) the
  source and libraries to the local disk.

  We could use the caching NFS in SunOS 5.x.

  We could buy AFS and use that (How much?).

If you have used the caching NFS support in SunOS 5.x, how well does
it work? Is there anything we need to watch out for?

Do other vendors (DEC, HP, IBM) have anything like it? Are any 3rd
parties selling caching NFS software?

Thanks for your help.

Ian Ringrose
ianr@lsl.co.uk

WHAT MY TESTS SHOWED:
+++++++++++++++++++++

I found that the best speed-ups, in order, were:

  using a link farm instead of 180 "-I" options on the compile line (see the sketch after this list).
  putting the executable that you are linking into on the local disk.
  putting all the ".a" files on the local disk.
  having a copy of all the header files on the local disk.
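
To illustrate the link farm idea: a rough sketch, assuming the headers live under hypothetical directories such as /nfs/src/*/include. One local directory of symbolic links replaces the long list of "-I" options:

  # Build one local directory of symlinks to every header
  # (the paths here are made-up examples, not our real layout).
  mkdir -p /build/include
  for dir in /nfs/src/*/include
  do
      ln -s "$dir"/*.h /build/include 2>/dev/null
  done

  # The compile line then needs only one search directory:
  cc -I/build/include -c foo.c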

I have not tried CacheFS yet, as most of our systems at present run ULTRIX,
but we will look at it when we have more Suns.

Ian Ringrose
ianr@lsl.co.uk

SUMMARY:
++++++++

From: checkedg@uk.ac.birmingham.elec-eng (Dr. Dave Checketts)

Ian,

I have just gone through a fairly long session with Sun about a problem
which I encountered with caching NFS under Solaris 2.4.

They have submitted it as a bug but stated that caching IS working as
expected. They suggested that my complaint will be looked at for future
improvements.

I found that, having cached a file, everything works fine until you change
the original on the server. The cache detects this (and even appears to
update the local version). The problem is that it DOES NOT USE the new
version and keeps re-reading from the server. The only way I have found to
get around this problem is to 'cd' to the cached directory and then try a
'umount'. This will fail with a "filesystem busy" error, but will have
resulted in the newly cached version being used from that point onwards.
If you use the automounter, then this problem is somewhat reduced, because
the automounter will do a umount after a period of non-use anyway.
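
A minimal sketch of that workaround, assuming a hypothetical cache-backed mount point /src:

  # After the original file has changed on the server:
  cd /src
  umount /src      # fails with "filesystem busy"...
  # ...but from here on the newly cached copy is the one that gets used.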

Regards

Dave

***************************************************************************
Dr. Dave G. Checketts | E-Mail d.g.checketts@bham.ac.uk
Computer Officer |
School of Elec. & Elec. Eng., |
University of Birmingham | Telephone: 021 414 4322
Birmingham, B15 2TT, | Fax: 021 414 4291
England
***************************************************************************

From: Anat Gafni (gafni@acsc.com) <gafni@com.acsc>

I was forwarded your message about your need for a caching file-system.
We may have just the thing for you. ACSC has a product called Personal
Data Cache that is available on the IBM RS/6000. In one month it will be
available on Sun and Windows.

It caches the files you need locally (automatically), and updates the
server copy automatically whenever the file is changed locally (without
necessarily deleting the local copy). We'll be happy to give you
additional information, which we can mail or fax to you.

You may call us at:
(310) LET ACSC (same as 538-2272)

You can communicate directly with the person who handles that product by
sending e-mail to:
monty@acsc.com

Regards
Anat

From: atalwar@com.sbi.bravura (Aditya Talwar)

Hello.

You might want to use a distributed make tool like gnumake or dmake.
For this, all your client environments should be similar for compilers,
etc. With these tools, the compiles can be done on the clients, and all
the *.o files can be linked on the server or on the machine where the
make was originated.

The overall compile time is reduced considerably.
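
As a rough sketch of the idea (the job count and target name are made up, and dmake's distributed mode needs its own host configuration, which isn't shown here):

  # GNU make: run several compiles in parallel on one client
  gmake -j4 all

  # Sun's dmake can additionally farm jobs out to other machines,
  # provided the compilers and paths are the same on every host.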

aditya talwar
atalwar@mhnj.sbi.com

From: merccap!saieva@net.uu.uunet (Salvatore Saieva)

All our local stations have a 1.05GB hard disk internally that's mounted at
/build. Developers have their own local software environments that they can
muck with as they wish. We keep each environment up-to-date by using SCCS
and a shared software library.
Our makefiles will pull down anything that's been changed in the SCCS
libraries on demand. This gives us (in effect) multiple beta- or alpha-type
versions of our software that can each be developed independently of the
others.
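
A minimal sketch of such a rule (file names are hypothetical; recipe lines must start with a tab):

  # Pull the latest source out of SCCS on demand, then compile it.
  # GNU make and SysV make also have built-in rules that run the
  # "get" step automatically when SCCS/s.foo.c exists.
  foo.c: SCCS/s.foo.c
          sccs get foo.c

  foo.o: foo.c
          cc -c foo.c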

/build on each machine, by the way, can be automounted by any other machine.
This allows others to easily try someone else's version of the software.

The best way to speed up your links is to use PureLink by Pure Software, 408-720-1600 (I talk to Katie there). Awesome product.

Sal.

From: brobbins@COM.Newbridge.mako (Bert Robbins)

I think that you should look at CacheFS for Solaris 2.x. This will
enable all of your developers' desktop Solaris 2.x workstations to
do caching, and the servers can be of any flavor.

--
Bert Robbins                             Newbridge Networks Inc.
brobbins@newbridge.com                   593 Herndon Pwky
703 708-5949                             Herndon, VA 22070

From: Birger.Wathne@no.sdata.vest (Birger A. Wathne)

If you write to this NFS-mounted file system during the compiles, you may need Prestoserve instead. It consists of a driver and a SIMM or board added on the server side, which caches NFS writes in battery-backed RAM. I *think* cachefs will only cache reads (ideal for shared libraries, applications, CD-ROMs, etc.).

NFS read performance is usually fast enough for most uses. But NFS writes are synchronous, so they are usually the bottleneck.

If your problems are on NFS reads, try cachefs (it should reduce your network load anyway). Otherwise, try Prestoserve on the server side.
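
One quick way to see which side dominates on a client (just a rough check; the exact output varies between NFS implementations):

  # Show client-side NFS call counts; compare the read, write and
  # lookup/getattr columns to see where the traffic actually goes.
  nfsstat -c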

From: Jim Reid <jim@uk.ac.strath.cs>

If you're going to have lots of simultaneous compiles and links, it'll be best to give each workstation a local copy of the libraries. The UNIX linker makes a lot of unstructured reads and writes when it generates an a.out, which can put a very unpleasant NFS load on the server.

From: john@edu.iastate

ianr <ianr@lsl.co.uk> wrote:
}We are trying to speed up our compile and link times, we have a LARGE
} We could buy AFS and use that (How much?).

I think you're looking at about US$4K to get started, up to about US$30K for a site-wide license. I presume sales@transarc.com would be happy to give you details.

John (not affiliated with transarc except as a customer)

From: Joseph P DeCello III <decello@edu.msu.cpp.beal>

> We could write a set of scripts to copy (and keep up to date) the
> source and libraries to the local disk.

If you do this, see 'man rdist'. I use it to keep some filesystems in sync each night between Solaris and SunOS boxes.
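
A minimal Distfile sketch for that approach (host and directory names are made up):

  HOSTS = ( ws1 ws2 ws3 )
  FILES = ( /build/include /build/lib )

  ${FILES} -> ${HOSTS}
          install ;

  # run nightly from cron, e.g.:  rdist -f Distfile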

> We could use the caching NFS in SunOS 5.x

PrestoServe works well, and it's invisible to you.

> We could buy AFS and use that (How much?).

Don't know about this, but it's popular in the computer lab.

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Joseph P. DeCello III      | Opinions expressed here are mine
Computer Assistant         | alone and do not reflect those of
Michigan State University  | MSU and/or Campus Park and Planning.

From: doyle@OEC.COM (Jim Doyle)

: ianr <ianr@lsl.co.uk> wrote:
: }We are trying to speed up our compile and link times, we have a LARGE
: } We could buy AFS and use that (How much?).

DCE/DFS is also available for the Solaris platform; Transarc sells that as well.

We've had very good luck using this filesystem. Administration is quite easy.

In either case, I think AFS or DCE/DFS is the way to go... Too many of my friends who work as administrators spend their work-weeks babysitting and tweaking NFS-based environments. Waste of time IMHO.

-- Jim

From: leclerc@com.slb.asc.austin

Ian,

Have you thought about exporting your frozen code as a read-only NFS directory? I think the cache algorithm works better for a read-only directory.
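
For example, on a SunOS 5.x server (the directory name is hypothetical):

  # Export the frozen tree read-only so clients can cache it safely
  share -F nfs -o ro /export/frozen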

just an alternative to mention ... Francois

From: Shane.Sigler@COM.Sun.Corp (Shane Sigler)

There are 2 possibilities using the Solaris machines.

1. Add Prestoserve NVRAM to the servers; this will speed up writes on the server.

2. If most of the data that the clients use to build the software is read-mostly, you could use CacheFS on the clients so that they don't have to read everything over the net every time they use it. This is only available on Solaris 2.3 and up. It can also cut down on the load on your network as well as on the back-end servers.

A combination of both of these would be a good thing.

CacheFS does most of what AFS does and it doesn't cost anything.

I use CacheFS on our /usr/dist (our /usr/local) and it works quite well; the cache is also consistent across reboots, which helps.
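
For reference, the setup is roughly as follows (the cache directory and server name are hypothetical):

  # Create the front file system cache once, then mount the NFS
  # file system through it.
  cfsadmin -c /var/cache/fscache
  mount -F cachefs -o backfstype=nfs,cachedir=/var/cache/fscache \
        server:/usr/dist /usr/dist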

Shane

From: Mark <mark@au.com.lochard>

Hi Ian,

Another option we are currently looking at is installing a FASserver to handle all the NFS operations and provide significant disk space with a high degree of reliability and negligible downtime.

Email info@netapp.com for details.

From: ericb@com.telecnnct (Eric William Burger)

> We could write a set of scripts to copy (and keep up to date) the
> source and libraries to the local disk.

You'll never keep it up to date.

> We could use the caching NFS in SunOS 5.x
>
> We could buy AFS and use that (How much?).

Caching NFS is essentially AFS.

> If you have used the caching NFS support in SunOS 5.x, how well does
> it work? Is there anything we need to watch out for?

It works well for accessing source. It would be terrible for files that update often, like .o's.

> Do other vendors (DEC, HP, IBM) have anything like it? Are any 3rd
> parties selling caching NFS software?

AFS.

--
-- Eric William Burger        -- Eric.Burger@telecnnct.com --
-- The Telephone Connection   -- Tel. +1 301/417-0700      --
-- 15200 Shady Grove Road     -- Fax. +1 301/417-0707      --
-- Rockville, MD 20850-3218   -- U.S.A.                    --

From: Kevin.Sheehan@au.com.uniq (Kevin Sheehan {Consulting Poster Child})

Two things to think about before you go to that trouble. We looked at a client's NFS usage for compilation with snoop and nfswatch, and found some things that caused a huge blowout in performance.

1) using symbolic links from one NFS mount to another.

2) -I paths that searched for things like types.h all over the place
   (and thru the above links) instead of finding common files first.

3) bad executable paths - shells cache entries, but make/compilers don't,
   so they search the whole path. In this case, the compilers were in the
   last component of $PATH and had to look thru several NFS filesystems
   the hard way first.

Bottom line:

for a given file, the symbolic links created about 20 times the needed traffic (and small packets cost the same amount of time as large packets in a round trip...).

for a badly done include path, the lookup and search would create about 30-50 transactions before it did the read of interest.

Path lookups (and getattrs and such) were about 1/30th of optimal, compared with specifying the full path name.

In short, it may be worth looking at your use of the NFS filesystem first.
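
If you want to repeat that kind of measurement, a rough sketch (the interface name, capture file and target are made up):

  # Capture NFS/RPC traffic during a test build, then count how many
  # LOOKUP calls were needed before the reads of interest.
  snoop -d le0 -o /tmp/nfs.cap rpc nfs &
  SNOOP=$!
  make testprog
  kill $SNOOP
  snoop -i /tmp/nfs.cap | grep -c LOOKUP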

l & h, kev


