My original question was:
<> I have a quick question. I have a Sparc10 that is used as an NFS
<> server and nothing else. The Sparc 10 is a 40 MHz machine that
<> has 96MB RAM, 256MB swap. Because it is an older model SS10,
<> I cannot add a second CPU, so I have to get as much as I can out of
<> the one that is installed, and eliminate any possible I/O bottlenecks.
<>
<> What processes can I eliminate or not run at boot up that will increase
<> performance?
<> I have already:
<> - added /etc/defaultrouter
<> - mv'd S88sendmail to _S88sendmail (so sendmail won't start up)
<> - nfsd's are set at 64
<> - installed Prestoserve Sbus card
<>
<> Is there anything else that I can do? I do not subscribe to this newsgroup
<> anymore, but I will post a summary. Thank you in advance.
Thanks to:
mshon@sunrock.East (Michael J. Shon {*Prof Services} Sun Rochester)
Reto Lichtensteiger <rali@meitca.com>
Rahul Roy <roy@bluestone.com>
Kevin.Sheehan@uniq.com.au (Kevin Sheehan {Consulting Poster Child})
for their responses:
To increase performance you can:
add /etc/defaultrouter
In /etc/rc2.d:
mv S88sendmail to _S88sendmail (so sendmail won't start up)
mv S92volmgt _S92volmgt (stop volmgt)
mv S80lp _S80lp (no printing)
(IMHO I don't think that killing the above processes will make your box scream, but it won't hurt)
In K60nfs.server, set the number of nfsd threads to 64 (this is what I set my medium-use servers to; numbers vary; a command sketch follows after this list)
In /etc/system: (research your own values)
set autoup=240
set ufs_ninode=5000
set ncsize=5000
set maxpgio=960 (!! This value is dictated by the type & # of disks you have, see below.)
(The 960 is my calculated value)
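For reference, the rc-script and nfsd changes above boil down to something like
the following. This is a sketch only; script names and the stock nfsd line can
differ between Solaris 2.x releases, so check your own /etc/rc2.d and
/etc/init.d/nfs.server before copying it:
    cd /etc/rc2.d
    mv S88sendmail _S88sendmail     # no sendmail daemon
    mv S92volmgt   _S92volmgt       # no volume management
    mv S80lp       _S80lp           # no print services
    # K60nfs.server and S15nfs.server are links to /etc/init.d/nfs.server;
    # raise the nfsd thread count in its start section, typically changing:
    #     /usr/lib/nfs/nfsd -a 16
    # to:
    #     /usr/lib/nfs/nfsd -a 64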
From mshon@sunrock.East and rali@meitca.com:
This is from Adrian Cockcroft's book "Sun Performance and Tuning" from
SunPress:
You should probably concentrate on your disk subsystem. Optimize it in
any or all of the following ways:
- add more disks - the more spindles the better
- add more controllers
- load balance the disks and the controller traffic
- use DiskSuite or Veritas to:
  - stripe: potentially put all of your disks into a single large
    stripe! This tends to load balance very evenly and applies the
    maximum number of spindles to the job.
  - mirror (yes, this helps too)
- use more than one swap area (even though you probably are not swapping):
  - the first should be the same size as RAM (for the system crash dump)
  - put the others on other disks and other controllers
  - increase maxpgio accordingly
(a DiskSuite/swap sketch follows after this list)
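Along those lines, here is a hedged sketch of a DiskSuite stripe plus an extra
swap area. The metadevice name, interlace size, and c?t?d?s? device names are
made up for illustration, so substitute your own:
    # Solstice DiskSuite: one stripe across four disks, 64 KB interlace
    # (assumes metadb state database replicas are already set up)
    metainit d0 1 4 c1t0d0s0 c1t1d0s0 c1t2d0s0 c1t3d0s0 -i 64k
    newfs /dev/md/rdsk/d0
    # Add a second swap area on a different disk/controller
    swap -a /dev/dsk/c2t1d0s1
    # and make it permanent with a line like this in /etc/vfstab:
    #   /dev/dsk/c2t1d0s1  -  -  swap  -  no  -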
(from another person's email)
You tune the maxpgio value in the kernel parameter file (/etc/system)
to suit the number and speed of the drives that you will be swapping on.
If you have 5400 rpm disks, then maxpgio should be set to 60 * n drives.
If you have 7200 rpm disks, then maxpgio should be set to 90 * n drives.
This "instructs" or provides a hint to the kernel that a far greater
rate of paging can occur than the standard default (a single drive
at 60 pages/sec).
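As a purely hypothetical worked example: six 7200 rpm drives spread across the
swap areas would give maxpgio = 90 * 6 = 540, i.e. in /etc/system:
    set maxpgio=540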
If you have a small NFS server, the DNLC (directory name lookup cache) is
likely to be too small. If you have less than 256MB of RAM, set ncsize=5000
in /etc/system. You will find that virtual_adrian.se gives some tuning
advice when run as root, but it cannot dynamically change ncsize.
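One quick way to check whether the DNLC is large enough (a rough rule of
thumb, not from the original posts) is the name-lookup hit rate that vmstat
reports:
    vmstat -s | grep 'name lookups'
    #   e.g.  12345678 total name lookups (cache hits 87%)
    # A hit rate persistently below roughly 90% on a busy NFS server
    # suggests raising ncsize (and ufs_ninode along with it).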
autoup and tune_t_fsflushr
Unlike SunOS 4, where the update process does a full sync of memory to
disk every 30 seconds, Solaris 2 uses the fsflush daemon to spread
out the sync workload. Autoup is set to 30 seconds by default, and
this is the maximum age of any memory-resident filesystem pages that have
been modified. Unlike update, fsflush wakes up every 5 seconds (set by
tune_t_fsflushr) and checks a portion of memory on each invocation
(5/30 = one-sixth of total RAM by default). The pages are queued on the
same list that the pageout daemon uses and are formed into clustered
sequential writes.
On machines with more than a few hundred Mbytes of RAM, fsflush can take
over almost an entire CPU in the worst case, when very many pages are
being modified. This problem can be avoided by reducing the rate at which
fsflush checks memory. It should still always wake up every few seconds,
but autoup can be increased from 30 seconds to a few hundred seconds if
required. In most cases, the files that are being written are closed
before fsflush gets around to them. For NFS servers, all writes are
synchronous, so fsflush is hardly needed at all. Note the time and the CPU
usage of fsflush, then watch it later and see if its CPU usage is more
than five percent. If it is, increase autoup as shown.
set autoup=240 (in /etc/system)
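A rough way to do that check (fsflush is normally pid 3 on Solaris 2, but
verify on your own box):
    ps -ef | grep fsflush
    #   the TIME column shows fsflush's accumulated CPU time
    # Compare that TIME against how long the machine has been up (uptime);
    # if it works out to more than about 5%, raise autoup as above.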
- Hardware:
- Add NVRAM/SBUS Prestoserve if you got the $$
- Add a second (or additional) CPU
Thanks to everyone for your help!