I got quite a bit of info regarding the 4/490 in comparison to the Auspex.
I believe the Auspex to be a better long term solution, but can't purchase
one right now, and wanted to extend the useful life of my 4/490. I have
edited the responses to provide the information most applicable to
"hot-rodding" the 4/490 with Presto/Omni. If anyone would like the
complete (unedited, long) file containing all the responses, including
much on comparisons between the Auspex and 4/490, please send me e-mail
and I'll forward it to you.
**********************************************************************
use the presto board if your NFS write operations account for > 5-8%
of your total (use nfsstat -s on the server to determine this). if
you have a server that's handling mostly reads, don't bother with the
presto board.
for high-end configurations: 4 nc400 boards, about 750 NFSops/sec,
pull the presto board out of the system. in the very high end, the
CPU cycles needed to drive the presto board are better spent driving
the omni nc400 boards, and the presto cache starts to look like
a "write through" cache with a very poor hit rate.
the configurations that sun "publishes" are
1 net: presto + 1 nc400
2 nets: presto + 2 nc400s
4 nets: 4 nc400s , no presto
-------------------------------------
The general points that I feel have come from the net are:
- you should split file and CPU service on both the Sun and Auspex
systems
- Auspex provides excellent support and has extremely strong technical
people (I leave it to you to judge Sun from your own experience)
- 490 matches or beats an Auspex for NFS service up to some load
point ranging from 150-200 NFS ops/sec to ~400 NFS ops/sec depending
on the configuration, after which the Auspex wins. Low end
configuration is a "vanilla" 490 (no details on how much memory),
the high end is a 490 with 64MB of memory, and both Prestoserve and
Interphase boards. For reference, a rule of thumb for per-client load is
diskless ≈ mid-teens NFS ops/sec, dataless ≈ just under 10... these
numbers really need verifying, and are important for these decisions.
- 490 vs. 390 performance ranges from twice the capacity to no
difference, depending on the configuration. The configuration for
the 2 to 1 performance is unknown (probably vanilla systems?); the
configuration for the same performance is with Prestoserve and
Interphase boards.
- Sun SPARC servers should have 64+MB of memory
- There were claims that Auspex serial disk access is surprisingly
slow
- The two popular tools used to test NFS service are Traffic
Generator (from Auspex) and nhfsstone (from Legato). These tools
generate different "kinds" of loads: TG uses all 8K (maximum) NFS
blocks, nhfsstone uses mixed (1, 2, 4K) blocks. nhfsstone will show
higher NFS ops/sec, since moving the same amount of data in smaller
blocks takes more NFS operations. The general opinion (from both
camps) is that, with default configurations, most NFS traffic will
be of the 8K-block flavor. Sun does better with the smaller blocks.
- Auspex customers love them ("get this 4/490 out of here and give me
back my Auspex")
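The rules of thumb above lend themselves to quick back-of-the-envelope
arithmetic. A minimal sketch; the per-client figures (15 ops/sec diskless,
9 ops/sec dataless) and the 400 ops/sec ceiling are the unverified
estimates quoted in this summary, not measured values:

```shell
# Rough NFS sizing from the rules of thumb above.
# All figures are the unverified estimates quoted in this summary.
server_capacity=400   # NFS ops/sec, high-end 490 (64MB + Presto + Interphase)
diskless=15           # ops/sec per diskless client ("mid-teens")
dataless=9            # ops/sec per dataless client ("just under 10")

echo "diskless clients supported: $((server_capacity / diskless))"
echo "dataless clients supported: $((server_capacity / dataless))"

# Same idea for block sizes: moving identical bandwidth in smaller NFS
# blocks takes proportionally more operations.
bandwidth_kb=1024     # 1 MB/sec of NFS data
echo "8K blocks: $((bandwidth_kb / 8)) ops/sec"
echo "4K blocks: $((bandwidth_kb / 4)) ops/sec"
```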
-----------------------------------
We will have parallel Cisco ports for non-server traffic outside the subnets,
-----------------------------------
Sun and Solbourne couldn't deliver the performance on so many networks. The
4/490, even with Prestoserve and Omni network boards, takes a big performance
dip after two networks, by Sun's own results on the test we provided. [...]
The prices were pretty nearly identical with 2 GB of disk and enough guts to
run 6 networks if the interfaces were added. Solbourne put 2 of their
workgroup servers in. Sun put in all the Prestoserve and Omni Solutions stuff.
Expansion of the Auspex will be lots cheaper than expansion of the others,
though. Your configuration will start out with Auspex cheaper -- 30-40 GB of
IPI disk doesn't come cheap. And on Suns adding lots of SCSI disk is a
nightmare.
------------------------------------
The two are different in nature. The OMNI board offloads the nfsd
processing onto its own dedicated processor and thus frees the server's
CPU to be used for compute service. This will speed compilation and other
CPU-consuming jobs on the server, since they won't be competing with the
nfsd's for CPU cycles.
The Prestoserve is a write cache: it will speed up NFS writes.
If writes are not a bottleneck on your server, it won't do any good.
Run nfsstat -s and see what percentage of NFS calls are writes (add
the numbers for write + create + remove). If this is around 6% or less,
you won't see a big difference, if any.
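A minimal sketch of that check. The exact "nfsstat -s" layout varies
between SunOS releases, so this works from a simplified "op count"
listing; the sample counts are made up for illustration:

```shell
# Estimate the write-class share of NFS calls (write + create + remove),
# per the advice above. Sample counts are invented; in practice you would
# massage the real "nfsstat -s" output into "op count" pairs first.
sample='getattr 48213
lookup 39922
read 21033
write 2802
create 801
remove 344'

echo "$sample" | awk '
  { total += $2 }
  $1 == "write" || $1 == "create" || $1 == "remove" { writes += $2 }
  END { printf "write-class ops: %.1f%% of %d calls\n", 100 * writes / total, total }'
```

Here the write-class mix comes out around 3.5%, below the ~6% threshold,
so by this rule of thumb a Prestoserve would not buy much.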
-----------------------------------
We are running omni and presto on 4/490 machines. They work. We just
turned on the presto's this week, the omni's have been running for
a few weeks. They seem to work as advertised, but I don't think we are
pushing them yet, so I can't say "wow, 3045% increase in throughput" or
similar claims. You must be very careful about which data and network
measurements you use when deciding whether these things are helping; we
are just now beginning to get a handle on what data we want to look at,
so it will be a while before we know anything concrete. If you believe
the claims, then these
things should extend the capability of a 4/490 from 150 NFS ops/sec to
450+ NFS ops/sec depending on how balanced your ethernets are.
Sorry I can't be more specific at this time. Let me know if you have
specific questions.
2 omni's. About 45 clients and other servers. No diskless. All
stations have os and swap local. Home directories, project workspace
and shared binaries off of the servers. We rarely go over 150 nfs ops/sec
on each ethernet. Again we are planning for the future here, not trying
to alleviate a current bottleneck.
Almost all of our clients are SS1, SS1+/IPC, and SS2, about half and
half between the 1 series and 2 series. They are all color/GX-accelerated
and have a local 200MB disk for OS and swap. We have two basic kinds of
uses, compile and interactive. The compile environment is for a small (30)
group of developers who do compiles from the NFS-mounted partitions.
The other group does interactive chip design using tools from the
shared binary partitions. There is a little disk intensive activity from
the interactive process, but not like the compile. I am going to be
hacking on nfswatch to measure NFS response times and nfs ops/sec per
workstation so I may have some interesting data in a week or two.
-----------------------------------
The NC-400 is an Ethernet co-processor, which offloads the NFS protocol
processing onto a separate (dedicated) processor. The Prestoserve is an
NFS write accelerator, which speeds up NFS writes at the cost of some CPU.
The CPU freed up by the addition of the NC-400s should serve to reduce the
cost of the Prestoserve. We expect the combination to be a happy one.
We've ordered a 4/490 with two NC400s and a Prestoserve, which should arrive
in a week or two.
**************************************************************************
Many thanks to all who responded:
stern@sunne.East.Sun.COM (Hal Stern - Consultant)
glascock@mayo.EDU (Don Glascock)
msaul@auspex.com (Mark Saul)
rmilner@zia.aoc.nrao.edu (Ruth Milner)
jfd@octelb.octel.com (John F. Detke)
Dan Butzer <butzer@cis.ohio-state.edu>
ekrell@ulysses.att.com
mis@seiden.com (Mark Seiden)
tjt@cirrus.COM (Tim Tessin)
John DiMarco <jdd@db.toronto.edu>
--- Seth J. Bradley
Internet: sbradley@iwarp.intel.com
UUCP: uunet!iwarp.intel.com!sbradley
This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:06:15 CDT