SUMMARY : Performance Benchmarks...

From: Susan KJ Thielen
Date: Thu Apr 22 1999 - 14:17:13 CDT

Thanks a lot for all your informative answers. My original
question is at the bottom of this email.

The general consensus is that there are no hard values. There
are a couple of books out there one should get to study the
problem more precisely:

"Configuration and Capacity Planning for Solaris Servers" by Brian L. Wong.
ISBN: 0-13-349952-9. Published by Sun Microsystems Press.

"Sun Performance and Tuning" by Adrian Cockcroft.

Download proctool from

Other than that, I can only quote the more pertinent responses..

Andrew Maddox wrote:

I never have found any "objective standards" to measure your own server
against. If they're out there, and you get a pointer to them, please
pass it on. What we did was poll the users. Started up the performance
monitors (the Solaris logging stuff), and asked the users to keep tabs
of what was good performance (to them) and when it got slow. We set up
some low-load tests where we knew the usage was light and things would
be about as fast as possible, and took the real stats we got there as a
baseline.

Did this for each server, and now use those as a measuring stick. When
things start to drift away from those numbers, which are "tuned" to each
server, if you will, we'll know that there may be a performance problem.
So far, things are still fine. But I think that's the best way to
benchmark them.


Measure I/O, CPU use, memory, swapping, etc., etc., when the system is
quiet. Archive that. Then run a series of controlled tests mimicking
usage. Record and archive. Then let things run in normal production for
a week or two, record and store that. Compare all these numbers, make
some pretty graphs for management, and there you go.
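The quiet-baseline-versus-current comparison above can be sketched in a
few lines of sh/awk. This is only an illustration under assumed numbers;
in practice the baseline and current values would come from your archived
sar output, and the 25% threshold is an arbitrary example.

```shell
#!/bin/sh
# Sketch: flag when a current metric drifts well above a stored
# quiet-time baseline. All figures below are made-up illustrations.
THRESHOLD=25             # percent drift above baseline worth flagging
baseline_cpu=12          # %busy captured during a known-quiet window
current_cpu=34           # %busy from the latest sample

drift=`awk "BEGIN { printf \"%d\", ($current_cpu - $baseline_cpu) * 100 / $baseline_cpu }"`
if [ "$drift" -gt "$THRESHOLD" ]; then
    echo "WARN: CPU busy drifted ${drift}% above baseline"
else
    echo "OK: CPU within ${drift}% of baseline"
fi
```

Run one copy per server, since (as noted above) each server's baseline is
"tuned" to itself and raw numbers don't transfer between machines.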

As you said, trying to compare one server's raw numbers directly to
another's is almost always useless. Acceptable performance stats are the
stats you gather at a time when users are on the system and are happy
with the performance at that moment. Show people how many things are
different on two systems, even if they're pretty similar, and they may
see that. I'm lucky that my management did.

Bill Sherman wrote:

I believe this type of request to be very germane to system
administration. You should be able to compare/contrast web hits, orders
processed, and other meaningful business stats to CPU usage, disk I/O
rate, memory use, database cache hit ratios, and other items.

I would create some scripts to count the number of web hits per hour
(from the access logs), get your business units to tell you how many
orders per hour are generated, and use sar for CPU, disk I/O, page/swap
usage, and other stats. Check the database logs for transactions/hour,
or rerun stats weekly to generate new row counts for some idea of
database growth.

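The hits-per-hour script suggested above might look something like this
for a Common Log Format access log. The log path is an assumption;
adjust it for your server's layout.

```shell
#!/bin/sh
# Sketch: tally web hits per hour from a Common Log Format access log.
# The default path below is illustrative, not a real server location.
LOG=${1:-/var/http/logs/access_log}

# CLF stamps each request like [22/Apr/1999:14:17:13 -0500]; field 4 is
# the "[dd/Mon/yyyy:hh" part, so the hour is the second ':'-separated piece.
awk '{
    split($4, t, ":")
    hits[t[2]]++
}
END {
    for (h in hits) printf "%s:00  %d hits\n", h, hits[h]
}' "$LOG" | sort
```

Cron it hourly and save the output alongside the sar data, so the
business stats and the system stats line up by hour.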
If there are bottlenecks:
+ Check CPU modes - is there a lot of user-mode time and less system-mode time?
+ Is one disk or controller hit harder than the others?
+ Is there ANY swapping?
+ Are your database caches being hit in the 90% range on reads?
+ How does CPU usage vary with web hit numbers?
+ What sort of stats are showing in the web server? Netscape has a
performance chunk you can enable and poll hourly. Check the Netscape
web site. If you can't find it there, I've got it somewhere here.
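The cache-hit item in the checklist can be computed from the buffer cache
counters Oracle reports (the kind found in v$sysstat). The counter values
below are made up for illustration, not from a real instance.

```shell
#!/bin/sh
# Sketch: buffer cache read-hit ratio from assumed Oracle counters.
# hit = (1 - physical_reads / (db_block_gets + consistent_gets)) * 100
physical_reads=1200
db_block_gets=9000
consistent_gets=41000

hit=`awk "BEGIN { printf \"%.1f\", (1 - $physical_reads / ($db_block_gets + $consistent_gets)) * 100 }"`
echo "buffer cache read-hit ratio: ${hit}%"
if awk "BEGIN { exit !($hit >= 90) }"; then
    echo "OK: above the 90% guideline"
else
    echo "LOW: below the 90% guideline"
fi
```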

Charlie Mengler wrote:

For me the only performance measure that matters is end user response
time. It can never be fast enough & frequently it is "too slow", as
measured by a stop watch in multiple seconds. In a complex networking &
database environment many, many interrelated factors combine to result
in the elapsed response time the user experiences. To target any
specific metric with "precise" values is an exercise in futility. It is
similar to trying to mathematically model rush hour freeway traffic,
where a slight & unexpected slowdown in a single location can snowball
into full scale gridlock before you have any chance to react.

In my ideal world I would try to build an environment where estimated
PEAK load on any resource is about 50% of its capacity. This way when
your estimate is low or the load suddenly surges, there is slack
capacity in the system to accommodate the increase without bogging down.
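The 50%-of-capacity rule above amounts to simple sizing arithmetic:
provision roughly double the estimated peak. The peak figure here is a
made-up example, not a recommendation for any particular workload.

```shell
#!/bin/sh
# Sketch of the 50%-of-capacity sizing rule: capacity = peak / 0.50.
peak_tps=120             # assumed estimated peak transactions per second
target_util=50           # percent of capacity the peak should consume

needed=`awk "BEGIN { printf \"%d\", $peak_tps * 100 / $target_util }"`
echo "size the system for ${needed} tps of capacity"
```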

You can NEVER have too much CPU power, too much disk space, too much
RAM, or too much bandwidth. Within 18 months, whatever resources you
have will be running at saturation. This is a corollary to Moore's Law!

Original Question.

> I have been asked to gather a set of benchmarks for CPU usage,
> disk use, and load for a production system that we currently run.
> Now my opinion on this matter is that these statistics are a very
> difficult thing to benchmark from system to system. That is, some
> systems run fine when the CPU is running at near 100% and others
> don't run so great even at 60% CPU... it all depends on the hardware,
> the applications, the user load, the amount of RAM, whether or not
> swapping is an issue, etc. But in any case, I have been asked to
> check with "industry experts", my opinion alone doesn't seem to be
> important in this matter.. So The Question:
> Experts! What do you think are acceptable performance benchmarks
> for a web-based Oracle application running on an Ultra 2 running
> Solaris 2.5 on a full T1 line? The benchmarks being percentage
> CPU utilization, disk I/O, and processor load.
> If anyone knows of somewhere they can point me to information on
> this matter, but doesn't want to offer an expert opinion, I'd be very
> happy to get anything on this topic.

Susan KJ Thielen                                         System
Centric Systems                                         ph: 519
1074 Dearness Drive                                           fx: 519
London ON N6E 1N9             email:

This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:13:18 CDT