SUMMARY: Performance tuning E250 server for better disk I/O performance.

From: Kumar, Dinesh (MED) (Dinesh.Kumar@geind.ge.com)
Date: Fri May 12 2000 - 06:06:26 CDT


Thanks to all who responded.

Finally, I decided to change my config from RAID 5 to RAID 0+1, which will
give good read and write performance.
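
For reference, a minimal sketch of such a RAID 0+1 layout, assuming Solstice
DiskSuite and six hypothetical disks c1t0d0 through c1t5d0 (the volume
manager and device names are assumptions, not from the original mails): two
three-disk stripes are created, then mirrored.

# two three-way stripes; choose the -i interleave per the sizing
# discussion in the feedback below
metainit d1 1 3 c1t0d0s2 c1t1d0s2 c1t2d0s2 -i 16k
metainit d2 1 3 c1t3d0s2 c1t4d0s2 c1t5d0s2 -i 16k
# make a one-way mirror on the first stripe, then attach the second
# stripe as its submirror
metainit d0 -m d1
metattach d0 d2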

On improving my present setup, there was only one response, from Mr. Buddy
Lumpkin, who suggested looking at SCSI performance and the application
setup.

Thanks

Dinesh

Here is the feedback:
What was your interleave size? Here's the skinny: to saturate that disk and
find out the maximum sequential disk i/o, do something like this:

dd if=/dev/rdsk/<yourdisk> of=/dev/null bs=1024k

and while doing this, run an iostat -x 2 and watch the transfer rates.
These are sequential reads, mind you. If nothing important is on the disk
(this will overwrite it), do this:

dd if=/dev/zero of=/dev/rdsk/<yourdisk> bs=1024k
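
If you want to drive the read test and the iostat sampling in one shot, a
sketch (the c0t1d0s2 device is a placeholder; substitute your disk):

# sequential-read load in the background, 10 samples of extended iostat
# at 2-second intervals, then stop the load if it is still running
dd if=/dev/rdsk/c0t1d0s2 of=/dev/null bs=1024k &
iostat -x 2 10
kill $! 2>/dev/null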
Now here is the deal. Let's say your application uses sequential i/o. You
need to find out how it is writing to the disks. If you know, then great;
if you don't, here are a couple of things that you can do:

truss -t read,write -p <procid>

Now you can watch the actual read and write calls to disk. This won't help
if the application is using mmap(), but there are ways to see how it is
using mmap() also.
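
Two such ways, as a sketch (both are stock Solaris tools; <procid> is your
application's process id):

# watch the mapping calls themselves as they happen
truss -t mmap,munmap -p <procid>
# or dump the process address space and look for file-backed mappings
pmap <procid>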
So let's say we are optimizing for something with an average read and write
size of 8k. If you have 6 disks, then the stripe interleave size for
sequential i/o would be 8k or a multiple, maybe 16k or even higher. The
point is that a single read or write will access *all* spindles
simultaneously. Now, let's say you have random i/o. This is an entirely
different animal: you must stripe so that each read or write will only
access a single disk. So in this case, if you had an application that was
very random, like a database that also read and wrote in 8k chunks, you
would want an interleave of 8k * 6 drives = 48k. That way a single read or
write will hopefully hit exactly one drive, and you can then have 6
concurrent seeks for data.
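
To make that sizing concrete, a sketch of the two layouts under Solstice
DiskSuite (the metainit tool, the d10/d20 metadevice names, and the
c1t0d0..c1t5d0 disks are all assumptions; the feedback names neither a
volume manager nor devices):

# sequential 8k i/o: interleave of 8k (or a multiple)
metainit d10 1 6 c1t0d0s2 c1t1d0s2 c1t2d0s2 c1t3d0s2 c1t4d0s2 c1t5d0s2 -i 8k
# random 8k i/o: interleave of 8k * 6 drives = 48k, so one 8k i/o stays
# on one disk
metainit d20 1 6 c1t0d0s2 c1t1d0s2 c1t2d0s2 c1t3d0s2 c1t4d0s2 c1t5d0s2 -i 48k

These are alternative layouts for the same six disks, not meant to coexist.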


