SUMMARY: Performance Poll

From: Martin Thorpe <Martin.Thorpe_at_DATAFORCE.co.uk>
Date: Tue Mar 02 2004 - 10:42:17 EST
Hi

Many apologies that this is so late. I am guilty of not reading the Sun
Managers FAQ; please accept my apologies. This list is a valuable source
of information for me, and a SUMMARY of the data received regarding my
Performance Poll should have been posted sooner. Again, apologies: I am
new to the list and hope you can forgive me!

Problems with our SAN have been traced to the controllers in the disk
array: they are not powerful enough for the job. Having been quoted £52k
to replace them with 4 new controllers (each with 1GB cache), we have
decided to look into alternative technology, as we can get a whole new
array for that kind of price.

A summary of all the responses that I received regarding the Performance
Poll can be found below. I would like to thank everyone who replied to
me, thank you!
The SAN and Brocade issues were resolved via consultants using their own
lab and hardware to duplicate our environment (PICK RDBMS mixed in with
Windows volumes, all on the same SAN, all using RAID 5).

Following is a copy and paste of all the performance poll replies; I
hope they are of as much interest to you as they were to me!
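A note on reading the iostat extracts below: the fourth column (kw/s) is
kilobytes written per second per device, so summing it across the devices
in a stripe or mirror gives the aggregate write rate. A small sketch
(sample figures taken from the first reply below; assumes the 11-column
extended iostat layout shown in the replies):

```shell
# Sample extended iostat lines
# (columns: r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device)
sample=' 25.2 3928.6  201.6 31387.6  0.0 18.5    0.0    4.7   0  99 c2t1d0
 23.8 3911.2  190.4 31198.2  0.0 17.7    0.0    4.5   0  99 c4t1d0'

# Sum the kw/s column (field 4) across devices and convert KB/s to MB/s
total=$(echo "$sample" | awk 'NF==11 { kw += $4 } END { printf "%.1f", kw/1024 }')
echo "aggregate write rate: ${total} MB/s"
```

For the two-device stripe above this works out to roughly 61 MB/s of
aggregate writes.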

-------------------------------------------------------------------

Martin,

Here are my results on a V880 (4 x 900MHz, 8GB) connected to two T3+'s,
each configured as follows:

1GB cache, 9 x 36GB 10K RPM, RAID 5 (7+1, 1 hot spare), write caching
enabled.

Note: this is not a partner pair.

This box is a database server running Oracle, so some filesystems are
mounted with Direct IO as Oracle does its own caching. Hope these
figures help.
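For reference, a directio mount of the kind described here would come from
a vfstab entry along these lines (device and mount point are hypothetical
placeholders, not taken from the reply):

```shell
# /etc/vfstab entry for a Solaris UFS filesystem mounted with
# logging,forcedirectio (device names and mount point are made up)
/dev/md/dsk/d10  /dev/md/rdsk/d10  /u01  ufs  2  yes  logging,forcedirectio
```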

On a filesystem striped across both T3's (mounted logging)
dagda {1089}# time mkfile 1g test

real    0m18.54s
user    0m0.09s
sys     0m8.70s
dagda {1090}#

iostat output:
 r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
25.2 3928.6  201.6 31387.6  0.0 18.5    0.0    4.7   0  99 c2t1d0
23.8 3911.2  190.4 31198.2  0.0 17.7    0.0    4.5   0  99 c4t1d0

On a filesystem mirrored across both T3's (mounted logging,forcedirectio)
dagda {1093}# time mkfile 1g test

real    1m14.48s
user    0m0.03s
sys     0m13.89s
dagda {1094}#

iostat for the T3s looked like this during it:
7.2 1831.4   57.6 14694.3  0.0  0.8    0.0    0.4   0  77 c2t1d0
7.4 1830.4   59.2 14677.3  0.0  0.8    0.0    0.4   0  78 c4t1d0

On another striped filesystem, this time mounted logging,forcedirectio

real    1m7.60s
user    0m0.05s
sys     0m9.06s

iostat:
0.0 1042.4    0.0 8329.0  0.0  0.4    0.0    0.4   0  42 c2t1d0
1.0 1042.8    8.0 8325.1  0.0  0.4    0.0    0.4   0  44 c4t1d0
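A quick way to sanity-check numbers like these: mkfile 1g writes 1 GB
(1024 MB), so the elapsed wall-clock time converts directly to
throughput. A small sketch using the 18.54 s striped result above:

```shell
# 1 GB written in 18.54 s (the striped, non-directio result above)
# converts straight to MB/s of sustained write throughput
elapsed=18.54
mbps=$(awk -v t="$elapsed" 'BEGIN { printf "%.0f", 1024 / t }')
echo "approx ${mbps} MB/s"
```

That works out to roughly 55 MB/s for the stripe, against about 14 MB/s
for the directio-mounted runs.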

-------------------------------------------------------------------

Hi Martin, just thought I'd participate in your poll. I have a Sun Fire
V880 with 4 x 900MHz CPUs and 4GB RAM, internal 6 x 73GB 10000RPM hard
disks, connected to an EMC400 SAN with 9 x 73GB 10000RPM disks. These
disks are mirrored in twos using Disksuite, except for 3 which are
RAID-5. The 'timex mkfile 1g test' command produced the following
results for me:

          / (root)   Mirror   RAID-5
real       36.19      20.40    16.07
user        0.11       0.13     0.10
sys         6.74       6.08     6.36

I don't have process accounting turned on.
-------------------------------------------------------------------

Here are the numbers for a V880 (2 x 900MHz, 4GB mem)

Disks mirrored with Disksuite

r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
0.0  229.4    0.0 28749.0  0.0  5.3    0.0   23.0   0  73 c1t4d0
0.0  229.4    0.0 28749.0  0.0  5.3    0.0   23.3   0  77 c1t5d0

timex mkfile 1g test

real       35.80
user        0.09
sys         6.09

and for a 4800 (4 x 900MHz, 4GB mem),
dual attached directly to a Hitachi 9960

r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
0.4 1603.6    3.2 12829.0  0.0  0.9    0.0    0.6   0  92
c4t500060E80000000000007B000000011Fd0

timex mkfile 1g test

real     1:21.12
user        0.13
sys         7.67

-------------------------------------------------------------------

ROOT@db02# timex mkfile 1g /user28/test

real       11.35
user        0.16
sys        10.65

V880, 8 CPU, 32GB RAM. Connectrix fibre switches tied to an EMC Symmetrix.

-------------------------------------------------------------------

# timex -ops mkfile 1g test3

real       13.66
user        0.09
sys         5.95

That's a Sun Fire 280R, 2x 1.2GHz Sparc III, 8GB memory
LSI E4400, RAID 5, 18x 73GB 10K RPM
LSI 409190 FC cards
Brocade 2250 FC switch

-------------------------------------------------------------------

This is a V880/8 x 900MHz, 32GB RAM, EMC CX400 array direct attached via
Emulex LP9000s, VERITAS 3.2, EMC ATF.

3512:/var/pkms/clust/logs-> timex mkfile 1g test

real       13.33
user        0.06
sys        13.17

On the internal disks:

3512:/var/tmp-> timex mkfile 1g test

real     1:13.57
user        0.20
sys         9.53

And the same thing on a V880, 8*750, EMC FC4700

1502 # timex mkfile 1g test

real       23.41
user        0.06
sys        15.20

---- original message ----

Hi

V440, 8 GB RAM, 4x1 GHz, Solaris 8 7/03, 36 GB 10 krpm*4
3510 two controllers with 0.5 GB cache each, 36 GB 15 krpm*12, HW RAID5 (10+2 spare)

V440 internal, mirror with disksuite:
bash-2.03# timex mkfile 1g test

real    38.69
user    0.09
sys     6.06

3510:
bash-2.03# timex mkfile 1g test

real    18.67
user    0.07
sys     4.28

Almost no load; just Oracle and some web stuff, idle at the time of the test.

-------------------------------------------------------------------

Hi, I've done a couple of tests with:

1. E250 with A1000 - real 1:17.82
2. E450 with HP SAN - real 1:14.27

Make sure that you have a healthy battery in the controller (or, if you
know that you have UPS-protected the controller/disks, turn on the
"cache without battery" option).

I'm not sure that is your problem, but it could be.

-------------------------------------------------------------------

Here are my results

SF3800 (4x900MHz, 6GB, Solaris 8), Brocade 2400, EMC CLARiiON FC4700
Disks are a striped set of 3 mirrored pairs of 73GB/10k disks (hardware
raid 1/0).
Filesystem is VxFS

timex mkfile 1g test

real        9.62
user        0.00
sys         8.49

iostat results

device r/s  w/s kr/s  kw/s wait  actv  svc_t  %w %b
atf2 0.0 501.2  0.0 31651.6  13.2  1.0 28.3  99 99

-------- Original Message --------
Subject: PERFORMANCE POLL
Date: Mon, 09 Feb 2004 11:50:13 +0000
From: Martin Thorpe <Martin.Thorpe@DATAFORCE.co.uk>
Reply-To: Martin.Thorpe@DATAFORCE.co.uk
Organization: Dataforce Group Ltd
To: sunmanagers@sunmanagers.org

Hi all

I am looking for a ballpark figure for throughput on different
servers/setups, predominantly SAN environments/V880 etc.

Could you just run a 'timex mkfile 1g test' (or equivalent) on your
SAN/V880/etc and tell me the result, along with some idea of your setup.

My setup is:

SunFire 3800 (2 x 900MHz CPU, 4GB MEM)  -  Brocade SilkWorm 2400 switch
-  LSI E2400 MetaStor disk array (10 x 73GB fibre-based Seagate 15k rpm,
HW RAID 5).

TEST: mkfile 1g test
SunFire 3800 ROOT DISKS (mirrored via VERITAS Volume Manager):

timex mkfile 1g test
r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
0.6  108.1    4.8 12611.1  0.0 74.8    0.0  688.0   0 100 c0t0d0
0.4  108.5    3.2 12683.3  0.0 76.0    0.0  697.6   0 100 c0t1d0

real     1:23.19
user        0.20
sys         9.98

SunFire 3800 SAN:

timex mkfile 1g test
r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
0.0   71.0    0.0 4437.5  0.0 56.7    0.1  798.6   0  90 c1t1d4
0.3   69.4    0.6 4344.4  0.0 66.0    0.1  947.0   0  95 c3t0d4

real     4:18.66
user        0.16
sys         8.97

Thanks in advance
_______________________________________________
sunmanagers mailing list
sunmanagers@sunmanagers.org
http://www.sunmanagers.org/mailman/listinfo/sunmanagers
Received on Tue Mar 2 10:42:00 2004
