Hello,
* Though the DiskSuite 1.0 installation script gives a warning about
4.1.3 not being supported, it DOES SUPPORT 4.1.3 on sun4m.
Thanks to our Sun sales rep and the following sun-managers:
Jim Davis <jdavis@cs.arizona.edu>
Shekhar Bhatt <sbhatt@cbis.com>
* Frank Peters answered most of my questions; his answers are crisp and
informative. Thanks, Frank! (see summary below)
* Christian Lawrence talks about different ways of improving I/O
performance by concatenation and choosing the appropriate interlace
values. (see summary below)
* Thanks to Jim Lick for introducing me to a tool called 'ofiles'.
Thanks, guys!
Christian Lawrence <cal@soac.bellcore.com>
Jim Lick <jim@pi-chan.ucsb.edu>
fwp@CC.MsState.Edu (Frank Peters)
----------------------------------------------------------------------------
From: Jim Lick <jim@pi-chan.ucsb.edu>
> 1. Is there any tool which can relate the ports (seen via netstat) to
> the users' processes? A couple of times I discovered a lot of X traffic
> from a certain port but couldn't identify the process (using 'top').
> Is there any way to get this info from /dev/kmem?
ofiles will do this. You may have to search for a while to find a version
that works on your system. The one in /pub/misc on ferkel.ucsb.edu is
known to work on SunOS 4.1.3.
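For reference, a rough sketch of the port-to-process hunt using lsof
(a descendant of ofiles), if you can get it built for your system. Port
6000 is the conventional X11 server port; adjust to whatever netstat
shows for the suspicious traffic:

    # confirm the suspicious connection, then ask who owns the socket
    netstat -a | grep 6000
    lsof -i TCP:6000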
----------------------------------------------------------------------------
From: fwp@CC.MsState.Edu (Frank Peters)
:
: Does anybody out there know about DiskSuite 1.0's compatibility with
: 4.1.3 running on an S10 mod 30 (sun4m)? I was trying to install DiskSuite
: and got a warning message about the compatibility.
: * Has SunSoft come out with a version compatible with 4.1.3?
The current version works fine with 4.1.3 on a sun4m. Just install it
and ignore the warning.
: * The S10 has one FSBE/S and the other is an SBE/S; will this be a problem
: when mirroring two disks, each using one of the SCSI buses?
Hmmm...no, it won't be a problem. Keep in mind that the system will
treat the disks on the two chains as if they were equal. There will be
no effort to place greater load on the disks on the faster chain.
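For illustration, a minimal two-chain mirror sketched with metainit;
the metadevice names and sd partitions below are made up, and DiskSuite
1.0's exact syntax may differ from later releases, so check the manual:

    # one-way submirrors, one disk on each SCSI chain (hypothetical devices)
    metainit d10 1 1 /dev/sd1g     # disk on the FSBE/S chain
    metainit d20 1 1 /dev/sd3g     # disk on the SBE/S chain
    metainit d0 -m d10             # build the mirror from the first submirror
    metattach d0 d20               # attach the second half; resync begins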
: * Is there a recommended value for the interlace block size on the meta
: disk?
This depends entirely on the access patterns for your data...in other
words, on what sorts of things your users do. The default is pretty
good for a typical, generic load.
: I also have a few questions which are unrelated to my previous query
:
: 1. Is there any tool which can relate the ports (seen via netstat) to
: the users' processes? A couple of times I discovered a lot of X traffic
: from a certain port but couldn't identify the process (using 'top').
: Is there any way to get this info from /dev/kmem?
There is a program called ofiles that gives that information. It can
be found via archie and on several anonymous FTP sites.
: 3. How can I identify the I/O bound processes?
Do a "ps aux" and look for processes spending most of their time in
D state (that is a I/O wait state).
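A quick filter along these lines (STAT is the 8th field in BSD-style
'ps aux' output; adjust the field number if your ps prints differently):

    # show the header plus any process currently in disk wait (state D)
    ps aux | awk 'NR == 1 || $8 ~ /^D/'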
: 4. Our NFS file server (4/670) crashed with the following messages today
: (sigh)
:
: dev = 0x706, block = 5535, fs = /usr
: panic on 3: free: freeing free frag
: syncing file systems... panic on 3: zero
This is bugid 1039693, fixed by Sun patch 100623-02. The problem
description for this patch is as follows:
1078521 Zero length directories can be left when a system is powered off
1039693 panic: ifree: freeing free inode
1082206 bmap references block after calling brelse
1071839 iget shouldn't hammer i_flag when reclaiming an inode
(the numbers are bugids).
----------------------------------------------------------------------------
From: Christian Lawrence <cal@soac.bellcore.com>
I have OnLine Disk Suite (OLDS) running on a 690 under 4.1.3 using both
the DSBE/S and the SBE/S. You need to restore the 4.1.3 version of
rpc.lockd and/or update it with any other rpc.lockd patches specifically
for 4.1.3. I also added 100173-09 (NFS jumbo) and 100623-03 (UFS jumbo).
Ignore the bogus message that comes out of the install script.... Sun
just didn't want people running older versions to attempt OLDS, since it
would break some things.
As for interlace values, striping will only buy you performance if your
file I/O is large and sequential (regularly between 100 KB and 16 MB).
In many instances (e.g. home directories) this is not the case, and as
time goes on you may actually see performance degradation, since the
arms in a disk stripe move together and seek times can grow (e.g. due
to the UFS file block allocation scheme).
In a typical UFS/NFS environment where multiple users are doing somewhat
random access, you would probably be better off concatenating a bunch of
small partitions across drives (e.g. 200 MB from sd1, 200 from sd2, 200
more from sd1, 200 more from sd2, etc.; see the sketch below). This will
balance your disk I/O and give you a sliding window (e.g. 400 MB) across
drives, which will give you some performance improvement, particularly
when you are close to the logical block boundaries.
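As a rough sketch of the two layouts with metainit (the device names and
sizes are hypothetical, and the exact syntax in your DiskSuite release
may differ; check the manual):

    # 2-way stripe, 32k interlace: good for large sequential I/O
    metainit d1 1 2 /dev/sd1g /dev/sd2g -i 32k

    # 4-piece concatenation alternating across the two drives: better
    # for the random-access home-directory case described above
    metainit d2 4 1 /dev/sd1g 1 /dev/sd2g 1 /dev/sd1h 1 /dev/sd2h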
Some advice: use as many controllers as makes sense; more SCSI buses
yield more SCSI traffic and concurrent operations. If you're making big
file systems, SERIOUSLY consider mirroring: if pieces of a concatenation
or stripe fail, the WHOLE thing is lost..... and then it could take you
eons to do the restore!! Don't use the default parameters supplied by
newfs, since they are not always optimal (e.g. maxcontig=1, but it can
be 7 for a 600 series, 15 for a sun4c (and probably an SS10)); see the
tunefs example below.
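For instance, maxcontig can be raised after the fact with tunefs instead
of re-running newfs (the value 7 is the 600-series figure quoted above;
the raw device name is illustrative):

    # bump maxcontig from the default of 1 on an unmounted file system
    tunefs -a 7 /dev/rsd1g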
P.S. The "freeing free frag" panic is fixed in 100623-03.
---------------------------------------------------------------------------