Well, I got a few replies (as well as a few "me too" messages :), and here
they are:
Richard Sands <ras@hubris.CV.Com> :
-- I don't know of any lists, but you can run 'nm' on '/kernel/unix' to show
all the available parameters. You have to weed out which ones are tweakable,
and there's no explanation of what the parameters are, which makes it not
that useful, but interesting nonetheless. --
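(For the curious, the nm approach Richard describes looks something like
this; the grep pattern is just one hypothetical way to narrow the output to,
say, the shared memory variables:)

    nm /kernel/unix | grep shminfo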
Steve Lee <steve@opensys.com> :
-- I'd check Adrian Cockcroft's book, Sun Performance Tuning. It seems to
have a good (if not completely thorough) list. --

dave@exar.com (Dave Haut) :
-- Check out Appendix A of the SunOS x.x Administering Security, Performance,
and Accounting manual. The appendix gives a pretty good listing and
description of the major kernel parameters for Solaris 2.x. --
seanw@amgen.com (Sean Ward) :
-- Hello Andrew. I don't know if this includes everything, but there is a
rather extensive list in the AnswerBook. You can find it if you search for
"tunable parameters." --
Glenn.Satchell@uniq.com.au (Glenn Satchell) :
-- Since the settings in /etc/system effectively get adb'd into the kernel at
boot time, the answer is that you can set any variable that is in the kernel.
Of course, not all of them are OK or safe to set.
I think others have asked for this, and there is no definitive list - even
Sun doesn't publish one, perhaps simply because there are too many
possibilities. When you think of all the combinations, it would be pretty
easy to do something which would stuff your kernel and prevent it from
booting, or worse, introduce some subtle problem that only shows up in three
months' time :-)
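(To illustrate what Glenn means: entries in /etc/system are plain 'set'
lines. maxusers is a real tuneable, but the value below is only an example.)

    * Comment lines in /etc/system start with '*'
    set maxusers=64
    * Variables living in a loadable module are written module:variable,
    * as in the shmsys and semsys examples later in this message.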
The only settings I've seen are for the IPC parameters. This is taken from a
variety of sources.
Solaris 2.x IPC tuneables
NOTE: Max value can be deceiving, since almost all the IPC tuneables are
declared as ints, which on a SPARC have a max value of 2147483647. In many
cases, the data item itself must fit into something smaller, such as a ushort
embedded in a data structure, which of course reduces the theoretical maximum
significantly. In any event, the Max listed is strictly a technical
limitation based on the data type, and in no way should be construed as
something attainable in real life.

It is not really possible to provide a realistic maximum, since too many
other factors need to be considered, not the least of which is the amount of
kernel memory required by everything besides the IPC resources.
*** Shared Memory
Tuneable   Type   Default   Max
-----------------------------------------------------------------------------
shmmax     int    131072    2147483647

    The maximum size of a shared memory segment, i.e. the largest value a
    program can use in a call to shmget(2). Setting this tuneable way high
    doesn't really hurt anything, in that kernel resources don't get
    allocated based on this value alone. (A combined /etc/system example
    follows this table.)
shmmin     int    1         2147483647

    The smallest possible size of a shared memory segment, i.e. the smallest
    value that a program can use in a call to shmget(2). The default being 1
    byte, there should never be any compelling reason to change this, and
    making it too large could potentially break code that grabs shared
    segments smaller than shmmin. Ask yourself "Do I really need to tweak
    this?".
shmmni     int    100       65535

    Max number of shared memory identifiers that can exist in the system at
    any point in time. Every shared segment has an identifier associated
    with it; this is what shmget(2) returns. The number of these you need is
    completely dependent on the needs of the application, i.e. how many
    shared segments it uses. Setting this randomly high has some fallout,
    since the system uses this value during initialization to allocate
    kernel resources. Specifically, a shmid_ds structure is created for each
    possible shmmni, so the kernel memory allocated equals
    (shmmni x sizeof(struct shmid_ds)). A shmid_ds structure is 112 bytes;
    you can do the arithmetic and determine the initial overhead of making
    this value arbitrarily large.
shmseg     int    6         65535

    Max segments per process. Tweaking this one way or the other doesn't
    seem to affect anything other than the obvious. In other words, no
    resources are allocated based on this value; the kernel simply keeps a
    per-process count of the number of shared segments the process is
    attached to, and checks that count against shminfo_shmseg before
    allowing another attach to complete. The maximum useful value is the
    current value of shmmni, since setting it greater than that won't buy
    you anything, and it must always be less than 65535. Again, application
    dependent. Ask yourself "How many shared memory segments do the
    processes running my application need to be attached to at any point in
    time?".
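(Putting the shared memory tuneables together: a hypothetical /etc/system
fragment might look like the following. The shmsys: module prefix is the one
the kernel uses for these variables; the values themselves are purely
illustrative, not recommendations.)

    * Hypothetical shared memory settings - values are examples only
    set shmsys:shminfo_shmmax=4194304
    * 4 MB max segment size; no kernel memory is allocated from this alone
    set shmsys:shminfo_shmmni=256
    * init-time cost: 256 x 112 bytes = 28672 bytes of shmid_ds structures
    set shmsys:shminfo_shmseg=32
    * per-process attach limit; no point making this larger than shmmni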
*** Semaphores
NOTE: Semaphores are not as easy to understand as shared segments in terms
of the tuneables. This is due to the "features" of semaphores in System V,
such as the ability to use a single semaphore or several semaphores in a
set. Thus, it may be difficult to convey what a tuneable does without going
down the kernel-implementation-of-semaphores rathole... but I'll try.
Tuneable   Type   Default   Max
-----------------------------------------------------------------------------
semmap     int    10        2147483647

    Number of entries in the semaphore map. Briefly, the memory space given
    to the creation of semaphores is taken from semmap, which is initialized
    with a fixed number of map entries based on the value of semmap. The
    implementation of allocation maps is generic within SVR4, supported by a
    standard set of kernel routines (rmalloc(), rmfree(), etc.). Their use
    by the semaphore subsystem is just one example; they basically prevent
    the kernel from having to deal with mapping additional kernel memory as
    semaphore use grows. By initializing and using allocation maps, kernel
    memory is allocated up front, and map entries are allocated and freed
    dynamically from the semmap allocation maps. Should never be larger than
    semmni. If the number of semaphores per semaphore set used by the
    application is known, and we call that number 'n', then you can use:

        semmap = ((semmni + n - 1) / n) + 1

    (a worked example follows this table). If you make it too small for the
    application, you'll get

        WARNING: rmfree map overflow

    messages on the console. Tweak it higher and reboot.
semmni     int    10        65535

    Max number of semaphore sets, system wide; i.e. the number of semaphore
    identifiers. Every semaphore set in the system has a unique identifier
    and control structure. During init, the system allocates kernel memory
    for semmni control structures. Each control structure is 84 bytes, so
    once again you can calculate the result of making this arbitrarily
    large.
semmns     int    60        65535

    Max number of semaphores in the system. A semaphore *set* may have more
    than one semaphore associated with it, and each semaphore has a sem
    structure. Once again, during init the system allocates
    (semmns x sizeof(struct sem)) kernel memory. Each sem structure is only
    16 bytes, but you still shouldn't go over the edge with this. Actually,
    this number should really be semmns = semmni x semmsl. Read on.

semmnu     int    30        2147483647

    System wide maximum number of undo structures. It seems intuitive to
    make this equal to semmni, which would provide an undo structure for
    every semaphore set. Semaphore operations done via semop(2) can be
    undone if the process should terminate for whatever reason; an undo
    structure is required to guarantee this.
semmsl     int    25        2147483647

    Maximum number of semaphores per unique ID. As mentioned previously,
    each semaphore set may have one or more semaphores associated with it.
    This tuneable defines the maximum number per set.
semopm     int    10        2147483647

    Max number of semaphore operations that can be performed per semop(2)
    call.
semume     int    10        2147483647

    Maximum per-process undo structures. Make sure the app is setting the
    SEM_UNDO flag when it gets a semaphore, otherwise you don't need undo
    structures. Should be less than semmnu (obviously), but sufficient for
    the application. It seems logical to set this equal to semopm times the
    average number of processes that will be doing semaphore ops at any
    point in time.
semusz     int    96        2147483647

    Listed as the size in bytes of the undo structure; in reality it is the
    number of bytes required for the maximum configured per-process undo
    structures. I don't know why this is listed as a tuneable. During init,
    it gets set to sizeof(undo) + (semume x sizeof(undo)), so it seems to me
    that setting it in /etc/system is pointless. Leave it alone... it should
    be removed as a tuneable.
semvmx     int    32767     65535

    The maximum value of a semaphore. Due to the interaction with undo
    structures (and semaem, below), this tuneable should not exceed its
    default value of 32767 (half of 65535) unless you can guarantee that
    SEM_UNDO is never being used.

semaem     int    16384     32767

    Maximum adjust-on-exit value. A signed short, because semaphore
    operations can increase or decrease the value of a semaphore, even
    though the actual value of a semaphore can never be negative. If semvmx
    (above) were 65535, semaem would need 17 bits to represent the range of
    changes possible in a single semaphore operation, hence the
    recommendation above. It seems that we should not be tweaking either
    semvmx or semaem unless we *really* understand how the apps will be
    using semaphores. And even then, leave semvmx at the default.
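(Likewise for the semaphore tuneables: a hypothetical /etc/system fragment
using the semsys: module prefix. The values are illustrative only and follow
the rules of thumb above, assuming an application that uses 4 semaphores per
set.)

    * Hypothetical semaphore settings - values are examples only
    set semsys:seminfo_semmni=100
    set semsys:seminfo_semmsl=4
    * semmns = semmni x semmsl = 100 x 4
    set semsys:seminfo_semmns=400
    * semmap = ((semmni + n - 1) / n) + 1 with n=4:
    *   ((100 + 4 - 1) / 4) + 1 = (103 / 4) + 1 = 25 + 1 = 26
    set semsys:seminfo_semmap=26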
-----------------------------------------------------------------------------

    set pt_cnt=NNN

sets the maximum number of pseudo-ttys. I've set this as high as 1024 without
error (but I haven't actually had 1000 people log in at the same time!)

-----------------------------------------------------------------------------

This is what Online DiskSuite puts in to tell the kernel where the meta
databases are located:

    * Begin MDD database info (do not edit)
    set md:mddb_bootlist1="sd:28:16 sd:28:1050"
    * End MDD database info (do not edit)

-----------------------------------------------------------------------------
--
A thousand thanks to all who replied, Andrew. --
- Andrew J. Cosgriff - andrew@unico.com.au - SysAdmin, UNICO Computer Systems
  Mail Server - "send file help" as subject - PGP and/or MIME ok
  The trouble with being quoted a lot is that it makes other people think
  you're quoting yourself when in fact you're merely repeating yourself.
  (Larry Wall)