First, thanks to all these gurus who gave me great advice:
Roger Merchberger <zmerch@northernway.net>
Robert C. Steppacher <bobby@moe.cybermedical.com>
Joel Lee <jlee@thomas.com>
Karl Vogel<vogelke@c17mis.region2.wpafb.af.mil>
Bob Bridgham<robbie@b-e-s-t.com>
Birger<birger@Vest.Sdata.No>
Peter L. Berghold<berghold@tcg.com>
david<david@xtend.net>
Jonathan.Loh@BankAmerica.com
0085P@VM1.CC.NPS.NAVY.MIL
Joe<joe@technocraft.com>
I got a lot of great information. Clearly, the resources of the mail server
are fairly low. I will increase the memory to 512M RAM and 1G swap, and
move /var/mail onto a disk stripe at the proper time to share the load.
But as to whether the sendmail problems mentioned in my original mail,
such as 'I/O error' and 'Broken pipe', are related to the lack of system
resources, I will do deeper research. Good hints are welcome.
The following are the replies from around the world:
from Roger Merchberger <zmerch@northernway.net>
--- I'm not sure if this will actually help you as I'm absolutely no good with sendmail, but there is something you can do to improve not only the speed of your mail server, but also its security and ease of administration. That would be: don't use sendmail. As you may know, sendmail constantly has CERT advisories about it and as such is a much easier target for crackers to gain access to your system.
I was having problems with my sendmail setup (as supplied by Sun) and after confusing myself greatly over the sendmail.cf file, I started digging into other MTA's and found qmail.
Qmail is very small, it's extremely low on resource utilization and memory usage, and is very modular -- if qmail doesn't do something you want it to, you can write a C program to do it. Prefer Perl instead? No problem. TCL/TK? Python? Shell script? It can interface with anything. (I wrote a listserver program in Perl that uses qmail's internal abilities - it was very simple.)
Just to make a point: Your box could probably support up to 50,000 users *easily* running qmail.
Also, do you use NFS to mount the drives where the actual mailboxes are? Did you know that NFS does not implement file locking very well? Many Sendmail boxes have a big problem with this -- but not Qmail. Qmail's Maildir mail storage does not need any type of file locking, so it works wonderfully under NFS. It also handles standard Mailbox files as well, if you don't want to switch, or don't use NFS.
You may want to check it out -- if you have web access, go here: http://www.qmail.org/
or to the page of the man who wrote it (his name is Dan Bernstein):
http://pobox.com/~djb/qmail.html
The setup and operation of this software is so much simpler than Sendmail, that you don't need a book to use it -- just the FAQ and INSTALL documents that come in the qmail tar file are enough to get up and running with the software. If you do encounter problems, don't despair: there's a listserver that Dan set up for the discussion of qmail problems / questions that many knowledgeable people can help you with.
Please keep in mind that I do not speak for Dan or anyone on the listserver, and I have no professional connection to him or qmail other than I use it on our Sun Netra i5 and i20 boxes.
Oh, and qmail is totally free for any use, including commercial use.
If you have any questions about qmail, please feel free to contact me.
Hope this helps,
Roger "Merch" Merchberger
--
Roger Merchberger       | If at first you don't succeed,
Programmer, NorthernWay | nuclear warhead disarmament should *not*
zmerch@northernway.net  | be your first career choice.
---------
from Robert C. Steppacher <bobby@moe.cybermedical.com> ---- This may or may not be of help, but let me describe a similar situation that I was having with a web server. Like yourself I was experiencing inadequate performance, lots of broken pipes, etc. Also like yourself, I could not find any disk or memory related problems; both are fine. The one clue in your situation is the high percentage of retransmits.
Do a netstat -s and look for the following information:
tcpOutDataSegs = 1310031    tcpOutDataBytes = 1047163536
tcpRetransSegs = 238250     tcpRetransBytes = 203035218
Then calculate tcpRetransSegs/tcpOutDataSegs; for this system I get 18%, which is not bad. If you have more than this, something is wrong. In my case the problem was a failing ethernet transceiver; when I was in the midst of the problem I had tcpRetransSegs/tcpOutDataSegs of about 80%, too high!!!! Swapping the NIC brought it back into line.
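That ratio can be computed directly from the counters with a little awk; this is a minimal sketch that parses the sample numbers quoted above (on a live Solaris box you would pipe real `netstat -s` output in instead of the echo):

```shell
#!/bin/sh
# Compute the TCP retransmit percentage from netstat -s style counters.
# The sample values are the ones from the message above, so the
# arithmetic is visible.
sample='tcpOutDataSegs =1310031  tcpOutDataBytes =1047163536
tcpRetransSegs =238250   tcpRetransBytes =203035218'

pct=$(echo "$sample" | tr '=' ' ' | awk '
  { for (i = 1; i < NF; i++) {
      if ($i == "tcpOutDataSegs") out = $(i + 1)
      if ($i == "tcpRetransSegs") re  = $(i + 1)
    } }
  END { printf "%.1f", 100 * re / out }')
echo "retransmit percentage: ${pct}%"
```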
Something else to try is to tune your TCP with ndd. Try the following:
/usr/sbin/ndd -set /dev/tcp tcp_rexmit_interval_min 2500
Good Luck !!
-------------------------------
Robert C. Steppacher ;}  bobby@moe.cybermedical.com
"Go placidly amid the noise & haste & remember what peace there may be in silence." Desiderata -1692
----------------
from Joel Lee <jlee@thomas.com> --- Do a ps -ef and see how many processes are running. Concurrently, use lsof to see any open process hidden behind the scenes that's not supposed to be there. Your mail server is having problems forking since it's using more and more resources. Check your /var/spool/mqueue and see if it's generating null files. If it is, check whether your mail server is also talking to another local smtp system. If it is, check whether that system is up or not.
-- Joel
--
Joel Lee <jlee@thomas.com> | Network Administrator/System Programmer
Thomas Brothers Inc., Irvine, California | http://www.thomas.com
Disclaimer: The statements are my personal viewpoints only and have nothing to do with anyone, even my employer.
---------------------------
from Karl Vogel<vogelke@c17mis.region2.wpafb.af.mil> -------- You may want to consider dumping sendmail completely and moving to qmail if you have lots of mail traffic.
http://www.qmail.org/ has the source plus docs. -- Karl Vogel ASC/YCOA, Wright-Patterson AFB, OH 45433, USA vogelke@c17mis.region2.wpafb.af.mil -----------------
from Bob Bridgham<robbie@b-e-s-t.com> -------------- This suggests two things. First off, check your /etc/passwd and find out who user id 847 belongs to. That user is starting too many pop3 or smtp sessions. Do a ps -awux | grep <WHOEVER UID 847 IS>
You can either log in at the console and kill his/her processes, or increase the number of user processes allowed. I think it is in the /etc/system file or ulimit. Check the man pages for both. Let me know if you need further help.
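As a concrete sketch of the passwd lookup (the `popuser` line is made up for illustration; on the server you would read /etc/passwd itself, and `maxuprc` is the standard Solaris /etc/system variable for the per-user process limit):

```shell
#!/bin/sh
# Find the login name that owns UID 847 in passwd-format data.
# The sample data below is hypothetical -- use /etc/passwd for real.
passwd_sample='root:x:0:1:Super-User:/:/sbin/sh
popuser:x:847:10:POP mail user:/home/popuser:/bin/sh'

owner=$(echo "$passwd_sample" | awk -F: '$3 == 847 { print $1 }')
echo "uid 847 belongs to: $owner"

# Then inspect (and if necessary kill) that user's processes:
#   ps -ef | awk -v u="$owner" '$1 == u'
# ...or raise the per-user process limit in /etc/system (reboot needed):
#   set maxuprc = 512
```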
Bob Bridgham
************************************************************************
Bob Bridgham   Webmaster@b-e-s-t.com
B. - Billing   http://www.b-e-s-t.com
E. - Electronic
S. - Services
T. - Technology, Inc.
************************************************************************
from Birger<birger@Vest.Sdata.No> --------- First, check who the user with that uid is. If it's the same user each time the server reboots, I would get suspicious. If it's a user that runs daemon processes, check their requirements. If it's an ordinary user, check his/her mailbox for consistency problems or excessive size.
Also, check your /etc/system file to see if it contains old tuning entries for older Solaris versions (if you have upgraded). It could be that something in /etc/system that had to be tuned in e.g. 2.3 is now preventing 2.5.1 from tuning itself.
As an example, I have a SPARCstation 20 running as XDM server for a lot of users. These are interactive processes. It currently runs 400 to 700 processes simultaneously without any trouble.
The only kernel variables I have tuned are for increasing the number of pty's. This is all I have added to the file:
* 5.x pseudo-ttys - default 48
set pt_cnt=2000
* 4.x pseudo-ttys - default 48
set npty=256
* STREAMS autopush entries - equal to pt_cnt
set nautopush=2000
* STREAMS addressable devices - 2 * pt_cnt
set sadcnt=4000
Birger -----------
from Peter L. Berghold<berghold@tcg.com> ------ Kun Li wrote:
> I have a mail server serving more than 5000 users.
Wow!
> The hardware:
>
> Ultra-2, 128M memory, 400M swap
For 5000 users this seems to be a bit light. I'd go with some bigger iron for that many users. However, I think that we can get it to work with the Ultra II. How many CPUs are you running? What speed?
So far as swap goes, I'd up that a bit and go with a full 512Mb. I realize that this is 4x physmem but you are pushing extremes here...
How large a network connection do you have going to this server? Do you have multiple connections?
> OS: Solaris 2.5.1
> Daemon: Sendmail 8.8.8, pop3 server
> Patch: (see the end of the mail)
>
> Problem description:
>
> The mail server is now running very slowly; of course that is from
> the users' point of view; they send mail using the pop3 protocol.
> Sometimes the server will reboot itself automatically. Before the
> reboot nobody could log in. There are usually up to 50 or even more
> sendmail daemons running simultaneously.
Let me pick a nit here: the users do not use POP to send email, they use it to receive mail. This becomes significant in other ways. To send mail you are still doing a port 25 connection through sendmail to do the sending.
The fact that you have 50 daemons running at a time does not surprise me. That is 1% of your 5000 users being connected at one time.
{ stuff snipped}
> The message at reboot time:
> -----------------------
> Dec 23 13:17:43 domain.mail.server unix: out of per-user processes for uid 847
> Dec 23 13:17:43 domain.mail.server last message repeated 7 times
> Dec 23 13:17:43 domain.mail.server unix: out of processes
> Dec 23 13:17:43 domain.mail.server unix: out of processes
This is really significant. You probably can tune this in the kernel, although I have it on good authority that this is not what you want to do. What you should probably consider is adding memory to your server; the first increment would be 256Mb (doubling it). From what I understand as a result of many conversations with the Sun SEO, NPROC will tune itself based on the amount of memory that you have on the machine. The more memory, the higher NPROC will tune itself.
You could tune it yourself in the /etc/system file (the exact syntax escapes me) but you are going to hurt some other resource if you do.
> Dec 23 13:17:43 domain.mail.server sendmail[14860]: NOQUEUE: SYSERR(root):
> daemon: cannot fork: Resource temporarily unavailable
This underscores what I just wrote.
> Dec 23 13:17:43 domain.mail.server unix: out of per-user processes for uid 847
> Dec 23 13:17:44 domain.mail.server last message repeated 35 times
> Dec 23 13:17:44 domain.mail.server unix: out of per-user processes for uid 847
Who is userid 847???
> --------------------------
>
> Question:
>
> 1. Is there anybody so kind as to explain what the messages above mean?
> Such as 'Broken pipe', 'inetd: accept: Protocol error', etc.
A broken pipe is when a pair of processes are connected via a pipe (e.g. ls | more) and the process at the end of the pipe (or in the middle) exits before the process upstream of it has finished sending output.
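A minimal demonstration of this, assuming bash (PIPESTATUS is bash-specific): `head -n 1` exits after one line, and the writer's next write then fails with SIGPIPE, giving exit status 128 + 13 = 141.

```shell
#!/bin/bash
# `yes` writes lines forever; `head -n 1` reads one line and exits.
# When head closes the read end of the pipe, yes's next write raises
# SIGPIPE and kills it -- the classic "Broken pipe".
yes | head -n 1 > /dev/null
writer_status=${PIPESTATUS[0]}   # exit status of `yes`
echo "writer exit status: $writer_status"   # 141 = 128 + SIGPIPE(13)
```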
A "Protocol Error" is IMHO a catch-all diagnostic message for anything that interrupts a conversation between two network-based processes. In this case a sendmail exited before the sendmail that was talking to it was done talking...
>
> 2. What causes this? Should it be a problem for Solaris 2.5.1 supporting
> so many, over 5000, users?
IMHO Solaris 2.5.1 should be able to support a lot more than 5000 users. It is just a matter of configuring enough iron to do the job. As I said earlier in this EMAIL I think that if I were solving the problem I'd use something heavier than an Ultra II to do the job. The trick here is to get as many CPUs and RAM as you can doing the job.
{ stats snipped }
Suggestion: Get /var/mail onto a disk stripe. This will help with your IO to and from the file system as you'd have the filesystem spread over multiple spindles.
> Sendmail is using sd0, with which sometimes I can watch %b reach over 40.
If you have the Adrian Cockcroft book on performance, check out the back of the book for a script that converts sdx notation to /dev/rdsk/cxtydzsw notation. This will let you know which disk is sd0. I suspect it is your root partition, however. If that is the case then why sendmail is hitting sd0 so hard is a curious thing....
> Monitoring the system with SE3.0 also indicated disk I/O is somewhat
> high (I get an amber display). But I don't think the high disk I/O is
> causing the slow response, because %w is still 0; at least it's not the
> main problem, right? Do I need to redistribute the disk load? Any advice?
I'm not surprised that disk IO is high with 5000 users on one server! :)
Here's the thing: you have to think of your applications (sendmail and pop) as very interrupt-driven; actually, they produce interrupts. Here is a table summarizing what is happening:

Protocol Activity Types

  sendmail:
    Category        Description
    "telnet" type   Network activity to/from other sendmail daemons
    queues          update /var/mail/user files on disk

  pop:
    Category        Description
    "telnet" type   Network activity, authentication, file send
    file            copy of /var/mail/user file on disk to /var/mail/.pop.user file
    authentication  system calls, file locking, file permissions
> Is the memory short?
Back on this: I'd go at least 256Mb of RAM and about 1Gb of swap. In fact, that's just for starters...
>
> 4. SE3.0 also shows high tcp retransmission. I think it should be the
> main problem. But my system has patches 103852-08 and 103630-05
> installed, which were announced to solve bug id 1233827, "tcp retransmits
> too much for short connections as seen at web sites". How does the tcp
> retransmission remain so high? Do I need to install patches 103852-15
> and 103630-09, or something else?
Again, what is the size of the network "pipe" into your server. Is it on a congested network?
> 5. It's clear that the system resources had been used up at reboot time.
> But what causes this? Maybe it is sendmail; it forks itself constantly
> until it reaches 100, 490 or more, right? And I don't know where uid 847
> comes from. I think each time the system reboots the uid causing this
> should be different, but it's always 847 in the /var/adm/messages file.
> Why?
If sendmail is forking itself, I'd check the length of your mail queue. Also, check your inbound traffic. How many servers are connecting to your server attempting to send mail to that 5000 users of yours...
Find out who userid 847 is and find out what they are doing on your server. This could even be a contributing cause of all your grief.
{patch list snipped}
Now, let me digress for a moment: If I were going to set up a server for 5000 users here is what I would set up:
Ultra 3000 (or 4000)
  4 x 300MHz CPU
  2Gb RAM
  Internal Disk: 4 x 4Gb (spread OS across 2 and mirror to other 2)
  External Disk: 2 x 20Gb Disk Pack (/var/mail on one disk pack, mirror to other)
I realize that is an expensive proposition, but I know it would do the job efficiently. I would not necessarily use all 20Gb for /var/mail from the onset. I'd probably allocate about half of that and keep the rest in reserve. Stripe /var/mail across multiple spindles in the disk pack (use 4Gb drives in them) and go with a stripe width of 4 spindles. This would keep your IO down to a dull roar.
If you're interested in more ideas along these lines, let me know and I'll webify some documents that I have authored, put them up on my web site, and ship you the URL....
Another parting thought: you may want to consider splitting the load up across multiple machines. 5000 users on one machine of any size gives me the willies. If you lose the one machine, that is way too many people to have angry at me! :)
-- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ Peter L. Berghold, Sr. Unix Specialist, TCG http://www.monmouth.com/~peterb mailto:PeterB@Cyber-Wizard.Com VOX: (908) 392-2722 FAX: (908) 392-3578 ------------------------
from david<david@xtend.net> ----- In addition to OS configuration issues:
* Check the interval at which sendmail "runs the queue". I've seen folks set this to 15 minutes, for example -- and I almost always find it a Bad Idea. Why? Because the mail that is queued is queued because sendmail had trouble delivering the mail in the first place... so unless the problem is likely to have been fixed within the last 15 minutes, re-trying the delivery 15 minutes later is not useful. The interval should be set to some value such that re-trying the delivery makes sense; in practice, I have found 1 hour to work fairly well. In more extreme cases, it's possible to set up more complicated solutions that involve "aging" the problem mail and placing it in one of several queues, with different intervals for processing the queues.
* Be sure to avoid installing Sun patches against sendmail, if you're running (fairly) current sendmail from sendmail.org or isc.org.
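For reference, with stock sendmail the queue-run interval from the first point above is given on the daemon command line; a sketch, assuming the typical Solaris binary path:

```shell
# Start sendmail as a daemon, running the queue every hour rather than
# every 15 minutes (-bd: daemon mode, -q1h: queue interval of 1 hour).
/usr/lib/sendmail -bd -q1h
```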
david -- David Wolfskill david@xtend.net Pager: (408) 989-8067 david@pager.xtend.net -------------------
from Jonathan.Loh@BankAmerica.com ---------- Question: how are your mail mounts mounted? Are you at least mounting them with rw,hard,noac?
Are other machines linking to your system for mail, or does everyone have their own sendmail server? The reason I'm asking is that we had similar problems, till I realized everyone was NFS-mounting the mail directory incorrectly. --------------------
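A vfstab-style sketch of such a mount, using the options named above (the server name "mailhost" and the paths are placeholders, not from the original mail):

```shell
# /etc/vfstab entry: mount /var/mail over NFS read-write, hard, with
# attribute caching disabled ("mailhost" is a hypothetical server name).
# device          fsck  mount      type  pass  at-boot  options
mailhost:/var/mail  -   /var/mail  nfs   -     yes      rw,hard,noac
```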
from 0085P@VM1.CC.NPS.NAVY.MIL --------- Some quick impressions:
(1) Broken pipes, etc., generally signal that the system has run out of some resource. In SunOS 4.1.3, I know, I saw this a lot until I increased MAXUSERS in the kernel to allow more processes to run.
(2) Sendmail generally forks (starts) a new process for every incoming and outgoing message, for the duration of that message. So it sounds like your system may really be processing messages very slowly.
(3) If so, I'd check your communications and DNS. I've seen "slow" DNS lookups really bring mail to a halt on our systems.
Hope this helps! ------------------
from Joe<joe@technocraft.com> ------- You definitely need to add more RAM to that machine. You are correct: when it boots up, sendmail starts on its spool and users connect, and you're running out of memory so it can't fork a new process. You may want to try a different POP server too; the one from Qualcomm is good, probably with better handling of connections.
[-------------------------] Joe mailto:joe@technocraft.com http://joe.technocraft.com When ignorance reigns life is lost - RATM [-------------------------]
I have a mail server serving more than 5000 users.
The hardware:
Ultra-2, 128M memory, 400M swap
OS: Solaris 2.5.1
Daemon: Sendmail 8.8.8, pop3 server
Patch: (see the end of the mail)
Problem description:
The mail server is now running very slowly; of course that is from
the users' point of view; they send mail using the pop3 protocol.
Sometimes the server will reboot itself automatically. Before the
reboot nobody could log in. There are usually up to 50 or even more
sendmail daemons running simultaneously.
The messages digested from /var/adm/messages file:(at usual time)
-------------------
Dec 23 12:07:47 domain.mail.server sendmail[23573]: NOQUEUE: SYSERR: putoutmsg ([204.98.117.2]): error on output channel sending "220 domain.mail.server
ESMTP Sendmail 8.8.8/8.8.8; Tue, 23 Dec 1997 12:07:47 +0800 (CST)": Broken pipe
Dec 23 12:27:48 domain.mail.server inetd[142]: accept: Protocol error
Dec 23 12:29:52 domain.mail.server inetd[142]: accept: Protocol error
Dec 23 12:29:58 domain.mail.server sendmail[17622]: WAA17622: SYSERR(root)
: collect: I/O error on connection from relay.bt.net, from=<mcec@btinternet.
com>: Resource temporarily unavailable
-----------------
The message at reboot time:
-----------------------
Dec 23 13:17:43 domain.mail.server unix: out of per-user processes for uid
847
Dec 23 13:17:43 domain.mail.server last message repeated 7 times
Dec 23 13:17:43 domain.mail.server unix: out of processes
Dec 23 13:17:43 domain.mail.server unix: out of processes
Dec 23 13:17:43 domain.mail.server sendmail[14860]: NOQUEUE: SYSERR(root):
daemon: cannot fork: Resource temporarily unavailable
Dec 23 13:17:43 domain.mail.server unix: out of per-user processes for uid
847
Dec 23 13:17:44 domain.mail.server last message repeated 35 times
Dec 23 13:17:44 domain.mail.server unix: out of per-user processes for uid
847
--------------------------
Question:
1. Is there anybody so kind as to explain what the messages above mean?
Such as 'Broken pipe', 'inetd: accept: Protocol error', etc.
2. What causes this? Should it be a problem for Solaris 2.5.1 supporting
so many, over 5000, users?
3. I gathered some system statistics
the vmstat output:
--------------
% vmstat 5
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr s0 s1 s6 -- in sy cs us sy id
0 0 0 51984 21536 0 46 21 23 25 0 0 6 3 0 0 146 354 37 6 2 92
0 0 0 304896 34432 0 49 0 0 0 0 0 2 0 0 0 127 106 83 0 1 99
0 0 0 304600 34336 0 102 11 0 0 0 0 24 4 0 0 224 370 79 2 3 95
0 0 0 304912 34456 0 60 0 0 0 0 0 4 0 0 0 129 188 50 1 0 99
------------------
and iostat output:
--------------
iostat -ctx 5 2
extended disk statistics tty cpu
disk r/s w/s Kr/s Kw/s wait actv svc_t %w %b tin tout us sy wt id
sd0 1.0 5.1 21.2 51.1 0.0 0.3 49.2 0 6 0 38 6 2 4 88
sd1 0.1 3.3 1.0 28.0 0.0 0.2 53.6 0 3
sd6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
extended disk statistics tty cpu
disk r/s w/s Kr/s Kw/s wait actv svc_t %w %b tin tout us sy wt id
sd0 0.0 3.2 0.0 29.4 0.0 0.0 9.2 0 3 0 73 0 0 2 98
sd1 0.0 0.2 0.0 1.6 0.0 0.0 14.1 0 0
sd6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
----------------
and the df -k output.
------------
% df -k
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t0d0s0 96391 15143 71618 18% /
/dev/dsk/c0t0d0s3 145135 112135 18490 86% /usr/openwin
/dev/dsk/c0t0d0s4 482023 84832 348991 20% /var
/dev/dsk/c0t0d0s5 1522974 492245 878439 36% /var/mail
/dev/dsk/c0t0d0s6 482023 208424 225399 49% /usr
/dev/dsk/c0t0d0s7 962983 75066 791627 9% /data3
/dev/dsk/c0t1d0s0 1925934 497198 1236146 29% /usr/local/acct
/dev/dsk/c0t1d0s1 962983 34829 831864 5% /home1
/dev/dsk/c0t1d0s3 96391 40730 46031 47% /tmp
/dev/dsk/c0t1d0s4 192799 47128 126401 28% /opt
/dev/dsk/c0t1d0s5 760399 9 684360 1% /home2
-----------------------
Sendmail is using sd0, with which sometimes I can watch %b reach over 40.
Monitoring the system with SE3.0 also indicated disk I/O is somewhat
high (I get an amber display). But I don't think the high disk I/O is
causing the slow response, because %w is still 0; at least it's not the
main problem, right? Do I need to redistribute the disk load? Any advice?
Is the memory short?
4. SE3.0 also shows high tcp retransmission. I think it should be the
main problem. But my system has patches 103852-08 and 103630-05
installed, which were announced to solve bug id 1233827, "tcp retransmits
too much for short connections as seen at web sites". How does the tcp
retransmission remain so high? Do I need to install patches 103852-15
and 103630-09, or something else?
5. It's clear that the system resources had been used up at reboot time.
But what causes this? Maybe it is sendmail; it forks itself constantly
until it reaches 100, 490 or more, right? And I don't know where uid 847
comes from. I think each time the system reboots the uid causing this
should be different, but it's always 847 in the /var/adm/messages file.
Why?
I need your help; any advice/hint/light is welcome.
likun
likun@bjaimail.asiainfo.com
Asiainfo-Ctc Network Systems, Co.Ltd.
Patch list:
-------------
Patch: 103640-08 Obsoletes: 103658-02, 103920-05 Packages: SUNWkvm, SUNWcsu, SUNWcsr, SUNWcar
Patch: 104595-02 Obsoletes: Packages: SUNWkvm
Patch: 103591-06 Obsoletes: Packages: SUNWcsu, SUNWcsr
Patch: 103594-09 Obsoletes: Packages: SUNWcsu
Patch: 103600-12 Obsoletes: 103609-02 Packages: SUNWcsu, SUNWcsr
Patch: 103603-04 Obsoletes: Packages: SUNWcsu
Patch: 103612-23 Obsoletes: 103615-04, 103654-01 Packages: SUNWcsu, SUNWcsr, SUNWarc, SUNWnisu, SUNWsutl
Patch: 103622-05 Obsoletes: Packages: SUNWcsu, SUNWcsr, SUNWhea
Patch: 103623-02 Obsoletes: Packages: SUNWcsu
Patch: 103627-01 Obsoletes: Packages: SUNWcsu, SUNWcsr
Patch: 103630-05 Obsoletes: Packages: SUNWcsu, SUNWcsr
Patch: 103663-07 Obsoletes: 103683-01 Packages: SUNWcsu, SUNWcsr, SUNWhea
Patch: 103680-01 Obsoletes: Packages: SUNWcsu
Patch: 103690-03 Obsoletes: Packages: SUNWcsu
Patch: 103693-01 Obsoletes: Packages: SUNWcsu
Patch: 103696-02 Obsoletes: Packages: SUNWcsu, SUNWcsr
Patch: 103699-01 Obsoletes: Packages: SUNWcsu
Patch: 103738-03 Obsoletes: Packages: SUNWcsu
Patch: 103785-01 Obsoletes: Packages: SUNWcsu
Patch: 103810-03 Obsoletes: Packages: SUNWcsu
Patch: 103817-01 Obsoletes: Packages: SUNWcsu
Patch: 103834-01 Obsoletes: Packages: SUNWcsu
Patch: 103847-02 Obsoletes: Packages: SUNWcsu
Patch: 103867-01 Obsoletes: Packages: SUNWcsu, SUNWcsr
Patch: 103891-02 Obsoletes: Packages: SUNWcsu
Patch: 103934-04 Obsoletes: Packages: SUNWcsu, SUNWcsr, SUNWhea
Patch: 103981-04 Obsoletes: Packages: SUNWcsu, SUNWpd, SUNWpdu
Patch: 104029-01 Obsoletes: Packages: SUNWcsu
Patch: 104066-02 Obsoletes: Packages: SUNWcsu
Patch: 104072-01 Obsoletes: Packages: SUNWcsu, SUNWxcu4
Patch: 104081-01 Obsoletes: Packages: SUNWcsu
Patch: 104166-01 Obsoletes: Packages: SUNWcsu
Patch: 104182-01 Obsoletes: Packages: SUNWcsu
Patch: 104220-01 Obsoletes: Packages: SUNWcsu
Patch: 104227-01 Obsoletes: Packages: SUNWcsu
Patch: 104236-01 Obsoletes: Packages: SUNWcsu
Patch: 104245-03 Obsoletes: Packages: SUNWcsu, SUNWcsr, SUNWhea
Patch: 104263-01 Obsoletes: Packages: SUNWcsu
Patch: 104266-01 Obsoletes: Packages: SUNWcsu
Patch: 104277-01 Obsoletes: Packages: SUNWcsu
Patch: 104297-01 Obsoletes: Packages: SUNWcsu
Patch: 104312-03 Obsoletes: Packages: SUNWcsu, SUNWcsr, SUNWhea
Patch: 104317-01 Obsoletes: Packages: SUNWcsu
Patch: 104331-01 Obsoletes: Packages: SUNWcsu
Patch: 104334-01 Obsoletes: Packages: SUNWcsu
Patch: 104361-01 Obsoletes: Packages: SUNWcsu, SUNWarc
Patch: 104380-03 Obsoletes: Packages: SUNWcsu
Patch: 104389-01 Obsoletes: Packages: SUNWcsu
Patch: 104406-01 Obsoletes: Packages: SUNWcsu
Patch: 104433-02 Obsoletes: Packages: SUNWcsu
Patch: 104440-01 Obsoletes: Packages: SUNWcsu
Patch: 104447-01 Obsoletes: Packages: SUNWcsu
Patch: 104463-01 Obsoletes: Packages: SUNWcsu
Patch: 104478-01 Obsoletes: Packages: SUNWcsu
Patch: 104484-01 Obsoletes: Packages: SUNWcsu
Patch: 104490-01 Obsoletes: Packages: SUNWcsu
Patch: 104576-01 Obsoletes: Packages: SUNWcsu
Patch: 104578-01 Obsoletes: Packages: SUNWcsu
Patch: 104613-01 Obsoletes: Packages: SUNWcsu
Patch: 104642-01 Obsoletes: Packages: SUNWcsu
Patch: 104650-01 Obsoletes: Packages: SUNWcsu
Patch: 104652-01 Obsoletes: Packages: SUNWcsu
Patch: 104654-01 Obsoletes: Packages: SUNWcsu
Patch: 104664-01 Obsoletes: Packages: SUNWcsu
Patch: 104672-01 Obsoletes: Packages: SUNWcsu
Patch: 104692-01 Obsoletes: Packages: SUNWcsu
Patch: 104696-01 Obsoletes: Packages: SUNWcsu, SUNWesu
Patch: 104706-01 Obsoletes: Packages: SUNWcsu, SUNWxcu4
Patch: 104708-01 Obsoletes: Packages: SUNWcsu, SUNWhea, SUNWssadv
Patch: 104736-01 Obsoletes: Packages: SUNWcsu
Patch: 104560-01 Obsoletes: Packages: SUNWcsr
Patch: 104628-01 Obsoletes: Packages: SUNWcsr
Patch: 103582-08 Obsoletes: Packages: SUNWcsr
Patch: 103597-01 Obsoletes: Packages: SUNWcsr
Patch: 103618-02 Obsoletes: Packages: SUNWcsr
Patch: 103621-01 Obsoletes: Packages: SUNWcsr
Patch: 103734-01 Obsoletes: Packages: SUNWcsr
Patch: 103857-04 Obsoletes: Packages: SUNWcsr, SUNWhea
Patch: 103903-02 Obsoletes: Packages: SUNWcsr, SUNWhea
Patch: 103928-01 Obsoletes: Packages: SUNWcsr
Patch: 104201-01 Obsoletes: Packages: SUNWcsr
Patch: 104259-02 Obsoletes: Packages: SUNWcsr
Patch: 104283-01 Obsoletes: Packages: SUNWcsr
Patch: 104580-02 Obsoletes: Packages: SUNWcsr
Patch: 104607-01 Obsoletes: Packages: SUNWcsr
Patch: 104608-01 Obsoletes: Packages: SUNWcsr, SUNWhea
Patch: 104609-01 Obsoletes: Packages: SUNWcsr
Patch: 104615-01 Obsoletes: Packages: SUNWcsr
Patch: 103669-03 Obsoletes: Packages: SUNWcar
Patch: 103821-01 Obsoletes: Packages: SUNWcar
Patch: 103955-02 Obsoletes: Packages: SUNWcar
Patch: 104200-01 Obsoletes: Packages: SUNWcar
Patch: 104594-01 Obsoletes: Packages: SUNWcar, SUNWhea
Patch: 104604-02 Obsoletes: Packages: SUNWcar, SUNWhea
Patch: 104656-01 Obsoletes: Packages: SUNWcar
Patch: 104489-01 Obsoletes: Packages: SUNWtltk, SUNWtltkd
Patch: 103566-11 Obsoletes: Packages: SUNWxwplt, SUNWxwslb
Patch: 104338-01 Obsoletes: Packages: SUNWxwplt
Patch: 103901-03 Obsoletes: Packages: SUNWolrte, SUNWolinc, SUNWolslb
Patch: 104533-02 Obsoletes: Packages: SUNWolrte, SUNWolslb
Patch: 104287-01 Obsoletes: Packages: SUNWoldte
Patch: 103959-03 Obsoletes: Packages: SUNWscpu, SUNWlpu, SUNWlps
Patch: 104644-01 Obsoletes: Packages: SUNWscpu
Patch: 103866-02 Obsoletes: Packages: SUNWbcp
Patch: 104093-02 Obsoletes: Packages: SUNWoldst
Patch: 104162-02 Obsoletes: Packages: SUNWoldst
Patch: 104453-01 Obsoletes: Packages: SUNWoldst
Patch: 104538-01 Obsoletes: Packages: SUNWolimt
Patch: 103900-01 Obsoletes: Packages: SUNWowbcp
Patch: 104596-03 Obsoletes: Packages: SUNWpd
Patch: 104605-01 Obsoletes: Packages: SUNWpd, SUNWhea
Patch: 104010-01 Obsoletes: Packages: SUNWvolu
Patch: 103558-07 Obsoletes: Packages: SUNWadmap, SUNWadmc, SUNWsadml
Patch: 103879-04 Obsoletes: Packages: SUNWkcsrt
Patch: 104635-01 Obsoletes: Packages: SUNWdoc
Patch: 104658-01 Obsoletes: Packages: SUNWdoc
Patch: 103948-01 Obsoletes: Packages: SUNWaccu
Patch: 104482-01 Obsoletes: Packages: SUNWpppk
Patch: 104516-01 Obsoletes: Packages: SUNWapppu
Patch: 103750-02 Obsoletes: Packages: SUNWarc
Patch: 104637-01 Obsoletes: Packages: SUNWarc
Patch: 103911-01 Obsoletes: Packages: SUNWbnuu
Patch: 103839-02 Obsoletes: Packages: SUNWbtool
Patch: 104208-01 Obsoletes: Packages: SUNWbtool
Patch: 103743-01 Obsoletes: Packages: SUNWfns
Patch: 103606-02 Obsoletes: Packages: SUNWhea, SUNWtoo
Patch: 104094-01 Obsoletes: Packages: SUNWhea
Patch: 104212-03 Obsoletes: Packages: SUNWhmd, SUNWhmdu
Patch: 104246-03 Obsoletes: Packages: SUNWhmd, SUNWhmdu
Patch: 103643-04 Obsoletes: Packages: SUNWnisu
Patch: 103686-02 Obsoletes: Packages: SUNWnisu
Patch: 103770-02 Obsoletes: Packages: SUNWnisu
Patch: 103995-01 Obsoletes: Packages: SUNWnisu
Patch: 104475-01 Obsoletes: Packages: SUNWnisu
Patch: 104443-02 Obsoletes: Packages: SUNWpcmcu
Patch: 104710-01 Obsoletes: Packages: SUNWssaop
Patch: 104280-01 Obsoletes: Packages: SUNWtoo
Patch: 103794-03 Obsoletes: Packages: SUNWsx, SUNWsxow, SUNWsxxgl, SUNWsxxil
Patch: 104239-01 Obsoletes: Packages: SUNWdial
Patch: 103796-08 Obsoletes: Packages: SUNWffb, SUNWffbcf, SUNWffbw, SUNWvlxil, SUNWffbxg
Patch: 103795-02 Obsoletes: Packages: SUNWleow, SUNWleor, SUNWleoo
Patch: 103793-01 Obsoletes: Packages: SUNWtcxow
Patch: 103801-04 Obsoletes: Packages: SUNWsprot, SUNWxcu4t
This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:12:29 CDT