Thanks to everyone who sent me suggestions about my
TCP/UDP results.
I tried various possibilities, and there were a few
reasons for my anomalous throughput:
1. For TCP, I changed my send and receive buffer sizes
   using the SO_SNDBUF and SO_RCVBUF socket options. I thought
   I had done the same thing before by editing the kernel
   files and rebuilding the kernel, but for some reason
   I got better results by setting these options.
   If anyone knows the difference between these two methods,
   please let me know.
2. To get accurate results, benchmarks need to run
   on identically configured machines, but because
   similar machines were unavailable, in some cases
   I ran between two different vendors' machines.
   Because one Ethernet driver was more efficient than
   the other, the receiving machine received all the
   packets but dropped them before putting them into
   memory via DMA.
   This time I chose a different pair of machines
   with similar Ethernet drivers.
3. The memory fluctuation may have been partly due to
   the retransmission of packets in TCP; accordingly,
   I saw the fluctuation in both the TCP send and receive cases.
   Another reason could be the Ethernet interrupt: in some
   cases with UDP, I saw the fluctuation in the receive case.
   The Ethernet interrupt has high priority, so whenever an
   interrupt occurred the kernel put the process
   to sleep and took care of the I/O. Therefore, while
   monitoring, I saw a burst of work and then sleep again.
I have gotten much better results with all the machines except
the DECstation 5000: at packet sizes of 1100 bytes and above,
the TCP receive rate dropped significantly, and the CPU
utilization for TCP send also dropped significantly.
If anyone has any suggestions on this, please let me know.
This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:06:36 CDT