
Re: network throughput



Hi,
It's certainly possible that it's a problem with the particular boxes I am
using, or the network - though the boxes are connected to adjacent ports
of a switch.

Anyway, I will try moving to the newer ethernet.c.

I know there isn't a public release of > 2.4.14 for devboard lx, but will
it be possible to just use the new ethernet.c and mm/init.c (for
prepare_rx_descriptor) while keeping the rest of the kernel code the
same (2.4.14)?

Thanks, 
akshay

Mikael Starvik wrote:
> 
> Hi,
> 
> >Thanks for the program... I am not sure exactly where the problem is, but
> >I consistently get UDP as well as TCP throughput of only around 1.2 to 1.5
> >MBytes/s using tcp_perf.
> 
> It's hard to say why you get much lower performance than other people.
> It may be a duplex problem. Is the card connected directly to a
> switch/router?
> 
> I think you need to look for the problem in your network
> (or on your developer board) and not in the software.
> Everybody else seems to get better performance than you.
> Try different hubs, client computers, etc.
> 
> >The thing I am more concerned about is the overruns, since they are
> >causing the box to lock up...
> 
> Between 2.4.14 and 2.4.19 a bug in the ETRAX was detected.
> This bug can cause problems at Ethernet overrun. It has been
> worked around in 2.4.19 (by calling prepare_rx_descriptor).
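
(For context, a minimal sketch of how such a workaround might sit in the
driver's overrun path. This is purely illustrative: prepare_rx_descriptor is
the name Mikael mentions, but its signature and the helpers around it are
assumptions, not the actual ethernet.c code.)

    /* Illustrative sketch only -- not the real ETRAX driver code.
     * Assumed idea: after an RX overrun, the descriptor the DMA stopped on
     * is re-initialized before the DMA is restarted, so the bad descriptor
     * state caused by the hardware bug is never handed back to the DMA.
     */
    struct rx_descr;                                        /* hardware descriptor, opaque here */

    extern void prepare_rx_descriptor(struct rx_descr *d);  /* name from 2.4.19, signature assumed */
    extern void restart_rx_dma(void);                       /* hypothetical helper */

    static void handle_rx_overrun(struct rx_descr *stopped)
    {
            prepare_rx_descriptor(stopped); /* re-arm the descriptor the DMA stopped on */
            restart_rx_dma();               /* then let reception continue */
    }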
> 
> >Is this correct?
> 
> Not really with the latest driver. Something like this happens
> (assuming that all packets are > RX_COPYBREAK):
> 
> 1. Packet is received by DMA
> 2. ETRAX generates interrupt
> 3. Interrupt is disabled
> 4. For all received packets:
>    1. Put packet on a list for later handling by the kernel.
>    2. New memory is allocated for the DMA.
>    3. DMA is restarted
> 5. Interrupt is enabled
> 6. DMA is restarted
> 
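Roughly, those steps in C (an illustrative sketch, not the actual ethernet.c;
every helper name below is made up, only the order of operations follows the
list above):

    /* Illustrative sketch of the receive path described above. */
    struct sk_buff;                                      /* opaque, as in the kernel */

    extern int  rx_ring_has_packet(void);                /* hypothetical: next descriptor done? */
    extern struct sk_buff *take_received_skb(void);      /* hypothetical: detach skb from ring  */
    extern void hand_to_kernel(struct sk_buff *skb);     /* hypothetical: e.g. via netif_rx()   */
    extern void attach_fresh_buffer(void);               /* hypothetical: new memory for DMA    */
    extern void restart_rx_dma(void);
    extern void disable_rx_irq(void);
    extern void enable_rx_irq(void);

    void rx_interrupt(void)                              /* entered at step 2 */
    {
            disable_rx_irq();                            /* 3. interrupt is disabled    */
            while (rx_ring_has_packet()) {               /* 4. for all received packets */
                    hand_to_kernel(take_received_skb()); /* 4.1 list for the kernel     */
                    attach_fresh_buffer();               /* 4.2 new memory for DMA      */
                    restart_rx_dma();                    /* 4.3 DMA is restarted        */
            }
            enable_rx_irq();                             /* 5. interrupt is enabled     */
            restart_rx_dma();                            /* 6. DMA is restarted         */
    }
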
> In your TCP test the TCP window should throttle the data
> before the DMA gets out of buffers (unless you have
> lots of other traffic).
> 
> >-In that case, what is number of slots in the DMA ring on the card?
> >(Mikael had mentioned that there are 64 OS buffers, but presumably the
> >card has less - if I configure the gap in tcp_perf's UDP send routine to
> >anything more than 25 packets, there are overruns...)
> 
> The maximum number of "slots" is 64. The average number of
> usable slots is approximately 45 (for performance reasons and
> because of the hardware bug).
> 
> For the transmitter we have 256 "slots", i.e. 256 packets
> can be queued up for transmission.
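
(In other words, something like the following; the macro and array names here
are made up for illustration, and the sizes are just the figures quoted above.)

    /* Illustrative only -- not the actual ethernet.c definitions. */
    #define NBR_OF_RX_DESC   64      /* max 64 RX slots, roughly 45 usable in practice */
    #define NBR_OF_TX_DESC  256      /* up to 256 packets can be queued for transmit   */

    struct dma_descr {               /* simplified stand-in for the HW descriptor */
            unsigned short sw_len;
            unsigned short ctrl;
            unsigned char *buf;
            struct dma_descr *next;
    };

    static struct dma_descr rx_ring[NBR_OF_RX_DESC];
    static struct dma_descr tx_ring[NBR_OF_TX_DESC];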
> 
> >- in the latest version of ethernet.c, there is the RX_COPYBREAK
> >optimization Mikael mentioned - "double buffering",
> >right? Is there any data on how this threshold (currently 256 bytes) was
> >arrived at, and if it might be better to do double buffering for all
> >packet sizes?
> 
> Packets shorter than RX_COPYBREAK are copied to avoid wasting
> too much memory. Longer packets are not copied. Performance
> is decreased by setting RX_COPYBREAK to a value larger than
> the maximum Ethernet packet size.
> 
> The value 256 is fairly arbitrary. Other values don't affect
> performance with full-size packets but may affect performance
> and memory usage with small packets.
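
The copybreak decision itself is roughly this (an illustrative sketch, not the
actual ethernet.c; dev_alloc_skb/skb_put/netif_rx are the usual 2.4 kernel
calls, while replace_ring_buffer() and MAX_ETHER_FRAME are made-up names):

    /* Illustrative sketch of the RX_COPYBREAK idea described above. */
    #include <linux/skbuff.h>
    #include <linux/netdevice.h>
    #include <linux/string.h>

    #define RX_COPYBREAK     256     /* value from the driver, as discussed */
    #define MAX_ETHER_FRAME 1518     /* assumed full-size frame length      */

    extern void replace_ring_buffer(struct sk_buff *fresh);   /* hypothetical helper */

    /* dma_skb is the full-size buffer the packet was DMAd into, length is the
     * received packet length (skb->dev and protocol setup omitted for brevity). */
    static void deliver_packet(struct sk_buff *dma_skb, int length)
    {
            if (length < RX_COPYBREAK) {
                    /* Small packet: copy into a right-sized skb so the big DMA
                     * buffer can stay in the ring (the copy is cheap, and no
                     * full-size buffer gets tied up by a tiny packet). */
                    struct sk_buff *skb = dev_alloc_skb(length + 2);
                    if (!skb)
                            return;                      /* drop on allocation failure */
                    skb_reserve(skb, 2);                 /* align the IP header */
                    memcpy(skb_put(skb, length), dma_skb->data, length);
                    netif_rx(skb);
            } else {
                    /* Large packet: no copy ("double buffering") -- hand the DMA
                     * buffer itself to the stack and put a freshly allocated
                     * full-size buffer back into the ring. */
                    skb_put(dma_skb, length);
                    netif_rx(dma_skb);
                    replace_ring_buffer(dev_alloc_skb(MAX_ETHER_FRAME + 2));
            }
    }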