Ethereal-dev: Re: [Ethereal-dev] How to stop Linux packet capture from dropping packets?
On Sat, 20 Apr 2002, Guy Harris wrote:
> On Sat, Apr 20, 2002 at 05:36:34PM +0930, Richard Sharpe wrote:
> > Hmmm, since I did not need to see all 16384 bytes sent on lo, I cut the
> > snaplen back to 1500 (which is still more than I needed), and I managed
> > to lose only some 17,000 packets out of 270,000 or so.
>
> I'm still curious to see whether cranking the socket buffer size up
> helps. At some point I may add to libpcap a "pcap_open_live_ext()" API,
> or something such as that, with an additional buffer size argument, so
> libpcap can set the size (semi-)portably ("semi-" because the precise
> effect of the buffer size may differ from platform to platform, and it
> may not be settable at all on some platforms), and I'm curious whether
> it'd make a difference on Linux.
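For what it's worth, sketched from that description, the extended open
might look something like this (the name, signature, and argument order
are all hypothetical -- nothing like it exists in libpcap today):

    #include <pcap.h>

    /* Hypothetical -- sketched from the description above. */
    pcap_t *pcap_open_live_ext(const char *device, int snaplen,
                               int promisc, int to_ms,
                               int buffer_size,  /* new: kernel buffer, bytes */
                               char *errbuf);

    /* e.g., a 512 KB kernel buffer on eth0:
     *   pcap_t *p = pcap_open_live_ext("eth0", 1500, 1, 1000,
     *                                  524288, errbuf);
     */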
I pushed the default and max socket buffer sizes up to 524288, but still
lost packets on a 750 MHz Duron.
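(Those knobs being /proc/sys/net/core/rmem_default and rmem_max, for
anyone following along. On the application side the per-socket request
would look roughly like the sketch below; the kernel caps SO_RCVBUF
requests at rmem_max, which is why the sysctl has to go up first.)

    /* Sketch: requesting a bigger receive buffer on the kind of
     * PF_PACKET socket libpcap uses on Linux; error checks elided. */
    #include <sys/socket.h>
    #include <linux/if_ether.h>   /* ETH_P_ALL */
    #include <arpa/inet.h>        /* htons() */

    int open_capture_socket(void)
    {
        int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        int bufsize = 524288;
        /* Silently capped at net.core.rmem_max, so raising the
         * sysctl is what lets a large request take effect. */
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize));
        return fd;
    }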
> (New API because, on BSD, you can only set the buffer size on a BPF
> device before you bind it to an interface, and "pcap_open_live()" binds
> the device to the interface you're opening.)
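(Concretely, that ordering constraint is the BIOCSBLEN-before-BIOCSETIF
dance from bpf(4) -- a sketch, error handling elided:)

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <net/bpf.h>
    #include <fcntl.h>
    #include <string.h>

    int open_bpf(const char *ifname)          /* e.g. "fxp0" */
    {
        int fd = open("/dev/bpf0", O_RDWR);
        u_int blen = 524288;
        ioctl(fd, BIOCSBLEN, &blen);          /* must happen before... */

        struct ifreq ifr;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name) - 1);
        ioctl(fd, BIOCSETIF, &ifr);           /* ...binding to the interface */
        return fd;
    }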
>
> > Doesn't help. I still lose packets over GigE. Looking at the code it may
> > be because capture applications are not woken up (unless they are in
> > immediate mode) until the buffer is full.
>
> But there are two buffers in BPF, the store buffer and the hold buffer,
> so, whilst you only get woken up when the store buffer fills up, it gets
> rotated to be the hold buffer, with a fresh store buffer rotated in, so
> it's not as if you have to drain the buffer, in its entirety, the
> instant that it fills up. (You have to drain it before the new store
> buffer fills, otherwise the hold buffer can't be rotated in to be the
> new store buffer, and stuff gets discarded.)
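(So the rotation is effectively the following -- my sketch of the logic
as you describe it, not the actual kernel code:)

    /* Sketch of BPF's two-buffer rotation -- illustrative only. */
    struct bpf_bufs {
        char *store, *hold;    /* store: being filled; hold: being read */
        int store_len, hold_len, drops;
    };

    static void on_store_full(struct bpf_bufs *d)
    {
        if (d->hold_len == 0) {        /* reader has drained the hold buffer */
            char *t = d->store;        /* rotate: full store becomes hold, */
            d->store = d->hold;        /* old hold becomes the new store */
            d->hold = t;
            d->hold_len = d->store_len;
            d->store_len = 0;
            /* ...and wake the capture process to read the hold buffer */
        } else {
            d->drops++;                /* both buffers full: data discarded */
        }
    }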
OK -- when I looked at the code my eyes were starting to glaze over, so I
missed that bit. Perhaps if I increase the ring of buffers to three or
four, I will lose fewer packets (roughly the generalization sketched
below).
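(A sketch under the same caveats -- NBUFS, the field names, and the drop
rule are all mine, not anything in the BPF code:)

    /* Sketch: generalizing the store/hold pair to a ring of NBUFS
     * buffers; NBUFS == 2 degenerates to the scheme above. */
    #define NBUFS 4

    struct buf_ring {
        char *buf[NBUFS];
        int   len[NBUFS];    /* 0 == drained and reusable */
        int   fill, drain;   /* kernel fills buf[fill]; reader drains buf[drain] */
        int   drops;
    };

    static void on_buffer_full(struct buf_ring *r)
    {
        int next = (r->fill + 1) % NBUFS;
        if (r->len[next] != 0)
            r->drops++;      /* ring full: reader too slow, discard */
        else
            r->fill = next;  /* advance; wake the reader if it's waiting */
    }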
Regards
-----
Richard Sharpe, rsharpe@xxxxxxxxxx, rsharpe@xxxxxxxxx,
sharpe@xxxxxxxxxxxx