Ethereal-dev: Re: [Ethereal-dev] Display filter as stop condition

Note: This archive is from the project's previous web site, ethereal.com. This list is no longer active.

From: Guy Harris <guy@xxxxxxxxxxxx>
Date: Wed, 29 Oct 2003 22:41:09 -0800

On Oct 27, 2003, at 9:30 AM, sford@xxxxxxxxxxxxx wrote:

But I do have a problem.  When I DON'T provide a capture filter (to
cut down the incoming rate), it does seem to run far behind and
miss lots of packets.  (Even though the cpu is mostly idle and it's
not taking all that much memory!)  Then when it exits, it thinks there
were 0 dropped packets!

Just because libpcap thinks there were 0 dropped packets, that doesn't mean there weren't any. Perhaps one of the following applies (a sketch of where those numbers come from follows the list):

1) the code in the OS kernel isn't counting dropped packets properly;

2) the packets are being dropped at a layer where the PF_PACKET code doesn't get to count them;

3) you're running on a system with a 2.2[.x] kernel and no turbopacket patches, in which case the kernel simply doesn't report dropped packet statistics when capturing;

4) you're running on a system whose libpcap wasn't built on a system with the packet-statistics stuff available, so it doesn't even know it *can* get the packet statistics.
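
For what it's worth, those numbers come from "pcap_stats()". Here's a minimal sketch of reading them at the end of a capture; the device name is just a placeholder:

#include <pcap.h>
#include <stdio.h>

/*
 * Minimal sketch: read libpcap's receive/drop counters after capturing.
 * "eth0" is just a placeholder device name.
 */
int main(void)
{
	char errbuf[PCAP_ERRBUF_SIZE];
	struct pcap_stat ps;
	pcap_t *p;

	p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
	if (p == NULL) {
		fprintf(stderr, "pcap_open_live: %s\n", errbuf);
		return 1;
	}

	/* ...capture packets here... */

	if (pcap_stats(p, &ps) == 0)
		printf("received %u, dropped %u\n", ps.ps_recv, ps.ps_drop);
	else
		fprintf(stderr, "pcap_stats: %s\n", pcap_geterr(p));

	pcap_close(p);
	return 0;
}

In any of the cases above, "ps_drop" can come back 0 even though packets were lost.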

When I read in the captured file (without using ANY filters at all),
it seems to take a very long time, even though the cpu STAYS mostly idle.

It's trying to do IP-address-to-name resolution?
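Running it with "-n" to turn off name resolution would be one way to check.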

Ronnie Sahlberg wrote:
... display filters ... require all the packets to be fully
dissected.  This ... starts consuming more and more memory while
tethereal runs.  ... capture filters do not ... cause
the internal state in tethereal to start building up.

That sounds bad.  I did not realize that the longer tethereal runs,
the more memory it will consume if display filters are used.  Why
is this?  Memory leak bug?  Or just the nature of display filters?

Just the nature of display filters. They have to fully dissect packets, and correctly dissecting packets might require keeping various bits of state around (matching requests with their replies, or reassembling messages split across several packets, for example), as per the replies to your earlier messages asking about that.

As far as implementing a stop condition with a capture filter, I
can't see how to do that.  Capture filters are handled within
the pcap lib with pcap_setfilter().  My understanding is that
it performs the packet parsing and filtering in kernel space.

It does the filtering in kernel space if it's doing a live capture, rather than reading a capture file, and if the kernel can do in-kernel filtering. Otherwise, it's done in user mode.

Is there a user-mode interface to do it?

"bpf_filter()".  It takes as arguments:

1) a pointer to the first instruction of a compiled BPF program (as compiled by "pcap_compile()") - the "bf_insns" member of a "struct bpf_program" is such a pointer;

2) a pointer to the first byte of the raw packet data;

3) the length of the packet as it appeared on the wire (the "len" member of a "pcap_pkthdr");

4) the captured length of the packet (the "caplen" member of a "pcap_pkthdr");

and returns the return value of the BPF program - if it's zero, the packet didn't match the filter, and if it's non-zero, it did.
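
For example, here's a rough user-mode sketch of using it as a stop condition; the interface name, the "tcp port 80" stop expression, and the dispatch loop are just placeholders, not anything tethereal actually does:

#include <pcap.h>
#include <stdio.h>

static struct bpf_program stop_prog;	/* compiled stop condition */
static int stop_seen = 0;

static void handler(u_char *user, const struct pcap_pkthdr *h,
    const u_char *bytes)
{
	(void)user;

	/* bpf_filter() returns non-zero if the packet matches the
	   compiled program. */
	if (bpf_filter(stop_prog.bf_insns, (u_char *)bytes, h->len,
	    h->caplen) != 0)
		stop_seen = 1;

	/* ...normal per-packet processing would go here... */
}

int main(void)
{
	char errbuf[PCAP_ERRBUF_SIZE];
	pcap_t *p;

	p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
	if (p == NULL) {
		fprintf(stderr, "pcap_open_live: %s\n", errbuf);
		return 1;
	}

	/* Compile the stop condition once; netmask 0 is good enough
	   unless the expression needs "broadcast". */
	if (pcap_compile(p, &stop_prog, "tcp port 80", 1, 0) == -1) {
		fprintf(stderr, "pcap_compile: %s\n", pcap_geterr(p));
		return 1;
	}

	while (!stop_seen) {
		if (pcap_dispatch(p, 1, handler, NULL) == -1) {
			fprintf(stderr, "pcap_dispatch: %s\n", pcap_geterr(p));
			break;
		}
	}

	pcap_freecode(&stop_prog);
	pcap_close(p);
	return 0;
}

That costs an extra interpretation of a BPF program per packet in user mode, but it works whether or not the capture filter itself is being run in the kernel, and it works when reading from a capture file.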

That's not documented; we ("we" the libpcap developers, not "we" the Ethereal developers) should either document it, or write a pcap wrapper and document that.

Ideally the capture filter stop condition would be applied in kernel
space with only one packet parse.  Is there a way to communicate back
to user space that the trigger has happened?  I assume this would
require a change to the pcap lib.

It would require a change to the *kernel* - and that won't help if the kernel doesn't have a BPF interpreter (as is the case in many OSes).

Also, it could be done with one packet parse only if the input filter and stop filter programs were combined *and* there were a way to separately return, to the kernel code that calls it, an indication of whether the packet passed the capture filter and an indication of whether it passed the stop filter.
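
In user mode, getting the two separate indications is trivial, at the cost of running both programs over the packet; a sketch, with made-up flag names:

#include <pcap.h>

#define PASSED_CAPTURE_FILTER	0x1
#define PASSED_STOP_FILTER	0x2

/*
 * Sketch only: run two separately compiled programs over one packet
 * and report the two results independently.  Doing the same thing in
 * the kernel with a single combined program is what would need new
 * kernel and libpcap interfaces.
 */
static unsigned int
check_filters(const struct bpf_program *capture_prog,
    const struct bpf_program *stop_prog,
    const struct pcap_pkthdr *h, const u_char *pkt)
{
	unsigned int flags = 0;

	if (bpf_filter(capture_prog->bf_insns, (u_char *)pkt, h->len,
	    h->caplen) != 0)
		flags |= PASSED_CAPTURE_FILTER;
	if (bpf_filter(stop_prog->bf_insns, (u_char *)pkt, h->len,
	    h->caplen) != 0)
		flags |= PASSED_STOP_FILTER;
	return flags;
}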