Wireshark-bugs: [Wireshark-bugs] [Bug 1814] Capture filters not work when capturing from named pipes
Date: Wed, 11 May 2016 20:23:26 +0000

changed bug 1814


What    Removed    Added
CC                 wireshark@goanime.net

Comment # 6 on bug 1814 from
Still an issue in 2.0.2, at least on OS X.

> As such, doing bpf_filter() on the packets when reading from a pipe is
> probably the right idea, if doable.  *HOWEVER*, doing the filtering in the
> program writing to the pipe (or, if that program is getting its input from
> another program, e.g. a tcpdump/WinDump run over ssh, doing it in *that*
> program - i.e., doing it as early as possible in the pipeline) is *STILL* a
> good idea, as per my previous comment.

I agree, doing it as early in the pipeline as is feasible is a good idea.
There are cases, though, where filtering at capture time is not an option:
for example, a forensic system that is capturing all network data, or an
unfiltered capture file that was taken by someone else.  I deal with both,
but the main pain point is trying to do analysis on forensic data.  In that
case the work pipeline is: forensic capture -> forensic export -> local
analysis.
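
When filtering at the source is possible, the approach the quoted comment
describes boils down to running the capture filter in the remote program and
only shipping matching packets back.  Roughly (host, interface, and filter
below are placeholders, not my actual setup):

    # capture filter runs remotely in tcpdump; only matching packets cross the wire
    ssh user@capturehost "tcpdump -i eth0 -U -w - 'host 192.0.2.10'" | wireshark -k -i -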

Forensic captures, by design, are unfiltered, so filtering in the capture isn't
an option.  Similarly, when working with an existing capture file it isn't
possible to change the capture options that were used at the time.

The next stage would be the forensic export.  This is the prime spot to
filter -if- you know what can safely be excluded or included.  Often that is
the case, but occasionally I need to see multi-system behavior.  That means
taking a look at the unfiltered data to get an idea of what can be excluded.
Exclude it, reanalyze, and repeat the process to try to whittle down the
data set size.
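
One way to do that first look from the command line is tshark's statistics
output, which avoids a full analysis pass; something along these lines (the
chunk file name is only an example):

    # protocol hierarchy - which protocols make up the bulk of the chunk
    tshark -r chunk_00001.pcap -q -z io,phs
    # busiest IP conversations - candidates for include/exclude filters
    tshark -r chunk_00001.pcap -q -z conv,ip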

That would be great, except that analysis on the forensic system itself
isn't exactly fast, and the information it does provide is inferior to what
is available in local tools.  That means grabbing the entire data set,
splitting it into manageable chunks with editcap, and then looking at the
different chunks in parallel to see what can be safely excluded.
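
The splitting step itself is just editcap's packets-per-file option; roughly
(the chunk size and file names are only illustrative):

    # split the full export into ~1,000,000-packet files for parallel review
    editcap -c 1000000 full_export.pcap chunk.pcap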

At that point I already have a complete copy of the data, so the fastest and
most efficient thing to do would be to use the local data file as a source and
filter it.  I tried that repeatedly with dumpcap and tshark.  Each finished
processing the local source file in about 8 minutes, far less time than it
would take to try to filter and export from the forensic system again.  The
only problem is that, due to this bug, dumpcap and tshark simply gave a copy of
the original source file instead of a filtered, reduced file.
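
For clarity, the sort of invocation I mean is roughly the following (the
filter and file names are just examples); because of this bug, the output
file ends up being an unfiltered copy of the input:

    # feed the local file in over a pipe and apply a capture (BPF) filter
    cat full_export.pcap | dumpcap -i - -f "not port 443" -w filtered.pcap
    cat full_export.pcap | tshark  -i - -f "not port 443" -w filtered.pcap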

A possible alternative, at least for small data sets / source files, would
be to use a display filter in Wireshark and then save only the displayed
packets to a new file.  When the source file is more than a few hundred MB,
though, the time required to both open and filter the file is very
significant.  When working
with multi-GB source files (12.5 GB / 25 million packets most recently) that
method simply isn't a viable alternative.  Instead, that is where dumpcap /
tshark come into play.  They should be able to take an input file (stdin / pipe
/ socket), apply a filter, and write the filtered data to an output file, which
is exactly what the man pages imply should be happening already.
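
For smaller files, the command-line version of that "display filter, then
save the displayed packets" route is tshark with a display filter, something
like this (filter and file names again just examples):

    # dissect, keep only packets matching the display filter, write them out
    tshark -r full_export.pcap -Y "ip.addr == 192.0.2.10" -w subset.pcap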

