[Ethereal-dev] Stop capture trigger. RFC
Hi list.
I have recently been thinking about adding the capability for
ethereal/tethereal to use a trigger that stops a live capture.
These are my ideas; please comment.
First of all, implementing the stop capture trigger as a filter string in the
display filter language is probably not practical, since it would require
[t]ethereal to do a full decode of every packet just to evaluate the trigger
filter, and would therefore be slow. It would probably not be practical or
realistic for high-speed captures.
What about this instead:
If a "stop capture trigger" is enabled in the capture dialog
[t]ethereal would create TWO capture sessions instead of as currently one.
The first capture handle would be for the real capture and would apply the
normal
capture filter and work as capturing does today.
The second capture handle would capture from the same network interface but
specify a different
capture filter string. The stop capture trigger filter string.
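
Something like this, perhaps (a rough, untested libpcap sketch; the device
name and filter strings below are just placeholders, and error handling is
minimal):

#include <pcap.h>
#include <stdio.h>

/* Open one capture handle on "dev" with the given pcap filter string. */
static pcap_t *open_handle(const char *dev, const char *filter)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    struct bpf_program fp;
    pcap_t *p = pcap_open_live(dev, 65535, 1, 100, errbuf);

    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return NULL;
    }
    if (pcap_compile(p, &fp, filter, 1, 0) < 0 ||
        pcap_setfilter(p, &fp) < 0) {
        fprintf(stderr, "filter \"%s\": %s\n", filter, pcap_geterr(p));
        pcap_close(p);
        return NULL;
    }
    pcap_freecode(&fp);
    return p;
}

The capture setup would then call something like this twice on the same
interface: once with the normal capture filter and once with the stop
trigger filter string from the dialog.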
Packets received on the first handle would be handled just as they are today.
When the first packet is received on the second handle, [t]ethereal would
close that handle and set a global variable indicating that the stop trigger
has been activated.
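
For the second handle this could be as simple as a callback that raises the
flag (again just a sketch with made-up names; in the sketch the handle itself
gets closed back in the capture loop once the flag is noticed, rather than
from inside the callback):

static volatile int stop_trigger_fired = 0;

/* Callback for the second (trigger) handle: any packet that makes it
   through the trigger filter just raises the flag; the packet itself
   is not written anywhere. */
static void trigger_cb(u_char *user, const struct pcap_pkthdr *h,
                       const u_char *bytes)
{
    stop_trigger_fired = 1;
}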
In the capture loop for the first capture handle, the code would see that the
trigger condition has occurred and initiate the stop capture sequence.
This should be combined with the existing "stop capture after ..." code, but
when the stop capture trigger feature is enabled, those "stop capture
after ..." conditions would only be armed once the trigger has occurred.
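
Continuing the sketch above, the capture loop could look roughly like this;
main_cb stands in for whatever [t]ethereal already does with captured
packets, and stop_after_pkts stands in for the existing "stop capture
after ..." setting:

static unsigned pkts_after_trigger = 0;

static void main_cb(u_char *user, const struct pcap_pkthdr *h,
                    const u_char *bytes)
{
    /* ... existing code that writes the packet to the capture file ... */
    if (stop_trigger_fired)
        pkts_after_trigger++;   /* only start counting once triggered */
}

static void capture_loop(pcap_t *main_h, pcap_t *trig_h,
                         unsigned stop_after_pkts)
{
    int done = 0;

    while (!done) {
        pcap_dispatch(main_h, 1, main_cb, NULL);

        if (trig_h != NULL) {
            /* poll the trigger handle; once it fires, close it */
            pcap_dispatch(trig_h, 1, trigger_cb, NULL);
            if (stop_trigger_fired) {
                pcap_close(trig_h);
                trig_h = NULL;
            }
        }

        /* the existing "stop capture after ..." check, armed only
           after the trigger has fired */
        if (stop_trigger_fired && pkts_after_trigger >= stop_after_pkts)
            done = 1;
    }
    pcap_close(main_h);
}

A real implementation would probably also want both handles in non-blocking
mode (pcap_setnonblock()) so that neither dispatch call stalls the other.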
I assume delivery from the OS to the application is asynchronous for the two
handles, so we would not be able to stop the capture at exactly
x packets/seconds/kbytes after the trigger was activated, but it might be
close enough. It would be better than nothing.
I do not know how this would affect capture performance. Does opening TWO
capture sessions on the same interface with two different pcap filters mean
that CPU requirements increase by a factor of two?
Would this be feasible? Guy?
If it is feasible, would anyone be interested in hacking up an
implementation of this?