From: wireshark-dev-bounces@xxxxxxxxxxxxx [mailto:wireshark-dev-bounces@xxxxxxxxxxxxx] On Behalf Of Evan Huus

On Jul 1, 2014, at 3:51, Anders Broman <anders.broman@xxxxxxxxxxxx> wrote:
> I don't think there's a way to preopen the next file because it hasn't
> been created until packets have actually been written to it. We could
> move the close into a separate thread, but I doubt that's going to be
> slow anyway.
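A minimal sketch of the threaded-close idea, assuming plain stdio and
GLib threads; the capture loop would hand the old FILE* to a worker and
carry on writing to the new file:

    #include <glib.h>
    #include <stdio.h>

    /* Worker: close the finished capture file off the capture thread so
     * a slow close (flush, NFS, etc.) cannot stall packet intake. */
    static gpointer
    close_worker(gpointer data)
    {
        fclose((FILE *)data);
        return NULL;
    }

    static void
    async_close(FILE *old_fp)
    {
        /* Fire-and-forget: drop our reference to the thread right away. */
        g_thread_unref(g_thread_new("file-close", close_worker, old_fp));
    }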
>> Another possibility that might boost performance is to remove
>> reassembled data from the hash table and store it in per-packet data,
>> reducing the size of the hash table and hopefully making lookups
>> faster, as well as using g_hash_table_remove_all() instead of
>> destroying the hash table (wmem hash tables?).

> Not clear on what you mean by storing reassembled data in per-packet
> data (allocate it in file scope?). You are right though that many hash
> tables are currently completely destroyed and re-created where it would
> be more efficient to simply remove all elements.
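For reference, the two GLib patterns being compared, as a minimal
sketch; fragment_table and its g_free destroy notify are hypothetical
stand-ins for what a given dissector actually keeps:

    #include <glib.h>

    /* Hypothetical fragment table keyed directly by frame number, with
     * values freed through the destroy notify installed below. */
    static GHashTable *fragment_table;

    static void
    reset_by_recreating(void)
    {
        /* Current pattern: tear the whole table down, then rebuild it. */
        if (fragment_table != NULL)
            g_hash_table_destroy(fragment_table);
        fragment_table = g_hash_table_new_full(g_direct_hash, g_direct_equal,
                                               NULL, g_free);
    }

    static void
    reset_in_place(void)
    {
        /* Suggested pattern: run the same destroy notifies on every
         * entry but keep the GHashTable object itself, saving a
         * free/alloc cycle per reset. */
        g_hash_table_remove_all(fragment_table);
    }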
Currently when a PDU is reassembled it is moved from the fragments table
to the reassembled table, I believe. Instead we could put it in
per-packet data and get rid of the reassembled table altogether. If wmem
file scope was used it would automatically go away at file close. This
might also speed up Wireshark in general by reducing table lookups, and
should we in the future store the reassembled data in a file, that might
be easier to implement.
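As a rough sketch of that idea, assuming the wmem-scoped
p_add_proto_data()/p_get_proto_data() pair as the per-packet-data API;
the proto_foo handle and all foo_* names are hypothetical:

    #include <epan/packet.h>
    #include <epan/wmem/wmem.h>

    /* Hypothetical protocol handle, registered elsewhere. */
    static int proto_foo = -1;

    /* Hypothetical container for one finished reassembly. */
    typedef struct {
        guint32  len;
        guint8  *data;
    } foo_reassembly_t;

    static void
    foo_store_reassembly(packet_info *pinfo, const guint8 *buf, guint32 len)
    {
        /* File-scoped allocation: wmem frees it when the capture file is
         * closed, so there is no reassembled table left to tear down. */
        foo_reassembly_t *r = wmem_new(wmem_file_scope(), foo_reassembly_t);
        r->len  = len;
        r->data = (guint8 *)wmem_memdup(wmem_file_scope(), buf, len);

        /* Attach it to this frame; key 0, one reassembly per frame. */
        p_add_proto_data(wmem_file_scope(), pinfo, proto_foo, 0, r);
    }

    static const foo_reassembly_t *
    foo_get_reassembly(packet_info *pinfo)
    {
        return (const foo_reassembly_t *)
            p_get_proto_data(wmem_file_scope(), pinfo, proto_foo, 0);
    }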
>> Did you profile ring buffer switchover? (How?)

> I profiled init_dissection using callgrind's --toggle-collect flag [1].
> I imagine you could profile buffer switchover similarly by specifying
> the capture_input_new_file function. Specifying --collect-systime would
> probably be useful as well to see how much time is being spent in
> system calls.
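Such an invocation might look like the following; the callgrind options
are the real ones named above, while the tshark arguments (interface,
ring-buffer criterion, output path) are only example values:

    valgrind --tool=callgrind \
             --toggle-collect=capture_input_new_file \
             --collect-systime=yes \
             tshark -i eth0 -b filesize:100000 -w /tmp/ring.pcapng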
So in general I think you are right, this *is* the expected behavior of
tshark with ring buffers, but I fear that it might not be as useful as
expected under heavy traffic because of potential packet drops during
file switchover unless we can make it more efficient. I have no data
verifying that this is the case, so if we could devise a measurement
that would be great. I don't have access to the traffic load that would
trigger this; if someone else does, it should be pretty easy to do a
before/after comparison of how well ring-buffered tshark keeps up.
Measuring the time to close the file and open a new one might be an
indicator of how much of a problem this is if the traffic involves a bit
of reassembly.
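One crude way to get that number, as a sketch using GLib's monotonic
clock; the close/open functions below are placeholder stubs, not the
real capture-loop entry points:

    #include <glib.h>

    /* Placeholder stubs for the real close-old/open-next steps. */
    static void close_current_capture_file(void) { /* ... */ }
    static void open_next_capture_file(void)     { /* ... */ }

    static void
    timed_switchover(void)
    {
        gint64 start = g_get_monotonic_time();  /* microseconds */

        close_current_capture_file();
        open_next_capture_file();

        g_print("ring-buffer switchover took %" G_GINT64_FORMAT " us\n",
                g_get_monotonic_time() - start);
    }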
Regards,
Anders

From: wireshark-dev-bounces@xxxxxxxxxxxxx [mailto:wireshark-dev-bounces@xxxxxxxxxxxxx] On Behalf Of Evan Huus
I was kind of expecting this change to generate more controversy, so
I'll give it another few days, but if nobody objects I'll merge it then.
I don't currently plan on putting it in 1.12, so that we have a full dev
cycle to work out any subtle implications, but I know it's a fairly
heavily-requested feature, so I'm willing to entertain the notion if
somebody wants to argue for it.