Re: [Wireshark-dev] [Wireshark-bugs] [Bug 9114] Memory not released when startin
On 09/20/13 18:12, Evan Huus wrote:
On 2013-09-20, at 4:40 PM, Joerg Mayer <jmayer@xxxxxxxxx> wrote:
I'd like to discuss the consequences of the remark below in a bit more depth:
On Fri, Sep 20, 2013 at 08:27:47PM +0000, bugzilla-daemon@xxxxxxxxxxxxx wrote:
https://bugs.wireshark.org/bugzilla/show_bug.cgi?id=9114
...
--- Comment #1 from Jeff Morriss <jeff.morriss.ws@xxxxxxxxx> ---
Wireshark intentionally does not free all the memory it had allocated when
closing a capture file. It uses its own memory allocator, which allows it to
keep (rather large) blocks of memory around for later re-use. When that happens,
the memory allocator marks all the memory as "freed", but you won't be able to
see that from any OS utilities: they will simply report Wireshark as still
having allocated however much memory. Not freeing that memory is an
optimization that avoids having to re-allocate it when the next file is read.
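To make that concrete: a minimal sketch of the chunk-retention idea
(illustrative only, not Wireshark's actual allocator code), showing why OS
utilities keep reporting the memory as in use:

#include <stdlib.h>
#include <stddef.h>

#define CHUNK_SIZE (10 * 1024 * 1024)   /* one large block, e.g. 10 MB */

typedef struct {
    unsigned char *base;   /* the big chunk obtained from malloc()       */
    size_t         used;   /* bump-pointer offset of the next allocation */
} chunk_pool_t;

static void *pool_alloc(chunk_pool_t *p, size_t len)
{
    void *ret;

    if (p->base == NULL)
        p->base = malloc(CHUNK_SIZE);        /* grab the chunk once      */
    if (p->base == NULL || p->used + len > CHUNK_SIZE)
        return NULL;                         /* sketch: no chunk growth  */
    ret = p->base + p->used;
    p->used += len;
    return ret;
}

/* "Freeing" just rewinds the offset; the chunk itself is kept around for
 * the next capture file, so top/ps keep reporting it as allocated. */
static void pool_free_all(chunk_pool_t *p)
{
    p->used = 0;
}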
Is this really the right strategy? If I open a huge capture file (with huge
allocations) and then open a small file, that (virtual) memory will still be consumed.
How big is the performance win of not freeing/allocating in real operation?
This is one of the things I looked into with wmem. The performance benefit of not releasing the packet pool between packets is fairly significant; the benefit of not releasing the file pool between files is not. Wmem memory is fully released when closing a file.
If wmem memory is not being freed when closing a file, that is an actual leak.
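For reference, this is roughly how that split looks from a dissector's point
of view. A hypothetical snippet: wmem_alloc(), wmem_packet_scope() and
wmem_file_scope() are the real wmem entry points of that era, but the
dissector itself is made up:

#include <epan/packet.h>
#include <epan/wmem/wmem.h>

static void
dissect_example(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree)
{
    /* Packet scope: the pool underneath is recycled between packets
     * rather than released; that recycling is the big performance win. */
    char *scratch = (char *)wmem_alloc(wmem_packet_scope(), 64);

    /* File scope: lives until the capture file is closed, at which
     * point wmem really does release it. */
    char *state = (char *)wmem_alloc(wmem_file_scope(), 128);

    (void)tvb; (void)pinfo; (void)tree;
    (void)scratch; (void)state;
}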
Ah, OK, it's good to know of that difference from emem (hopefully I can
remember it :-)).
This particular bug was reported against 1.10, where emem is still heavily
used (and emem keeps its memory allocations around like I said, at least
while it's using "chunks"). In any case, the reporter says that my
explanation doesn't quite cover the problem they're seeing...
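For completeness, here's what 1.10-era emem usage typically looks like (a
sketch of typical calls, not code from the bug; ep_alloc() and se_alloc()
are the real emem entry points):

#include <epan/emem.h>

static void
emem_example(void)
{
    /* ep_ = packet scope: implicitly "freed" when the next packet is
     * dissected, but the underlying chunk is retained. */
    char *pkt_buf = (char *)ep_alloc(64);

    /* se_ = capture-session scope: implicitly "freed" when the file is
     * closed, again without the chunks going back to the OS. */
    char *file_buf = (char *)se_alloc(128);

    (void)pkt_buf;
    (void)file_buf;
}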