On Sun, Mar 10, 2013 at 8:46 AM, Jakub Zawadzki
<darkjames-ws@xxxxxxxxxxxx> wrote:
> On Sat, Mar 09, 2013 at 08:03:41PM -0500, Evan Huus wrote:
>> On Sat, Mar 9, 2013 at 3:48 PM, Jakub Zawadzki
>> <darkjames-ws@xxxxxxxxxxxx> wrote:
> Right now no, but there was an implementation of sl_free_all().
>>
>> Granted, but I don't (off the top of my head) have a potential use case for it.
>
> What about O(1) free time for proto nodes? (common case).
>
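For context, the O(1) free-all idea is essentially a chunked arena: allocations are carved out of large chunks, and freeing everything is one free() per chunk rather than one per node. A minimal sketch (hypothetical names, not the actual emem slab code):

```c
#include <stdlib.h>

/* Hypothetical chunked-arena sketch of an sl_free_all()-style allocator. */
#define CHUNK_ITEMS 1024

typedef struct slab_chunk {
    struct slab_chunk *next;   /* chunk list, walked by the free-all */
    size_t used;               /* items handed out from this chunk */
    unsigned char data[];      /* CHUNK_ITEMS * item_size bytes */
} slab_chunk;

typedef struct slab {
    size_t item_size;
    slab_chunk *chunks;        /* most recent chunk first */
} slab;

static void *sl_alloc(slab *sl)
{
    slab_chunk *c = sl->chunks;
    if (c == NULL || c->used == CHUNK_ITEMS) {
        /* Current chunk full (or none yet): grab a fresh one. */
        c = malloc(sizeof *c + CHUNK_ITEMS * sl->item_size);
        if (c == NULL)
            return NULL;
        c->used = 0;
        c->next = sl->chunks;
        sl->chunks = c;
    }
    return c->data + c->used++ * sl->item_size;
}

/* Frees every node at once: one free() per chunk, not per node. */
static void sl_free_all(slab *sl)
{
    slab_chunk *c = sl->chunks;
    while (c != NULL) {
        slab_chunk *next = c->next;
        free(c);
        c = next;
    }
    sl->chunks = NULL;
}
```

That per-chunk (rather than per-node) teardown is where the cheap "free all proto nodes" path would come from.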
>> Regardless, without proof the other way (that emem slabs were an improvement),
>> the less code we own the better.
>
> Ok, I performed a stress test of ws_slab vs. g_slices on a fairly big capture file.
>
> The test was performed with this one-liner:
> for ((i=0; i< 10; i++)); do echo $i; time ./tshark -n -r /tmp/big.pcap -V > /dev/null; sleep 600; done
> with one tshark run at a time (most of the computation was done overnight).
>
> /tmp is mounted ramfs, tshark compiled with CFLAGS="-O2 -pipe -w"
>
> I cheated a little and both tshark binaries are based on r47905, but with patches:
> the ws_slab tshark was built with patch: http://pastebin.com/raw.php?i=rMexZBsh
> the g_slices tshark was built with patch: http://pastebin.com/raw.php?i=YdA50LKL
Your g_slice patch uses g_slice_alloc0, which zeros the returned
memory like g_malloc0 does. A fairer comparison would be with
g_slice_alloc(), since the emem slab doesn't make any guarantees about
zeroing memory.
I'm honestly not sure why g_slice_alloc0 was used when
debug_use_slices was set, since presumably in a debugging case we'd
want to blow up if possible on uninitialized memory.
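The distinction is the same as malloc() vs. calloc(); a minimal illustration with plain libc (g_slice_alloc()/g_slice_alloc0() behave analogously):

```c
#include <stdlib.h>

/* calloc() guarantees zero-filled memory, analogous to g_slice_alloc0()
 * and g_malloc0(); plain malloc() makes no such guarantee, analogous to
 * g_slice_alloc() and the emem slab. */

static void *alloc_zeroed(size_t len)  /* like g_slice_alloc0() */
{
    return calloc(1, len);
}

static void *alloc_raw(size_t len)     /* like g_slice_alloc(): contents undefined */
{
    return malloc(len);
}

static int is_all_zero(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (buf[i] != 0)
            return 0;
    return 1;
}
```

The zeroing pass is extra work on every allocation, which is why benchmarking the zeroing variant against a non-zeroing slab isn't an apples-to-apples comparison.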
> but I'd rather wait for some other results/conclusions before I repeat the test with r48218 and r48217
>
> big.pcap is:
> Number of packets: 7284 k
> File encapsulation: IEEE 802.11 plus radiotap radio header
> File size: 944 MB
>
>        ws_slab      g_slices
> user   17m20.988s   19m30.194s
> user   16m59.147s   19m17.288s
> user   16m40.835s   19m42.880s
> user   16m43.241s   19m24.307s
> user   16m47.508s   19m25.764s
> user   16m53.047s   19m44.216s
> user   16m38.085s   19m9.822s
> user   16m57.067s   19m28.897s
> user   17m5.540s    19m29.994s
> user   17m8.343s    19m40.903s
>
> avg:   16m55.380s   19m29.426s
>        +/- 25s      +/- 20s
>
> 1m17.023s
> (~8% +/- 4%)
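As a quick sanity check (not part of the original mail), the two averages can be recomputed from the table by converting each "user" time to seconds and taking the mean, which reproduces the reported 16m55.380s and 19m29.426s:

```c
#include <stddef.h>

/* "user" times from the table above, converted to seconds
 * (e.g. 17m20.988s -> 17*60 + 20.988 = 1040.988). */
static const double ws_slab_s[10] = {
    1040.988, 1019.147, 1000.835, 1003.241, 1007.508,
    1013.047,  998.085, 1017.067, 1025.540, 1028.343
};
static const double g_slices_s[10] = {
    1170.194, 1157.288, 1182.880, 1164.307, 1165.764,
    1184.216, 1149.822, 1168.897, 1169.994, 1180.903
};

static double mean(const double *v, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += v[i];
    return sum / n;
}
```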
>
> To be honest, I thought the result would be within the statistical error margin,
> and we're still NOT using sl_free_all().
>
>
> Generally I'm fine with this patch even if we lose about ~1% (in normal use),
> but ws_slab is faster than g_slices :P
>
> Regards,
> Jakub.
> ___________________________________________________________________________
> Sent via: Wireshark-dev mailing list <wireshark-dev@xxxxxxxxxxxxx>
> Archives: http://www.wireshark.org/lists/wireshark-dev
> Unsubscribe: https://wireshark.org/mailman/options/wireshark-dev
> mailto:wireshark-dev-request@xxxxxxxxxxxxx?subject=unsubscribe