On Sep 2, 2015, at 10:33 AM, Robert Cragie <robert.cragie@xxxxxxxxxxxxx> wrote:
> I am trying to understand the changes to the previous use of tvb_length(). There are now two functions
There have *always* been two functions; that was not changed. They were originally:
tvb_length()
tvb_reported_length()
and the change was that we renamed the first of those to tvb_captured_length(), to make it clearer that it isn't *THE* length of the tvbuff and shouldn't be used by default.
> (and their associates):
>
> * tvb_captured_length()
> * tvb_reported_length()
>
> As far as I can tell, tvb_captured_length() is the direct replacement for tvb_length()
It's a new *name* for tvb_length().
> but tvbuff.h says "You probably want tvb_reported_length instead.".
Which was true before the rename and is still true after the rename.
> The use of both seems to be mixed throughout the files and it's difficult to follow the relationship between the two. So any guidance on this would be appreciated.
libpcap/WinPcap supports something that could be called "packet slicing". They allow a "snapshot length" to be specified when capturing, and, if a packet is longer than the "snapshot length", only the first "snapshot length" bytes of the packet are provided to the application capturing packets. They provide both the original length of the packet, before slicing is done, and the length to which it was sliced. If the snapshot length is sufficiently large, no slicing is done, and the two lengths are the same.
The pcap and pcapng file formats record both lengths, as, if slicing was done, not all of the packet's data is necessarily present in the file. tcpdump and Wireshark/TShark/dumpcap support setting the "snapshot length".
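For illustration, here's a rough libpcap sketch of where the two lengths show up (the device name and the 96-byte snapshot length are just made-up examples):

#include <stdio.h>
#include <pcap/pcap.h>

/* Called once per captured packet; h->caplen is the number of bytes
   actually delivered (after any slicing), h->len is the packet's
   original length on the wire. */
static void handler(u_char *user, const struct pcap_pkthdr *h,
                    const u_char *bytes)
{
    (void)user; (void)bytes;
    if (h->caplen < h->len)
        printf("sliced: %u of %u bytes captured\n", h->caplen, h->len);
    else
        printf("complete: %u bytes\n", h->len);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* Packets longer than the 96-byte snapshot length will be sliced. */
    pcap_t *p = pcap_open_live("eth0", 96, 1, 1000, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    pcap_loop(p, 10, handler, NULL);
    pcap_close(p);
    return 0;
}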
This is not unique to libpcap/WinPcap; some packet sniffers that don't use libpcap/WinPcap also support slicing, and save both the original length and the length after slicing in the file.
tvb_reported_length() gives the length of the protocol data based on the original length. tvb_captured_length() gives the amount of that protocol data that is available after slicing was done.
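Inside a dissector, the difference looks something like this (just a sketch, not code from any particular dissector):

guint reported = tvb_reported_length(tvb);  /* length of the data on the wire */
guint captured = tvb_captured_length(tvb);  /* how much of it is actually in the capture */

if (captured < reported) {
    /* The packet was sliced; some of this protocol's data is missing. */
}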
Telling the user that some field is N bytes long when it was, in fact, M bytes long but sliced down to N bytes in the capture process is an error, so, when reporting the number of bytes in a field or in a protocol's payload, tvb_reported_length() should always be used - tvb_length()/tvb_captured_length() shouldn't have been used and shouldn't be used.
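So when adding, say, a payload item to the tree, the length should come from the reported length (a sketch; hf_foo_payload and the offset are made up):

/* Length of the payload as it was on the wire, not as captured. */
gint payload_len = tvb_reported_length_remaining(tvb, offset);

/* If the captured data runs out before payload_len bytes, an exception
   is thrown and the packet is flagged as cut short by the capture
   process, rather than the length being silently under-reported. */
proto_tree_add_item(tree, hf_foo_payload, tvb, offset, payload_len, ENC_NA);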
When processing data to the end of the payload, the "to the end of the payload" loop should use tvb_reported_length(), so that, if slicing removed data that would otherwise have been dissected, the dissection stops with an exception being thrown when it runs past the captured data, and the packet is marked as having been cut short by the capture process; that way the user knows the packet actually contained more data than shows up in the dissection.
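A rough sketch of such a loop, for a made-up TLV-style payload (the layout and names are hypothetical):

#include <epan/packet.h>

static int
dissect_foo(tvbuff_t *tvb, packet_info *pinfo _U_, proto_tree *tree _U_, void *data _U_)
{
    gint offset = 0;

    /* Loop to the *reported* end of the payload.  If the packet was
       sliced, the tvb_get_guint8() call will eventually reach beyond
       the captured data and throw, and the packet gets marked as cut
       short by the capture process. */
    while (tvb_reported_length_remaining(tvb, offset) > 0) {
        /* Hypothetical layout: one type byte, one length byte,
           'len' bytes of value. */
        guint8 len = tvb_get_guint8(tvb, offset + 1);

        /* ... add the type and value to the tree here ... */

        offset += 2 + len;
    }
    return offset;
}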