[Wireshark-users] How is bandwidth calculated in RTP Stream Analysis?
Please pardon me if this question is a repeat; I was not able to find an answer through a quick search of the list archive.
I have been using Wireshark recently to perform some analysis of video RTP traffic. One thing I have noticed when using the Statistics > RTP > Stream Analysis option, with a video RTP packet selected in the main Wireshark window, is that the "bandwidth" column sometimes displays values far greater than should be possible for a given video session.
For example, consider a video call that negotiates a call rate of 384k, and assume that the negotiated audio codec between the two endpoints is G.722-64k. Whatever the negotiated video codec turns out to be (let's say H.263+), that leaves 320k of bandwidth (384k minus the 64k audio) available for video between the two endpoints. This means that when analyzing the video RTP stream, we should not expect to see bandwidth values much in excess of about 320k. However, over the life of a given call, we often see values far beyond this number, sometimes over 400k, and occasionally even 420 to 450k.
So, what I am looking for is an explanation of exactly how Wireshark performs the bandwidth calculation. We believe the very high numbers are indicative of a possible problem with the endpoints, but we need to understand the calculation before we can be certain of that. I tried looking at the RTP statistics page on the WS wiki (http://wiki.wireshark.org/RTP_statistics), and it explains the jitter calculation, but not bandwidth.
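In case it helps frame the question, here is the kind of calculation I am imagining. This is purely my guess at one plausible implementation (a one-second sliding window over capture timestamps, counting the full frame length of each packet), not necessarily what Wireshark actually does:

    # My guess at a sliding-window bandwidth calculation; NOT confirmed
    # to be what Wireshark does. 'packets' is a list of tuples of
    # (capture_time_seconds, frame_length_bytes) for one RTP stream.
    def bandwidth_kbps(packets, now, window_s=1.0):
        # Total bytes captured within the last window_s seconds...
        recent_bytes = sum(length for t, length in packets
                           if now - window_s < t <= now)
        # ...converted to kbit/s over the window.
        return recent_bytes * 8 / 1000.0 / window_s

If the calculation works anything like this, a bursty video encoder could momentarily push the windowed figure well above the negotiated 320k even though the long-term average stays within budget, which would point at the endpoints rather than at Wireshark. That is exactly what I would like to confirm.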
Thanks in advance for any information about this!
-Dave