Wireshark-dev: Re: [Wireshark-dev] Voice (RTP stream) quality - mos, delay, bandwidth, ...
Just a note folks...
Even assuming that writing code to quantitatively compare two audio
streams were feasible, we have an issue: MOS samples are not taken
against just any call; there are specific recordings that you have to use.
Channel quality tests involve a loop where a single machine calls
itself from one "line" to another; what do I compare the payload of a
given RTP stream against? What reference standard would allow me to
make that comparison?
I do not think that just tapping the channel of a given call will
allow me to say how good that channel is for the end users... What do
we use as a reference to verify that whatever is going through is good
or bad? Voice degradation can be quantitatively measured by comparing
against a given recording, not by just peeking at whatever is going
through the wire...
Voice quality problems, in my experience, are very hard to locate.
Other than some ticks, noise, or "metallization" of the voice, which
are usually due to excessive jitter or packet loss, they cannot be
diagnosed by analyzing traffic between the nodes involved in the
call... User-plane problems often depend more on a crappy codec, a
broken mic in one terminal, or a misbehaving echo canceller in the
path than on anything that has to do with packets and their transport.
L
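That said, the transport-level symptoms mentioned above (jitter, packet loss) are computable from the capture alone. A minimal sketch of the RFC 3550 interarrival-jitter estimator, assuming the packets have already been extracted from the capture as (arrival time, RTP timestamp) pairs — the function name and input format are illustrative, not any existing API:

```python
def rtp_jitter(packets, clock_rate=8000):
    """Running interarrival-jitter estimate per RFC 3550, section 6.4.1.

    packets: list of (arrival_time_seconds, rtp_timestamp) tuples,
    in arrival order. Returns jitter in RTP timestamp units.
    """
    jitter = 0.0
    prev = None
    for arrival, ts in packets:
        if prev is not None:
            prev_arrival, prev_ts = prev
            # D = difference in relative transit time between consecutive
            # packets, expressed in timestamp units.
            d = (arrival - prev_arrival) * clock_rate - (ts - prev_ts)
            # Exponential smoothing with gain 1/16, as the RFC specifies.
            jitter += (abs(d) - jitter) / 16.0
        prev = (arrival, ts)
    return jitter
```

Perfectly paced packets yield (near-)zero jitter; a delayed packet pushes the estimate up, which is roughly what Wireshark's RTP stream analysis reports per stream.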
On 11/6/07, Newton, Don <dnewton@xxxxxxxxxxxxxx> wrote:
> > > daniel zhang wrote:
> > >
> > > > 1. MOS
> > > > Voice quality was traditionally reported as a Mean Opinion
> > > Score (MOS)
> > > > on a scale from 1-5 where 1 is the lowest and 5 the highest.
> > > > I am not an expert on this but I think probably we could
> > > add the MOS
> > > > into Ethereal.
> > >
> > > At least according to the Wikipedia article on the Mean Opinion
> > > Score:
> > >
> > > http://en.wikipedia.org/wiki/Mean_Opinion_Score
> > >
> > > the score comes from people listening to particular phrases
> > > and giving their 1-to-5 rating of the quality of the sound,
> > > not from a calculation you could perform on a raw digital
> > > signal.
> > > [...]
> >
> > That's true, but there have been many attempts to model MOS from raw
> > data directly. The most prominent is the ITU-T G.107 "E-Model".
> > But still, the model includes parameters like the audio
> > characteristics of the end devices, jitter buffer implementations,
> > and so on, so MOS cannot be calculated from a network trace without
> > making specific assumptions about the end devices and the audio path.
> >
> > Best regards,
> > Lars Ruoff
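For reference, the final step of the E-Model — mapping the transmission rating R to an estimated MOS — is a closed-form curve published in ITU-T G.107; it is computing R itself that needs the end-device assumptions Lars describes. A sketch of just that mapping (illustrative, not a full E-Model implementation):

```python
def r_to_mos(r):
    """Map an E-model transmission rating R to an estimated MOS,
    using the conversion formula from ITU-T G.107."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6
```

With the G.107 default parameter values R comes out around 93.2, which this curve maps to a MOS of roughly 4.41; impairments for delay, loss, and codec distortion subtract from R before this conversion.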
> This is a pretty interesting idea.
> Jitter buffer size is probably the biggest factor that causes MOS to
> differ from what it looks like on the network. Jitter buffer size could
> easily be made configurable, even to the point of picking the CPE and
> using that device's defaults.
>
> Another possibility would be to "apply" the jitter buffer settings to
> both the packet list and the rendered audio file so late packets get
> tossed just like they do on the phone. This would allow operators to
> pick the ideal buffer size for the network. Generally the buffer should
> be as small as possible but no smaller.
>
> Troubleshooting a VoIP network would be enhanced by charting MOS drops
> at various points, i.e. "this stream was a solid 4.4 until it went
> through this router and took a nose dive".
> _______________________________________________
> Wireshark-dev mailing list
> Wireshark-dev@xxxxxxxxxxxxx
> http://www.wireshark.org/mailman/listinfo/wireshark-dev
>
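Don's "apply the jitter buffer to the packet list" idea can be sketched with a fixed (non-adaptive) playout buffer: each packet has a scheduled playout time, and a packet arriving after its slot is discarded, as it would be on the phone. The function name, input format, and fixed-buffer policy are assumptions for illustration:

```python
def apply_jitter_buffer(packets, buffer_ms, frame_ms=20):
    """Simulate a fixed jitter buffer over a captured packet list.

    packets: list of (seq, arrival_ms) in capture order.
    Playout of sequence number n is scheduled at
    first_arrival + buffer_ms + (n - first_seq) * frame_ms.
    Returns (played, dropped) lists of sequence numbers.
    """
    if not packets:
        return [], []
    base_seq, base_arrival = packets[0]
    played, dropped = [], []
    for seq, arrival in packets:
        deadline = base_arrival + buffer_ms + (seq - base_seq) * frame_ms
        # A packet that misses its playout deadline is tossed,
        # just as it would be by the endpoint's jitter buffer.
        (played if arrival <= deadline else dropped).append(seq)
    return played, dropped
```

Sweeping `buffer_ms` over a capture and counting the drops is one way to find the "as small as possible but no smaller" setting for a given network.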
--
This information is top security. When you have read it, destroy yourself.
-- Marshall McLuhan