Ethereal-dev: Re: [Ethereal-dev] [PATCH] fix for multiple dcerpc connections over single socket


From: Todd Sabin <tas@xxxxxxxxxxx>
Date: 19 Nov 2001 23:15:18 -0500

Guy Harris <guy@xxxxxxxxxx> writes:

> > For the most part, they don't.  For connection-oriented dcerpc, all
> > the dcerpc layer cares about is having a reliable byte-stream.
> 
> Well, it also cares about knowing which connection it's dealing with,
> presumably.  That stuff is probably buried in:
> 
> > The
> > only difference between protseqs is having different functions to
> > send/receive PDUs.
> 
> ...the functions in question, as I assume those connections deal with
> establishing and tearing down connections, as well as sending and
> receiving PDUs.
> 
> As such, it doesn't bother me to have multiple pieces of code in the DCE
> RPC dissector that know about different transports, as there's multiple
> pieces of code in the DCE RPC *implementation* that know about different
> transports.
> 

I'm fairly surprised by that.  The obvious corollary would be that the
TCP layer should use the same approach to let the dcerpc dissector
distinguish between port pairs.  Surely you wouldn't want to do that?
Also, dcerpc isn't the only thing that's done with SMB named pipe
transactions.  There are very likely to be other dissectors that will
want to distinguish between SMB fids/ports/conversations/whatever.  It
seems clear to me that adding SMB support to the conversation layer
(in some fashion or another) is the way to go.  It keeps the SMB stuff
where it (IMO) belongs, in the SMB dissector, while making it
available for use by any dissector that wants it, via the conversation
code.
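
To make that concrete, here's the rough shape of what I have in mind.
This is a sketch only -- the names (smb_fid_key and friends) are made
up for illustration, and this isn't the existing conversation API:

/* Hypothetical sketch only -- not the existing conversation API.
 * The idea: key per-pipe state on (underlying conversation, SMB fid),
 * so any dissector running on top of a named pipe can find its state
 * through the conversation code.  Usual glib/Ethereal headers assumed. */

#include <glib.h>

typedef struct _smb_fid_key {
    guint32 conv_index;   /* index of the underlying TCP/NetBIOS conversation */
    guint16 fid;          /* SMB file id of the named pipe */
} smb_fid_key;

static GHashTable *smb_fid_table = NULL;

static guint
smb_fid_hash(gconstpointer k)
{
    const smb_fid_key *key = (const smb_fid_key *)k;

    return key->conv_index ^ key->fid;
}

static gint
smb_fid_equal(gconstpointer a, gconstpointer b)
{
    const smb_fid_key *ka = (const smb_fid_key *)a;
    const smb_fid_key *kb = (const smb_fid_key *)b;

    return ka->conv_index == kb->conv_index && ka->fid == kb->fid;
}

/* Called once at init time. */
static void
smb_fid_table_init(void)
{
    smb_fid_table = g_hash_table_new(smb_fid_hash, smb_fid_equal);
}

The SMB dissector would fill in the key when it sees the pipe being
opened, and the dcerpc dissector (or anything else riding on the pipe)
would just do a lookup -- the same division of labor we already have
with port pairs and TCP.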

> If DCE RPC treats *all* underlying transports as byte streams (if, for
    ^ connection-oriented
> example, it treats record-oriented transports such as NetBIOS as byte
> streams, with no guarantees that DCE RPC PDUs start or end on
> lower-level protocol PDU boundaries), then, arguably, DCE RPC shouldn't
> rely on TCP's desegmentation code either - it should do its own
> desegmentation to desegment the pieces of a fragment (over and above

There are separate issues here.  As I understand it, the TCP
desegmenting code is for the use of dissectors that can tell "Oh, I
don't have a complete blob to work on.  Let me get more data first."
If so, then it's a natural fit for dcerpc over TCP, and indeed it
works well on my captures here.  Now, I agree that it'd be better if
there were a more generic way of doing this, so that a dissector could
ask its caller (assuming the caller provides a byte stream) "I need N
more bytes, please," and not just the TCP layer.
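
Roughly, what I mean is something like this (a sketch only -- I'm
glossing over drep/byte-order handling, and the function name is made
up):

/* Sketch only -- glossing over drep/byte-order and error handling;
 * usual Ethereal dissector headers assumed. */
static gboolean
got_whole_dcerpc_cn_pdu(tvbuff_t *tvb, int offset, packet_info *pinfo)
{
    guint16 frag_len;

    /* Need at least the 16-byte CO header to learn the PDU length. */
    if (tvb_length_remaining(tvb, offset) < 16) {
        pinfo->desegment_offset = offset;
        pinfo->desegment_len = 16 - tvb_length_remaining(tvb, offset);
        return FALSE;
    }

    /* frag_length lives at offset 8 of the CO header; assume
       little-endian data representation for brevity. */
    frag_len = tvb_get_letohs(tvb, offset + 8);

    if (tvb_length_remaining(tvb, offset) < frag_len) {
        /* "I need N more bytes, please." */
        pinfo->desegment_offset = offset;
        pinfo->desegment_len =
            frag_len - tvb_length_remaining(tvb, offset);
        return FALSE;
    }

    return TRUE;    /* a whole PDU is available at 'offset' */
}

Right now only the TCP dissector honors that request after calling us;
the point is that any caller handing us a byte stream could do the
same.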

The other issue is that dcerpc, itself, has the ability to fragment a
request or response into multiple PDUs.  In theory, Ethereal could
(should?) reassemble them before handing off the stub data to the
child dcerpc dissectors.  Naturally, this would have to be done in the
dcerpc dissector.
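
Something like the following is the shape I'm picturing (names are
made up, and a real version would have to key on the conversation as
well as the call_id, handle out-of-order fragments, cancels, etc.):

/* Very rough sketch of dcerpc-level reassembly: accumulate stub data
 * per call_id until the PDU flagged PFC_LAST_FRAG arrives, then hand
 * the whole thing to the interface (child) dissector. */

#include <glib.h>

#define PFC_FIRST_FRAG 0x01
#define PFC_LAST_FRAG  0x02

static GHashTable *dcerpc_reas_table;   /* call_id -> GByteArray */

static void
dcerpc_reas_init(void)
{
    dcerpc_reas_table = g_hash_table_new(g_direct_hash, g_direct_equal);
}

static guint8 *
dcerpc_defragment(guint32 call_id, guint8 flags,
                  guint8 *stub, guint stub_len, guint *total_len)
{
    GByteArray *acc;

    if ((flags & PFC_FIRST_FRAG) && (flags & PFC_LAST_FRAG)) {
        *total_len = stub_len;  /* not fragmented; nothing to do */
        return stub;
    }

    acc = g_hash_table_lookup(dcerpc_reas_table,
                              GUINT_TO_POINTER(call_id));
    if (acc == NULL) {
        acc = g_byte_array_new();
        g_hash_table_insert(dcerpc_reas_table,
                            GUINT_TO_POINTER(call_id), acc);
    }
    g_byte_array_append(acc, stub, stub_len);

    if (flags & PFC_LAST_FRAG) {
        *total_len = acc->len;
        return acc->data;       /* complete request/response stub */
    }
    return NULL;                /* still waiting for more fragments */
}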

> reassembly of fragments), and shouldn't treat the stuff handed to it by
> the dissector that calls it as necessarily containing one and only one
> DCE RPC PDU, regardless of whether that dissector is the TCP dissector
> or not.
> 

For one dcerpc PDU spread over multiple lower-level segments, see
above.  For multiple dcerpc PDUs in a single lower-level segment, yes,
the dissector shouldn't assume that never happens.  No normal client
I've seen does it, but I know a guy who recently wrote a custom client
that does.

There was talk recently about adding some generic support for this by
having a dissector return the number of bytes it actually used, or
something like that.  Something along those lines sounds like the way
to go.
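
For example, the connection-oriented dissector could just loop over
the buffer, advancing by each PDU's frag_length -- a sketch, where
dissect_dcerpc_cn_pdu() is a made-up helper:

/* Sketch of the "return the bytes used" idea: loop over the tvbuff,
 * dissecting one PDU at a time and advancing by that PDU's
 * frag_length, instead of assuming the buffer holds exactly one PDU.
 * Usual Ethereal dissector headers assumed. */
static void dissect_dcerpc_cn_pdu(tvbuff_t *tvb, int offset,
                                  guint16 frag_len, packet_info *pinfo,
                                  proto_tree *tree);

static void
dissect_dcerpc_cn_stream(tvbuff_t *tvb, packet_info *pinfo,
                         proto_tree *tree)
{
    int     offset = 0;
    guint16 frag_len;

    while (tvb_length_remaining(tvb, offset) >= 16) {
        /* frag_length at offset 8; assuming little-endian drep. */
        frag_len = tvb_get_letohs(tvb, offset + 8);
        if (frag_len < 16 ||
            tvb_length_remaining(tvb, offset) < frag_len)
            break;              /* bogus or incomplete PDU -- bail out */

        dissect_dcerpc_cn_pdu(tvb, offset, frag_len, pinfo, tree);
        offset += frag_len;     /* the "bytes used" by this PDU */
    }
}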


Todd