Ethereal-dev: Re: [Ethereal-dev] How is HTTP desegmentation supposed to work?


From: Ulf Lamping <ulf.lamping@xxxxxx>
Date: Fri, 06 Aug 2004 09:21:12 +0200
Guy Harris wrote:

The lower-level TCP desegmentation option has to be enabled for reassembly of *any* protocol running directly atop TCP to work. Most reassembly takes place at the lower protocol layer, e.g. IP defragmentation; this allows reassembly for that protocol to be turned on or off with one switch (off if, for example, a reassembly bug causes crashes or other problems, or doing reassembly would make Ethereal run out of memory). TCP, however, knows nothing about higher-level protocol boundaries, so it can't do all the work of reassembly itself. We generally have preference settings in both the higher-level protocol and TCP, where the latter lets you turn reassembly off for *all* TCP-based protocols. Perhaps the higher-level protocols should have no reassembly preference of their own (some already don't), with the TCP preference being the only one, along the lines of other forms of reassembly.
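
For context, the mechanism described above looks roughly like the sketch below: a subdissector running on top of TCP fills in pinfo->desegment_offset and pinfo->desegment_len to tell the TCP dissector how many more bytes it needs, and TCP only honours that request when its own desegmentation preference is on. The protocol name, header length and length field below are made up for illustration; only the pinfo fields and tvb calls are the real dissector API.

/* Sketch only, not the actual HTTP dissector: a protocol atop TCP
 * asking TCP for more data when a PDU is split across segments.
 * "myproto" and MYPROTO_HEADER_LEN are hypothetical. */
#include <epan/packet.h>

#define MYPROTO_HEADER_LEN 8            /* hypothetical fixed header size */

static gboolean myproto_desegment = TRUE;   /* the protocol's own preference */

static void
dissect_myproto(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree)
{
    guint   available = tvb_length(tvb);
    guint32 pdu_len;

    /* Not even the header is complete: ask TCP for one more segment.
     * TCP honours this only if its desegmentation preference is enabled
     * (pinfo->can_desegment is then non-zero); otherwise we just dissect
     * what we have. */
    if (myproto_desegment && pinfo->can_desegment &&
        available < MYPROTO_HEADER_LEN) {
        pinfo->desegment_offset = 0;
        pinfo->desegment_len = MYPROTO_HEADER_LEN - available;
        return;
    }

    /* Hypothetical: the header carries the total PDU length at offset 4. */
    pdu_len = tvb_get_ntohl(tvb, 4);

    /* Header is there but the body isn't: ask for the rest. */
    if (myproto_desegment && pinfo->can_desegment && available < pdu_len) {
        pinfo->desegment_offset = 0;
        pinfo->desegment_len = pdu_len - available;
        return;
    }

    /* ... dissect the complete PDU from tvb ... */
}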

If it's a preference setting only so that you can switch it off in case of a problem, why does it default to off?

I was thinking that this is disabled for performance reasons. If performance isn't significantly affected, I would really vote to enable all the desegmentation/reassembly stuff by default.
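
As far as I understand it, the default is nothing more than the initial value of the preference variable at registration time, so flipping it would be a one-line change. A rough sketch of how such a boolean preference is registered (names and strings abbreviated, not copied verbatim from packet-tcp.c):

#include <epan/packet.h>
#include <epan/prefs.h>

static int proto_tcp = -1;               /* assumed registered elsewhere */
static gboolean tcp_desegment = FALSE;   /* the default: set to TRUE to enable by default */

void
proto_register_tcp(void)
{
    module_t *tcp_module;

    /* ... proto_register_protocol(), field registration, etc. ... */

    tcp_module = prefs_register_protocol(proto_tcp, NULL);
    prefs_register_bool_preference(tcp_module, "desegment_tcp_streams",
        "Allow subdissector to desegment TCP streams",
        "Whether subdissectors may request reassembly of TCP streams",
        &tcp_desegment);
}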

I've often talked to novice Ethereal users who didn't know that there are TCP segments out there, and of course didn't know about the option to switch reassembly on.

Regards, ULFL