Wireshark-dev: Re: [Wireshark-dev] [Wireshark-commits] rev 32929 - (ENC_*_ENDIAN vs FI_*_ENDIAN)
From: Guy Harris <guy@xxxxxxxxxxxx>
Date: Tue, 1 Jun 2010 11:19:05 -0700
On Jun 1, 2010, at 4:20 AM, Jakub Zawadzki wrote:

> Hi,
> 
>> -       FI_SET_FLAG(new_fi, (little_endian) ? FI_LITTLE_ENDIAN : FI_BIG_ENDIAN);
>> +       /*
>> +        * XXX - this should just check the REP_*_ENDIAN bit, with
>> +        * those fields for which we treat any non-zero value of
>> +        * "encoding" checking the rest of the bits.
>> +        */
>> +       FI_SET_FLAG(new_fi, (encoding) ? FI_LITTLE_ENDIAN : FI_BIG_ENDIAN);
> 
> I'm not sure if I understand this comment properly...
> 
> I could instead of using (FI_LITTLE_ENDIAN, FI_BIG_ENDIAN) use (ENC_LITTLE_ENDIAN, ENC_BIG_ENDIAN, ENC_NA),
> and do FI_SET_FLAG(new_fi, encoding) (if it is what comment suggest)

That's not what it suggests.

It suggests doing

	FI_SET_FLAG(new_fi, (encoding & ENC_LITTLE_ENDIAN) ? FI_LITTLE_ENDIAN : FI_BIG_ENDIAN);

instead of

	FI_SET_FLAG(new_fi, (encoding) ? FI_LITTLE_ENDIAN : FI_BIG_ENDIAN);

"Should", in this context, means "in the best of all possible worlds, this is what would be done", not "do this now".  Perhaps we'll be able to do it for 1.6, but it's not something for 1.4.

> but ENC_LITTLE_ENDIAN should have different value than ENC_NA
> (preferably ENC_LITTLE_ENDIAN != 0)

It *does* have a different value, because it *is* != 0:

$ egrep ENC_ epan/proto.h
 * So, for now, we define ENC_BIG_ENDIAN and ENC_LITTLE_ENDIAN as
 * the same.  We therefore define ENC_BIG_ENDIAN as 0x00000000 and
 * ENC_LITTLE_ENDIAN as 0x80000000 - we're using the high-order bit
 * have ENC_NA (for "Not Applicable").
#define ENC_BIG_ENDIAN		0x00000000
#define ENC_LITTLE_ENDIAN	0x80000000
#define ENC_NA			0x00000000

Note the "8" after the "0x".

> I know there's big problem with it, cause many dissectors are
> using encoding as little_endian (with TRUE/FALSE values)

That's why no code in the libwireshark core tests the ENC_LITTLE_ENDIAN bit - it just tests whether the encoding field is zero or non-zero.  It is tested as a bit in the CIGI and MQ dissectors, but those dissectors are now using ENC_BIG_ENDIAN and ENC_LITTLE_ENDIAN rather than their own homegrown #defines in place of TRUE and FALSE.

> (I'm unsure if you want to break API/ABI)

Not for 1.4.

> It's quite big change so I think we should wait till 1.5 branch,

For 1.4 my main goals were to

	1) get people to start using the new #defines in new dissectors, so we can move away from just "big-endian/little-endian";

	2) get people to stop adding their own #defines.  The fact that several dissectors had their own #defines for the endianness argument suggests that at least some people wanted something clearer than TRUE or FALSE for indicating whether a field is big-endian or little-endian - and the BSSGP dissector shows that a little clarity helps; see the comment early in that dissector.

> anyway if we want to make so big change maybe we could put encoding in header_field_info?

That's one thing I was considering - for 1.6.  Fields in *most* protocols have a standard representation, so it could be convenient to have the encoding there and have a new proto_add_item_default_encoding() (or whatever) that takes no encoding argument; proto_tree_add_item() would override whatever encoding would be inferred from the header_field_info.

> Btw. what about proto_tree_add_bitmask*()? 
> It still has: (gboolean little_endian) instead of encoding, do you plan to change it?

At some point.