Why does forcing this define to be unsigned cause a signed/unsigned comparison warning?


An odd case of a GCC warning I've come across tonight: the simple, classic "comparison between signed and unsigned integer expressions [-Wsign-compare]". No harm, no foul; best I go check my types and make sure I'm not comparing signed and unsigned types. Easy.

 

Except, not. I'm working in a codebase with a MISRA-C compliance requirement, so I cannot use the standard abs library function to take the absolute value of what could be the maximum negative value (for which the standard library has undefined behaviour). I have two numbers and need the difference between them; to avoid a negative result, we first check which is larger so that we can subtract in the order that always produces a positive value.

 

Here's a minimal reproducible example:

 


#include <stdint.h>
#include <stdio.h>

#define CONSTANT 100u

static struct {

	uint16_t example_var_a;
	uint16_t example_var_b;

} demo = {

	.example_var_a = 2600,
	.example_var_b = 2400
};

int main(void)
{
	if (demo.example_var_a > demo.example_var_b){
		if ((demo.example_var_a - demo.example_var_b) > CONSTANT){
			printf("%s\n", "A is greater than B, and the difference is greater than CONSTANT");
		}
	}
	else {
		if ((demo.example_var_b - demo.example_var_a) > CONSTANT){
			printf("%s\n", "A is less than B, and the difference is greater than CONSTANT");
		}
	}
}

 

This compiles and runs on Windows using GCC (MinGW.org GCC-6.3.0-1) with no warnings or messages, and prints the correct message depending on the values of a and b.

 

If I compile that same code for an ATSAMC21 (only with the real application calls in place of printf...), using GCC (gcc version 6.3.1 20170620 (release) [ARM/embedded-6-branch revision 249437] (Atmel build: 508)), I get a "comparison between signed and unsigned integer expressions [-Wsign-compare]" warning. But why? Both example_var_a and _b are unsigned, and CONSTANT is forced to be unsigned with a u suffix (said suffix also being a MISRA requirement).

 

Most confusingly, removing the u suffix gets rid of the warning!

 

So, what is it about Cortex M0+ data types, combined with C's integer promotion rules, that I don't understand, and that's causing this warning to occur? I would expect #define CONSTANT 100 to default to int, and the u suffix should make CONSTANT an unsigned int. At which point, we're comparing unsigned shorts (uint16_t) with unsigned ints, so the shorts get promoted to ints?

 

Setting the value of CONSTANT to > 255 to force it to a short makes no difference. Including a cast in the definition of CONSTANT, however, as in #define CONSTANT (uint8_t)100, does remove the warning. But then, so does (int8_t)100.

 

Edit:

 

  • checking the output of the pre-processor, CONSTANT is correctly replaced with 100u
  • compiling with -funsigned-char makes no difference
Last Edited: Sat. May 7, 2022 - 09:21 PM

Think about it. Subtracting two unsigned values produces a signed result.

Compare this signed result with a signed constant.

 

Put in some example values e.g. -1 < 10u

0xffff < 0x000a is false.


david.prentice wrote:

Think about it. Subtracting two unsigned values produces a signed result.

 

...does it? I've spent the last decade believing and witnessing that unsigned subtraction in C is always performed modulo 2^n, such that (uint16_t)0 - 1 == 65535, not -1.

 

david.prentice wrote:

Put in some example values e.g. -1 < 10u

0xffff < 0x000a is false.

 

I agree completely with the example, the signed value is just implicitly converted to unsigned, and is thus of course not less than 10. But referring to the above, since when does subtracting two unsigned integers convert the result to signed?

Last Edited: Sun. May 8, 2022 - 06:01 PM

Go on. What is the result of 5u - 8u?

What is the result of 8u - 5u?


david.prentice wrote:

Go on. What is the result of 5u - 8u?

What is the result of 8u - 5u?

 

-3, and 3. Well I'll be damned.

 

Conversion to int doesn't happen with an increment or decrement though, nor does it happen with addition or multiplication: (uint16_t)60000u + 10000u = 4464.

 

Which leaves me just as confused:

 

  • Under what rules does subtraction cause the conversion to int?
  • Why does the same code not cause the same warnings when compiling for Windows, vs compiling for Cortex M0+?
  • Why does including a cast in the define result in different behaviour from the u suffix? I thought a u suffix forces the defined integer to be an unsigned type, just as a cast to an unsigned type would?
  • Edit - and even more so, C's usual arithmetic conversion rules (https://docs.microsoft.com/en-us...) say "if either operand is of type unsigned int, the other operand is converted to type unsigned int." So even though the result of the subtraction is signed, comparing it to an unsigned integer should promote it to unsigned anyway? (As per https://stackoverflow.com/questi... - "The same arithmetic conversions apply to comparison operators too.")
Last Edited: Sun. May 8, 2022 - 06:07 PM