c - Understanding implicit conversions for printf




The C99 standard differentiates between implicit and explicit type conversions (6.3 Conversions). My guess, though I could not find it stated, is that implicit conversions are performed when the target type is of greater precision than the source and can represent its value. [That is what I consider to happen from int to double.] Given that, look at the following example:

#include <stdio.h>  // printf
#include <limits.h> // INT_MIN
#include <stdint.h> // endianness check
#define IS_BIG_ENDIAN (*(uint16_t *)"\0\xff" < 0x100)

int main()
{
    printf("sizeof(int): %lu\n", sizeof(int));
    printf("sizeof(float): %lu\n", sizeof(float));
    printf("sizeof(double): %lu\n", sizeof(double));
    printf(IS_BIG_ENDIAN == 1 ? "Big" : "Little");
    printf(" endian\n");
    int a = INT_MIN;
    printf("INT_MIN: %i\n", a);
    printf("INT_MIN as double (or float?): %e\n", a);
}

I was surprised to find this output:

sizeof(int): 4
sizeof(float): 4
sizeof(double): 8
Little endian
INT_MIN: -2147483648
INT_MIN as double (or float?): 6.916919e-323

So the printed value is a subnormal floating-point number, near the minimal subnormal positive double 4.9406564584124654 × 10^−324. Unusual things happen when I comment out the two printf calls for endianness; I then get a different value for the double:

#include <stdio.h>  // printf
#include <limits.h> // INT_MIN
#include <stdint.h> // endianness check
#define IS_BIG_ENDIAN (*(uint16_t *)"\0\xff" < 0x100)

int main()
{
    printf("sizeof(int): %lu\n", sizeof(int));
    printf("sizeof(float): %lu\n", sizeof(float));
    printf("sizeof(double): %lu\n", sizeof(double));
    // printf(IS_BIG_ENDIAN == 1 ? "Big" : "Little");
    // printf(" endian\n");
    int a = INT_MIN;
    printf("INT_MIN: %i\n", a);
    printf("INT_MIN as double (or float?): %e\n", a);
}

Output:

sizeof(int): 4
sizeof(float): 4
sizeof(double): 8
INT_MIN: -2147483648
INT_MIN as double (or float?): 4.940656e-324

gcc --version: gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2
uname: x86_64 GNU/Linux

The compiler options were:

gcc -o x x.c -Wall -Wextra -std=c99 --pedantic

And yes, there was one warning:

x.c: In function ‘main’:
x.c:15:3: warning: format ‘%e’ expects argument of type ‘double’, but argument 2 has type ‘int’ [-Wformat=]
   printf("INT_MIN as double (or float?): %e\n", a);
   ^
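These tiny values can at least be decoded: for a subnormal double, the value is simply the raw 64-bit pattern times 2^-1074. A minimal sketch (assuming IEEE 754 doubles, as on x86_64):

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    // Decode the raw bit pattern behind a tiny subnormal double.
    // For subnormals, value == bits * 2^-1074.
    double d = 6.916919e-323; // the value from the first run above
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    printf("bits: 0x%016" PRIx64 "\n", bits); // 0x000000000000000e, i.e. 14
    // The 4.940656e-324 from the second run decodes to bit pattern 0x1,
    // the smallest positive subnormal double.
}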

But I still cannot understand what is happening.

In little endianness I consider MIN_INT as 00...0001 and MIN_DBL (subnormal) as 100..00#, starting with the mantissa, followed by the exponent, and concluding with # as the sign bit. Is this form of applying the "%e" format specifier to an int an implicit cast? A reinterpret cast?
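For reference, here is a minimal sketch (assuming IEEE 754 doubles and a little-endian layout) of the two operations I can imagine: a value-preserving conversion and a byte reinterpretation. Notably, neither reproduces the 6.916919e-323 printed above.

#include <stdio.h>
#include <string.h>
#include <limits.h>

int main(void)
{
    int a = INT_MIN;

    // Value-preserving conversion: a genuine int-to-double conversion.
    double converted = (double)a;
    printf("converted:     %e\n", converted); // -2.147484e+09

    // Byte reinterpretation: copy the int's 4 bytes into the low half
    // of a zeroed 8-byte double (little endian).
    double reinterpreted = 0.0;
    memcpy(&reinterpreted, &a, sizeof a);
    printf("reinterpreted: %e\n", reinterpreted); // a subnormal, ~1.06e-314
}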

I am lost, please enlighten me.

printf("int_min double (or float?): %e\n", a);

The above line has a problem: you cannot use %e to print an int. The behavior is undefined.

You should use

printf("int_min double (or float?): %e\n", (double)a);

or

double t = a;
printf("INT_MIN as double (or float?): %e\n", t);

Related post: This post explains how using the wrong print specifiers in printf can lead to UB.
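The underlying reason an explicit conversion is needed: printf is variadic, and the only implicit conversions applied to variadic arguments are the default argument promotions (C99 6.5.2.2). float is promoted to double, but int is not. A minimal sketch of the distinction:

#include <stdio.h>

int main(void)
{
    float f = 1.5f;
    int   i = -5;

    // OK: the default argument promotions convert float to double,
    // so %e matches the argument actually passed.
    printf("%e\n", f);

    // OK: explicit conversion before the call.
    printf("%e\n", (double)i);

    // Undefined behavior: int is NOT promoted to double, yet %e
    // tells printf to read a double.
    // printf("%e\n", i);
}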

Tags: c, gcc, type-conversion, implicit-conversion
