What is the maximum value of the signed long long data type in 32 bit Linux
I am wondering how to handle a large range of values on a 32 bit Linux platform in a C++ program. Thanks for your help.
std::numeric_limits<long long>::max()
@juanchopanza, The call std::numeric_limits<long long>::max() gives the maximum value of signed long long.
BTW, your program does not perform any output: it is lacking a call to some output function like printf
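For reference, a minimal sketch that queries the limits directly (standard C++, nothing platform-specific assumed; long long is required to be at least 64 bits, and on 32 bit Linux with gcc it is exactly 64 bits):

#include <iostream>
#include <limits>

int main()
{
    // long long must be at least 64 bits wide, even on a 32 bit platform.
    std::cout << "sizeof(long long): " << sizeof(long long) << '\n';
    std::cout << "max: " << std::numeric_limits<long long>::max() << '\n';
    std::cout << "min: " << std::numeric_limits<long long>::min() << '\n';
}

On a typical 32 bit Linux system this prints 9223372036854775807 as the maximum.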
4 Answers
The problem is that the type of an unsuffixed integer constant (like 42) is the smallest of int, long int, or long long int that can hold its value, and the type of an expression is determined by the expression itself, not by the context in which it appears.
So if int happens to be 32 bits on your system, then in this:
unsigned long long val = 140417 * 100000 + 92 + 1;
the constants 140417 and 100000 (which both fit in 32 bits) are of type int, and the multiplication is a 32-bit multiplication, which overflows because the product of those two numbers doesn't fit in 32 bits. (The type of a standalone literal is adjusted based on its value; the type of a larger expression is not.)
The most straightforward way to avoid this is to use constants of type unsigned long long :
unsigned long long val = 140417ULL * 100000ULL + 92ULL + 1ULL;
(It happens that not all the ULL suffixes are necessary, but it doesn’t hurt to apply them to all the constants in the expression.)
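To make the difference concrete, here is a small sketch; note that the first initializer has undefined behavior on a platform with 32-bit int (signed overflow) and is shown only to illustrate the trap:

#include <cstdio>

int main()
{
    // All constants fit in a 32-bit int, so the arithmetic is done in int:
    // 140417 * 100000 overflows a 32-bit int, which is undefined behavior.
    unsigned long long bad = 140417 * 100000 + 92 + 1;

    // Suffixing the first constant forces the whole expression into
    // unsigned long long arithmetic, so no overflow occurs.
    unsigned long long good = 140417ULL * 100000 + 92 + 1;

    std::printf("bad:  %llu\n", bad);
    std::printf("good: %llu\n", good); // prints 14041700093
}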
Why does «long int» have the same size as «int»? Does this modifier work at all?
Ehm... I kind of thought that modifiers like long / short expand or reduce the amount of memory allocated when a variable is created, but:
#include <stdio.h>

#define test_int int
#define long_int long int
#define long_long_int long long int

int main(void)
{
    printf("%i\n", (int) sizeof(test_int));      // output 4
    printf("%i\n", (int) sizeof(long_int));      // output 4. Why? Wasn't its size modified?
    printf("%i\n", (int) sizeof(long_long_int)); // output 8
    return 0;
}
For unknown reasons, it prints the sizes of int and long int as the same. I use VC++ 2010 Express Edition. Sorry, it's hard to find an answer on Google; it always shows long and int as separate types.
Why shouldn’t the sizes be the same?! The types are different, that’s all that matters. Each type can represent a set of values that’s prescribed by the standard, but it’s free to be able to represent more than that.
As far as I know, the standard only says that long is at least as long as int. It can't be shorter, but it doesn't have to be bigger than int. Everything else is machine dependent.
The mandatory ranges of representable values are provided in this answer. Noteworthy: short is at least 16 bits, long int at least 32, and long long int at least 64 bits. Everything else is unspecified. For example, a platform could very well have every type be 256 bits long, and thus sizeof every type would be 1.
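A quick way to check this on a given implementation is a minimal sketch using CHAR_BIT from <climits>; sizeof counts in units of char, and a char is CHAR_BIT bits wide (at least 8):

#include <climits>
#include <cstdio>

int main()
{
    // On a hypothetical platform with 256-bit chars, every sizeof
    // below could legitimately be 1.
    std::printf("CHAR_BIT:  %d\n", CHAR_BIT);
    std::printf("short:     %zu chars, %zu bits\n", sizeof(short), sizeof(short) * CHAR_BIT);
    std::printf("int:       %zu chars, %zu bits\n", sizeof(int), sizeof(int) * CHAR_BIT);
    std::printf("long:      %zu chars, %zu bits\n", sizeof(long), sizeof(long) * CHAR_BIT);
    std::printf("long long: %zu chars, %zu bits\n", sizeof(long long), sizeof(long long) * CHAR_BIT);
}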
6 Answers
The reason that MS chose to make long 32 bits even on a 64-bit system is that the existing Windows API, for historical reasons, uses a mixture of int and long for similar things, and the expectation is that these are 32-bit values (some of this goes back to times when Windows was a 16-bit system). So to ease the conversion of old code to the new 64-bit architecture, they chose to keep long at 32 bits, so that applications mixing int and long in various places would still compile.
There is nothing in the C++ standard that dictates that a long should be bigger than an int (it certainly isn't on most 32-bit systems). All the standard says is that the size of short is no greater than that of int, which in turn is no greater than that of long.
Indeed, it mentions that a short should at least be able to hold values from -32767 to +32767. Note that the standard doesn't mandate -32768 (the lowest value for a 16-bit number in two's complement) because it doesn't specify how negative numbers should be represented.
Thanks a lot Mats, but it seems long has been just another alias for int for the past 30 years! If it is never going to be bigger than that, why bother mentioning it in the standard at all? It just makes no sense to have something called long that, not a single time in 30 years, on major compilers and systems, provides more space than int! This is ridiculous; all new languages have meaningful semantics, and C++ is an exception in just about everything, it seems!
@Hossein: 30 years ago, many compilers had int as a 16-bit value, and long was 32. These days, on non-Windows platforms that have 64-bit processors, long is indeed 64 bits, while int is 32 bits. But the main point is that there is no GUARANTEE that long is any particular size, other than a minimum of 32 bits. It is then up to the compiler whether it's larger than that or not.
@Hossein C and C++ are both used on many different systems. There are many non-Windows systems where a long is 64 bits, and there are systems where an int is 16 bits and a long is 32 bits.
All that the standard requires is that:

1 == sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)

(and that the corresponding unsigned types have the same size as the signed types).
In addition, there are minimum sizes for each type, indirectly specified by limits on the values of INT_MAX, etc.: a char must be at least 8 bits, a short and an int 16, a long 32, and a long long 64.
On 16 bit platforms, it is usual for both short and int to be 16 bits; on 32 bit platforms (and the 36 and 48 bit platforms that still exist), int and long are almost always the same size. On modern 64 bit platforms (with byte addressing), the rational solution would be to make all four types have different sizes (although one could argue that according to the standard, int should be 64 bits, which would mean that int , long and long long all had the same size).
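Those indirect limits can be inspected directly via <climits>; a minimal sketch:

#include <climits>
#include <cstdio>

int main()
{
    // The standard only pins down minimum magnitudes for these macros:
    // INT_MAX >= 32767, LONG_MAX >= 2147483647,
    // LLONG_MAX >= 9223372036854775807.
    std::printf("INT_MAX   = %d\n", INT_MAX);
    std::printf("LONG_MAX  = %ld\n", LONG_MAX);
    std::printf("LLONG_MAX = %lld\n", LLONG_MAX);
}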
Long int size in Linux
Q: How big is an int, long int etc. in C?
A: It depends. (The standard leaves it completely up to the compiler, which also means the same compiler can make it depend on options and target architecture.)
In practice I have not used anything but gcc on Linux for a couple of years, so for myself the answer is a bit easier. However, because I don't program in C/C++ that often these days, each time I do I soon hit the question of how big that integer was again, especially when interfacing with low-level code that should work correctly on both 32 bit and 64 bit machines. At the moment I mostly use the Intel architecture, so let me limit this post to Intel. (I have used a lot of ARM in the past, and this week glibc with support for AArch64 came out; maybe the results can be checked against ARM later.)
type \ executable[1] | 32 bit | 64 bit
short int            | 16 bit | 16 bit
int                  | 32 bit | 32 bit
long int             | 32 bit | 64 bit
long long int        | 64 bit | 64 bit
size_t               | 32 bit | 64 bit
void* [2]            | 32 bit | 64 bit
[1] A 32 bit executable can be used in a 64 bit user space (provided a 32 bit loader and the required shared libraries have been installed), a 32 bit user space can run on a 64 bit kernel, and a 32 bit kernel can run on a 64 bit processor. So it's really the word length of the executable that counts.
[2] In exotic cases pointers to different types can have different sizes, see http://stackoverflow.com/questions/6751749/size-of-a-pointer . sizeof (void *) itself is well defined by the C standard; only its value is implementation-specific. gcc compiles it without warning and returns a value which looks correct for gcc on the Intel systems covered here.
The results were produced by a small piece of code built around sizeof and printf.
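A minimal sketch of such a program (an illustrative reconstruction, not the exact original listing) could look like this; it assumes 8-bit chars, as on the Intel systems considered here:

#include <cinttypes>
#include <cstdint>
#include <cstdio>

int main()
{
    // The "z" length modifier is for size_t, so the same format string
    // works in both 32 bit and 64 bit builds.
    std::printf("short int:     %zu bit\n", sizeof(short int) * 8);
    std::printf("int:           %zu bit\n", sizeof(int) * 8);
    std::printf("long int:      %zu bit\n", sizeof(long int) * 8);
    std::printf("long long int: %zu bit\n", sizeof(long long int) * 8);
    std::printf("size_t:        %zu bit\n", sizeof(size_t) * 8);
    std::printf("void*:         %zu bit\n", sizeof(void *) * 8);

    // PRIu32 expands to the right conversion specifier for uint32_t,
    // whatever underlying type the implementation chooses for it.
    std::uint32_t fixed = 42;
    std::printf("a uint32_t:    %" PRIu32 "\n", fixed);
    return 0;
}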
Slightly related, the code also shows two features for 32/64 bit portable usage of printf. The "z" length modifier refers to size_t; see printf(3) for a couple of similar ones. The PRIu32 macro makes sure that a constant word length is used regardless of the compiler-specific length of the integer types. This and several similar macros are in fact standardized in C99; they are defined in the header inttypes.h.
P.S. A previous version of this post contained stupid copy paste errors resulting in wrong results. Hopefully all of them are fixed now.