Linux detect if 64 bit

Detecting 64bit compile in C

Is there a C macro or some other way to check at compile time whether my C program is being compiled as 64-bit or 32-bit? Compiler: GCC. Operating systems on which I need to do the checks: Unix/Linux. Also, how can I check at run time whether the OS is capable of 64-bit?

Do you want to examine a binary executable file and determine what compiler options were used to create that file?

Wait a sec. Do you mean you already have the binary and want to check it (since you mentioned "was compiled")? Or do you want the check at compile time (since you mentioned a C macro)?

@Daniel: I understand what you want to do; the question is just why. Your question isn't entirely valid, since "64-bit architecture" isn't a very well-defined term (do you want 64-bit registers, a 64-bit data bus, or 64-bit pointers?), and are you programming only for x86, or portably?

9 Answers

Since you tagged this «gcc», try

#if __x86_64__
/* 64-bit */
#endif

@R. No, it’s almost surely the right answer. The macros beginning with _[A-Z] or __ are reserved by the implementation (i.e. the compiler/preprocessor), which means you can’t define them yourself, but you can certainly test their existence to query the implementation.

@Adam: And the result will only be meaningful on some implementations. If you instead test a standard macro like UINTPTR_MAX , it’s reliable across all implementations. (Hint: A valid implementation could happily predefine __LP64__ on 32-bit machines, or as an even more extreme example, it could treat all macro names beginning with __ as defined unless they’re explicitly undefined.)

@R. OTOH, the C99 standard guarantees that uintptr_t is large enough to hold a pointer, but it doesn't guarantee that it is not larger than needed. An implementation could use a 64-bit uintptr_t even though all pointers are 32 bits. Or, for that matter, since uintptr_t is optional in C99, your "standard" macro may not be defined anyway.

Here is the correct and portable test which does not assume x86 or anything else:

#include <stdint.h>

#if UINTPTR_MAX == 0xffffffff
/* 32-bit */
#elif UINTPTR_MAX == 0xffffffffffffffff
/* 64-bit */
#else
/* wtf */
#endif

I know this question is for C, but since it's mixed with (or included from) C++ a lot of the time, here is a C++ caveat: C99 requires that, to get the limit macros defined in C++, you must define __STDC_LIMIT_MACROS before including the header. As the header may already have been included, the only ways to ensure the correct definition are to force the client to always include it as the first header in the source file, or to add -D__STDC_LIMIT_MACROS to your compile options for all files.

Portability is theoretically limited by the fact that uintptr_t is an optional type. I suspect it would be perverse though for a 64 bit implementation to omit it, since unsigned long long is a big enough integer type.

My view is that a system that omits uintptr_t probably has very good reason for doing so (a very pathological or at least atypical memory model, for instance) and that any assumptions made on the basis that this is "a 32-bit system" or "a 64-bit system" would be invalid on such an implementation. As such, the "wtf" case in my answer should probably either contain #error or else hyper-portable code that's completely agnostic to traditional assumptions about memory models, type sizes, etc.


This doesn’t work on Linux PAE kernels. Kernels with PAE activated, are 32 bit but can address RAM like a 64 bit system. This code determines the architecture by checking the maximum addressable RAM. A 32 bit PAE kernel machine would be seen as 64 bit with this, so the inserted source code (possible some inline assembler instruction) would not work.

From my perspective, any architecture that can do 64-bit arithmetic natively is a 64-bit architecture. And there are several architectures with only a 24-bit address bus that are still called "32-bit" because their registers are 32 bits. The same goes for 8-bit MCUs, whose address buses are often 14 to 16 bits or more.

A compiler and platform neutral solution would be this:

// C
#include <stdint.h>
// C++
#include <cstdint>

#if INTPTR_MAX == INT64_MAX
// 64-bit
#elif INTPTR_MAX == INT32_MAX
// 32-bit
#else
#error Unknown pointer size or missing size macros!
#endif

Avoid macros that start with one or more underscores. They are not standard and might be missing on your compiler/platform.

Just thought it was worth mentioning: it took Microsoft 11 years to add stdint.h to its C99 support. If _MSC_VER is less than 1600, it doesn't exist. (Granted, that's old, but it may still be encountered.)

An easy one that will make language lawyers squirm:

#include <limits.h> /* CHAR_BIT */

if (sizeof(void *) * CHAR_BIT == 64) {
    /* 64-bit */
} else {
    /* 32-bit */
}

Since it is a constant expression, an optimizing compiler will drop the test and put only the right code into the executable.

It usually is true, but please, please stop making assertions like "...so an optimizing compiler will...". The preprocessor is the preprocessor, and often the code following "else" will not compile when the condition is true.

I don’t see what the preprocessor has to do with anything? The OP asked for a method to detect the mem model used (64 or 32 bit), he didn’t ask for a preprocessor solution. Nobody asked a way to replace conditionnal compilation. Of course my solution requires that both branches are syntactically correct. The compiler will compile them always. If the compiler is optimizing it will remove the generated code, but even if it doesn’t there’s no problem with that. Care to elaborate what you mean?

OK, you're right. The exact wording was "a C macro or some kind of way". I didn't notice the "some kind of way" at first.

@ShelbyMooreIII: Ummmmm. excuse me? The distinction of a 32-bit vs 64-bit target has absolutely nothing to do with the size of int (indeed, its size differs e.g. in LP64 as used in Linux/BSD vs. LLP64 as used in Windows, while both are very clearly 64-bit). It also has nothing to do with how fast a compiler might optimize a particular operation (or how fast Javascript performs).

Doesn’t detect ILP32 ABIs on 64-bit architectures, e.g. the Linux x32 ABI or the AArch64 ILP32 ABI. That’s 32-bit pointers in 64-bit mode. So long long is still efficient on those targets, unlike on 32-bit CPUs where 64-bit integers take 2 instructions per operation, and 2 registers.


I don’t know what architecture you are targeting, but since you don’t specify it, I will assume run-of-the-mill Intel machines, so most likely you are interested in testing for Intel x86 and AMD64.

#if defined(__i386__)
// IA-32
#elif defined(__x86_64__)
// AMD64
#else
# error Unsupported architecture
#endif

However, I prefer putting these in a separate header and defining my own compiler-neutral macro.

@R.. Yes, I know of that one, and it breaks with C++ code, so I usually stick with compiler-specific ones.

Then use ULONG_MAX instead of UINTPTR_MAX . On any real-world unixy system they’ll be the same. It’s surely a lot more portable to assume long and pointers are the same size than to assume some particular compiler’s macros are present.

@R.. And it’s still wrong on 64-bit Windows. I prefer that my code fails to compile, rather than silently compile the wrong thing.

I am downvoting. See my answer for the reason. Generally speaking, none of these scenarios can be relied upon to give any reliable indication of whether a 64-bit address space and non-emulated 64-bit arithmetic are available, so they are basically useless except in the context of a build system that is not agnostic. Thus it is preferable to set build macros so that the build system can select which variant is compiled.

GLIBC itself uses this (in inttypes.h):

Use this UINTPTR_MAX value to check build type.

#include <stdio.h>
#include <stdint.h>

#if UINTPTR_MAX == 0xffffffffffffffffULL
# define BUILD_64 1
#endif

int main(void)
{
#ifdef BUILD_64
    puts("64-bit build");
#else
    puts("32-bit build");
#endif
    return 0;
}

The same program source can (and should be able to) be compiled on 64-bit computers, 32-bit computers, 36-bit computers, and so on.

So, just by looking at the source, if it is any good, you cannot tell how it will be compiled. If the source is not so good, it may be possible to guess what the programmer assumed it would be compiled under.

Only for a bad program is there a way to tell, from the source alone, how many bits it needs.

You should strive to make your programs work no matter on how many bits they will be compiled for.

If you need to use inline assembly, knowing the number of bits is not helpful. You need to know the name of the arch and adjust your build system/macros/etc. accordingly.

@R. Meh, often the number of bits is good enough. Especially if you know your app is destined exclusively for x86 hardware, then knowing whether the compiler is 32 or 64 bit is often all you need to code the correct assembly source.

@deltamind106: Are you really still producing x86-only products in 2015? How long do you expect that line of business to be around? 🙂

The question is ambiguous because it doesn’t specify whether the requirement is for 64-bit pointers or 64-bit native integer arithmetic, or both.

Some other answers have indicated how to detect 64-bit pointers. Even though the question literally stipulates «compiled as», note this does not guarantee a 64-bit address space is available.

For many systems, detecting 64-bit pointers is equivalent to detecting that 64-bit arithmetic is not emulated, but that is not guaranteed for all potential scenarios. For example, although Emscripten emulates memory using Javascript arrays, which have a maximum size of 2^32 - 1, to provide compatibility for compiling C/C++ code targeting 64-bit, I believe Emscripten is agnostic about the limits (although I haven't tested this). Whereas, regardless of the limits stated by the compiler, Emscripten always uses 32-bit arithmetic. So it appears that Emscripten would take LLVM byte code that targeted 64-bit int and 64-bit pointers and emulate them to the best of Javascript's ability.


I had originally proposed detecting 64-bit «native» integers as follows, but as Patrick Schlüter pointed out, this only detects the rare case of ILP64:

#include <limits.h>

#if UINT_MAX >= 0xffffffffffffffff
// 64-bit "native" integers
#endif

So the correct answer is that generally you shouldn’t be making any assumptions about the address space or arithmetic efficiency of the nebulous «64-bit» classification based on the values of the limits the compiler reports. Your compiler may support non-portable preprocessor flags for a specific data model or microprocessor architecture, but given the question targets GCC and per the Emscripten scenario (wherein Clang emulates GCC) even these might be misleading (although I haven’t tested it).

Generally speaking, none of these scenarios can be relied upon to give any reliable indication of whether a 64-bit address space and non-emulated 64-bit arithmetic are available, so they are basically useless (with respect to said attributes) except in the context of a build system that is not agnostic. Thus, for said attributes, it is preferable to set build macros so that the build system can select which variant is compiled.


How to check for 32-bit / 64-bit kernel for Linux

I need to write a bash script in which I have to check whether the Linux kernel is 32-bit or 64-bit. I am using the uname -a command, and it gives me x86_64. But I believe I cannot use it in a generic way, because the result may differ if someone is using a non-x86 architecture. How do I check for a 32-bit / 64-bit kernel on Linux?

When you say «installation», do you mean the kernel? Because you can use a 64-bit kernel with a 32-bit installation.

7 Answers

The question is rather: what do you intend to achieve by knowing whether you are on 32 or 64? What are the consequences of being on a hypothetical 128-bit environment? And what part actually is being tested for N-bitness? A CPU may support running in 64-bit mode, but the environment be 32-bit. Furthermore, the environment itself may be a mixed-mode; consider running a 64-bit kernel with a 32-bit userspace (done on a handful of classic RISCs). And then, what if the userspace is not of a homogenous bitness/executable format? That is why getconf LONG_BIT is equally pointless to use, because it depends on how it was compiled.

$ /rt64/usr/bin/getconf LONG_BIT
64
$ /usr/bin/getconf LONG_BIT
32
$ file /usr/bin/getconf /rt64/usr/bin/getconf
/usr/bin/getconf:       ELF 32-bit MSB executable, SPARC32PLUS, V8+ Required, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.4, not stripped
/rt64/usr/bin/getconf:  ELF 64-bit MSB executable, SPARC V9, relaxed memory ordering, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.4, not stripped
$ uname -m
sparc64
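If, despite those caveats, you only need the kernel's bitness (not the userspace's), one common approach is to map uname -m output to known machine names. This is a sketch; the architecture lists below are illustrative, not exhaustive, and would need extending for your own targets:

```shell
#!/bin/sh
# Classify the kernel by its machine name. Covers common
# architectures only; unknown names fall through to the last case.
machine=$(uname -m)
case "$machine" in
    x86_64|aarch64|ppc64|ppc64le|s390x|sparc64|riscv64)
        echo "64-bit kernel" ;;
    i?86|armv7l|ppc|s390|sparc)
        echo "32-bit kernel" ;;
    *)
        echo "unrecognized machine: $machine" ;;
esac
```

Note that this classifies the kernel, not the environment a given binary runs in: as shown above, a 64-bit sparc64 kernel can happily run a 32-bit getconf.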

