Difference between i386:x64-32, i386, and i386:x86_64
Can someone explain the difference between these three architectures? When I built a 64-bit application on Linux, I got a link error saying:
    skipping incompatible library.a when searching for library.a

Here is the objdump -f output for a.o:

    a.o:     file format elf32-x86-64
    architecture: i386:x64-32, flags 0x00000011:
    HAS_RELOC, HAS_SYMS
    start address 0x00000000
Note that static libraries are generally far more trouble than they are worth. Use dynamic libraries, with an $ORIGIN-relative rpath if you really need one and the binary does not need setuid or any other capabilities.
There are 3 common ABIs usable on standard Intel-compatible machines (not Itanium).
- The classic 32-bit architecture, often called "x86" for short, which has triples like i686-linux-gnu. Registers and pointers are both 32 bits.
- The 64-bit extension originally from AMD, often called "amd64" for short, which has a GNU triple of x86_64-linux-gnu. Registers and pointers are both 64 bits.
- The newer "x32" ABI, with a triple of x86_64-linux-gnux32. Registers are 64 bits, but pointers are only 32 bits, saving a lot of memory in pointer-heavy workloads. It also ensures that all the other 64-bit-only processor features are available.
Each of the above has its own system call interface, its own ld.so, its own complete set of libraries, and so on. But it is possible to run all three on the same kernel.
On Linux, their loaders are:
    % objdump -f /lib/ld-linux.so.2 /lib64/ld-linux-x86-64.so.2 /libx32/ld-linux-x32.so.2

    /lib/ld-linux.so.2:     file format elf32-i386
    architecture: i386, flags 0x00000150:
    HAS_SYMS, DYNAMIC, D_PAGED
    start address 0x00000a90

    /lib64/ld-linux-x86-64.so.2:     file format elf64-x86-64
    architecture: i386:x86-64, flags 0x00000150:
    HAS_SYMS, DYNAMIC, D_PAGED
    start address 0x0000000000000c90

    /libx32/ld-linux-x32.so.2:     file format elf32-x86-64
    architecture: i386:x64-32, flags 0x00000150:
    HAS_SYMS, DYNAMIC, D_PAGED
    start address 0x00000960
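For a quick look at how the three ABIs differ from a program's point of view, here is a minimal sketch. The file and output names are placeholders, and the -m32 and -mx32 builds assume a gcc with multilib and x32 support installed:

    /* abi_sizes.c - minimal sketch showing how the three ABIs differ.
     * Example build commands (require gcc with multilib/x32 support):
     *   gcc -m32  abi_sizes.c -o sizes_i386   # classic 32-bit ABI
     *   gcc -m64  abi_sizes.c -o sizes_amd64  # 64-bit ABI
     *   gcc -mx32 abi_sizes.c -o sizes_x32    # x32 ABI
     */
    #include <stdio.h>

    int main(void)
    {
        /* -m32 prints 4 and 4; -m64 prints 8 and 8; -mx32 prints 4 and 4,
         * even though the code runs with 64-bit registers. */
        printf("sizeof(long)   = %zu\n", sizeof(long));
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        return 0;
    }

Running objdump -f on the three resulting binaries should report the same three architecture strings as the loaders above: i386, i386:x86-64, and i386:x64-32.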
Now, if you're getting the message about "skipping incompatible library", that means something is messed up with your configuration. Make sure you don't have bad variables in the environment or passed on the command line, or files installed outside of your package manager's control.
ARM vs x86: What’s the difference?
Computers designed around ARM processors and those designed around Intel or AMD are not interchangeable. There are two foundational questions that each approaches in different ways:
- How do you balance transistor count and program complexity?
- How do you prioritize speed, power consumption, and cost?
The answers to these questions have guided technology innovation and software development in everything from smartphones to supercomputers over the past four decades.
Defining x86 and ARM processors
To set some context, let’s briefly define x86 and ARM processors.
x86 processors are familiar to many in IT because this is the type of processor used in most computer and server hardware. In an architectural sense, the hardware components within an x86 system (sound and graphics cards, memory, storage, and the CPU) are all independent of each other. Most components have separate chips, called controllers. Components can be changed or expanded without affecting connectivity or the overall hardware platform.
ARM-based systems, by contrast, do not have a separate CPU chip. Instead, the processing unit sits on the same physical substrate as the other hardware controllers, in a single integrated circuit. Additionally, unlike Intel or AMD CPUs, there is no single ARM processor manufacturer. Instead, Arm Holdings licenses chip designs to other hardware manufacturers, which then incorporate the ARM processor into their own hardware designs. Unlike a traditional x86-based computer, ARM chips are not interchangeable and are highly application specific. These components are manufactured together in what's called a system-on-a-chip (SoC).
RISC, CISC, and the effect on development
An age-old debate among early programmers led to the divergence of two main philosophies in computer science: simplify the programmer's job, or simplify the microprocessor's job.
To do anything productive with a computer, an operating system and the programs it executes need to interact with the Central Processing Unit (CPU), as well as other hardware like memory, storage, and network cards. The CPU mediates between the operating system (and running programs) and these pieces of hardware. To simplify the life of programmers, the CPU has a set of predefined actions and calculations called the instruction set or ISA (instruction set architecture). The operating system and the programs it executes (which are both written by programmers) rely on these instructions to perform low-level functions like:
- Interactions between the CPU and hardware (memory, storage, network, etc.)
- Arithmetic functions (addition, subtraction, etc.)
- Data manipulation (binary shifts, etc.)
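As a rough illustration (not part of the original article), the short C function below touches each of those categories; the function name is made up, and the comments note the kind of instruction a compiler typically emits for each line:

    /* isa_demo.c - rough sketch of how ordinary C statements map onto
     * the instruction categories above; the exact instructions depend on
     * the compiler, optimization level, and target architecture. */
    unsigned int checksum(const unsigned int *data, unsigned int count)
    {
        unsigned int sum = 0;
        for (unsigned int i = 0; i < count; i++) {
            sum += data[i];   /* memory access + arithmetic (addition) */
            sum = sum << 1;   /* data manipulation (binary shift)      */
        }
        return sum;           /* result handed back in a register      */
    }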
The original x86 CPUs had (and still have) a very rich instruction set. A single instruction can complete an entire calculation (like multiplication) or move a chunk of data directly from one place in memory to another. This might not sound like much, but multiplication and moving data between places in memory require a lot of instructions at this low level. With x86 computers, such a complex series of operations can be executed with a single instruction. Processing units with this type of instruction set are called complex instruction set computers (CISC).
However, the powerful instructions in a CISC computer mean that it needs more transistors, which eat up space and power.
This led to several projects in the early 1980s that explored energy efficiency and ways to simplify the instruction sets in CPU architectures. Researchers discovered that, in practice, most programs used only a small subset of the huge set of instructions provided by a CISC computer. This ultimately led to the design of reduced instruction set computer (RISC) processors. RISC processors have an instruction set in which each instruction represents only a simple, low-power operation. This makes the assembly language programmer's job more complex, but simplifies the processor's job. With RISC processors, such as Advanced RISC Machines (ARM), complex operations are performed either by running multiple instructions or by pushing the complexity to the compiler rather than the CPU core.
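To make the contrast concrete, here is a sketch (not from the original article) of a single read-modify-write statement in C; the function name is made up, and the instruction sequences in the comments are approximations of what common compilers generate, which varies with compiler and options:

    /* cisc_vs_risc.c - one C statement, two very different instruction
     * sequences (the assembly in the comments is approximate). */
    void add_ten(int *p)
    {
        *p += 10;
        /* x86-64 (CISC): typically one instruction that reads, adds, and
         * writes memory in a single step:
         *     addl $10, (%rdi)
         *
         * 64-bit ARM (RISC): typically a load/modify/store sequence,
         * because arithmetic instructions operate only on registers:
         *     ldr w1, [x0]
         *     add w1, w1, #10
         *     str w1, [x0]
         */
    }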
There are trade-offs here. x86 CPUs offer very fast raw computing power and let programs express complex work in fewer, simpler instruction sequences, but this comes at the expense of a larger, more expensive chip with many transistors. ARM processors can be very fast for certain types of operations, but the extra instructions needed for complex operations can slow them down, because more of the effort of defining and executing an operation is pushed to the program (and the programmer or compiler) rather than to the instruction set.
Also, because of these differences, MIPS (million instructions per second)—an approximate measure of a computer’s raw power—can be difficult to calculate since the different types of processors require different sets of instructions to perform the same activity.
ARM vs x86 for energy usage
RISC architecture was inspired by the need to make smaller chips with better performance for smaller computers (microcomputers), which eventually became PCs. This introduced a second fundamental design question: whether to focus on chip performance (processing or clock speed) or on energy consumption (power efficiency).
Since ARM processors are integrated into an SoC, the emphasis has long been on overall resource management, including low energy consumption and low heat production. For example, ARM architectures (like ARMv8) tend to have simplified cooling systems (no fans on a cell phone, for instance). x86 CPUs, by contrast, have tended to favor high-end processing speed over low power consumption.
While both CPU designs can still deliver high performance (both ARM- and x86-based supercomputers compete to be the fastest in the world), ARM designs tend to focus on small form factors, battery life, reduced cooling requirements, and, perhaps most importantly, cost. This is why ARM processors dominate small electronics and mobile devices like smartphones, tablets, and even Raspberry Pi systems. x86 architectures are more common in servers, PCs, and laptops, where there is a preference for speed and flexibility, and fewer constraints on cooling and size.
Why choose Red Hat Enterprise Linux for ARM?
ARM chips are becoming popular in high performance computing (HPC), and even cloud use cases (such as AWS Graviton and Azure)—two places where Red Hat Enterprise Linux provides a great platform for computation and compatibility, as well as app development, deployment, and optimization.
ARM CPUs have a long history with Linux systems, from Android-based systems for smartphones to custom systems for Raspberry Pi. Now, even one of the fastest supercomputers in the world uses Red Hat Enterprise Linux with ARM architecture (Red Hat powers Fugaku).
Unlike x86 CPUs, the hardware for each ARM design is unique. This is where the power of a broad, open source community can help. Red Hat Enterprise Linux has a large ecosystem of hundreds of hardware vendors whose platforms are tested and certified, and this includes ARM hardware manufacturers and designers. Red Hat has an early access program with our ARM partners that lets customers collaborate on and evaluate new ARM technologies.
Part of the power of Red Hat Enterprise Linux is its performance across different footprints, from cloud to servers to edge. The combination of technology, ecosystem, and consistent stability allows organizations to innovate and adapt wherever their IT needs to go.