Command to determine whether Ubuntu is running in a virtual machine?
Is there a command or tool that can be used to determine whether Ubuntu is running as a guest within a virtualization program such as VirtualBox or Qemu or whether it is running on the bare metal?
3 Answers
virt-what is a shell script which can be used to detect if the program is running in a virtual machine.
The program prints out a list of "facts" about the virtual machine, derived from heuristics. One fact is printed per line.
If nothing is printed and the script exits with code 0 (no error), then it can mean either that the program is running on bare-metal or the program is running inside a type of virtual machine which we don’t know about or cannot detect.
aws
    Amazon Web Services cloud guest. Status: contributed by Qi Guo.

bhyve
    This is a bhyve (FreeBSD hypervisor) guest. Status: contributed by Leonardo Brondani Schenkel.

docker
    This is a Docker container. Status: confirmed by Charles Nguyen.

hyperv
    This is the Microsoft Hyper-V hypervisor. Status: confirmed by RWMJ.

ibm_power-kvm
    This is an IBM POWER KVM guest. Status: contributed by Adrian Likins.

ibm_power-lpar_shared
ibm_power-lpar_dedicated
    This is an IBM POWER LPAR (hardware partition) in either shared or dedicated mode. Status: contributed by Adrian Likins.

ibm_systemz
    This is an IBM SystemZ (or other S/390) hardware partitioning system. Additional facts listed below may also be printed.

ibm_systemz-direct
    This is Linux running directly on an IBM SystemZ hardware partitioning system. This is expected to be a highly unusual configuration; if you see this result you should treat it with suspicion. Status: not confirmed.

ibm_systemz-lpar
    This is Linux running directly on an LPAR on an IBM SystemZ hardware partitioning system. Status: confirmed by Thomas Huth.

ibm_systemz-zvm
    This is a z/VM guest running in an LPAR on an IBM SystemZ hardware partitioning system. Status: confirmed by RWMJ using a Fedora guest running in z/VM.

ibm_systemz-kvm
    This is a KVM guest running on an IBM SystemZ hardware system. Status: contributed by Thomas Huth.

ldoms
    The guest appears to be running on a Linux SPARC system with Oracle VM Server for SPARC (Logical Domains) support. Status: contributed by Darren Kenny.

ldoms-control
    This is the Oracle VM Server for SPARC (Logical Domains) control domain. Status: contributed by Darren Kenny.

ldoms-guest
    This is the Oracle VM Server for SPARC (Logical Domains) guest domain. Status: contributed by Darren Kenny.

ldoms-io
    This is the Oracle VM Server for SPARC (Logical Domains) I/O domain. Status: contributed by Darren Kenny.

ldoms-root
    This is the Oracle VM Server for SPARC (Logical Domains) root domain. Status: contributed by Darren Kenny.

linux_vserver
    This is printed for backwards compatibility with older virt-what, which could not distinguish between a Linux VServer container guest and host.

linux_vserver-guest
    This process is running in a Linux VServer container. Status: contributed by Barış Metin.

linux_vserver-host
    This process is running as the Linux VServer host (VxID 0). Status: contributed by Barış Metin and Elan Ruusamäe.

lxc
    This process is running in a Linux LXC container. Status: contributed by Marc Fournier.

kvm
    This guest is running on the KVM hypervisor using hardware acceleration. Note that if the hypervisor is using software acceleration you should *not* see this, but should see the "qemu" fact instead. Status: confirmed by RWMJ.

lkvm
    This guest is running on the KVM hypervisor using hardware acceleration, and the userspace component of the hypervisor is lkvm (a.k.a. kvmtool). Status: contributed by Andrew Jones.

openvz
    The guest appears to be running inside an OpenVZ or Virtuozzo container. Status: contributed by Evgeniy Sokolov.

ovirt
    The guest is running on an oVirt node. (See also "rhev" below.) Status: contributed by RWMJ, not confirmed.

parallels
    The guest is running inside Parallels Virtual Platform (Parallels Desktop, Parallels Server). Status: contributed by Justin Clift.

powervm_lx86
    The guest is running inside the IBM PowerVM Lx86 Linux/x86 emulator. Status: data originally supplied by Jeffrey Scheel, confirmed by Yufang Zhang and RWMJ.

qemu
    This is the QEMU hypervisor using software emulation. Note that for KVM (hardware accelerated) guests you should *not* see this. Status: confirmed by RWMJ.

rhev
    The guest is running on a Red Hat Enterprise Virtualization (RHEV) node. Status: confirmed by RWMJ.

uml
    This is a User-Mode Linux (UML) guest. Status: contributed by Laurent Léonard.

virt
    Some sort of virtualization appears to be present, but we are not sure what it is. In some very rare corner cases where we know that virtualization is hard to detect, we will try a timing attack to see if certain machine instructions are running much more slowly than they should be, which would indicate virtualization. In this case, the generic fact "virt" is printed.

virtage
    This is the Hitachi Virtualization Manager (HVM) Virtage hardware partitioning system. Status: data supplied by Bhavna Sarathy, not confirmed.

virtualbox
    This is a VirtualBox guest. Status: contributed by Laurent Léonard.

virtualpc
    The guest appears to be running on Microsoft VirtualPC. Status: not confirmed.

vmm
    This is a vmm (OpenBSD hypervisor) guest. Status: contributed by Jasper Lievisse Adriaanse.

vmware
    The guest appears to be running on the VMware hypervisor. Status: confirmed by RWMJ.

xen
    The guest appears to be running on the Xen hypervisor. Status: confirmed by RWMJ.

xen-dom0
    This is the Xen dom0 (privileged domain). Status: confirmed by RWMJ.

xen-domU
    This is a Xen domU (paravirtualized guest domain). Status: confirmed by RWMJ.

xen-hvm
    This is a fully virtualized (HVM) Xen guest. Status: confirmed by RWMJ.
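As a quick sanity check, a minimal wrapper (assuming the virt-what package is installed and run with root privileges) distinguishes the three outcomes the manual describes:

```shell
# Hypothetical wrapper: separates "facts printed", "clean empty exit"
# and "error" -- output will vary with the host.
facts=$(virt-what 2>/dev/null); status=$?
if [ "$status" -ne 0 ]; then
    echo "virt-what failed or is not installed (status $status)"
elif [ -n "$facts" ]; then
    echo "virtualization detected:"
    echo "$facts"
else
    echo "bare metal, or a hypervisor virt-what cannot detect"
fi
```

On an Ubuntu guest under VirtualBox this would typically report the "virtualbox" fact; on bare metal it falls through to the last branch.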
Sorry, but this is your guy. 😉
Detecting VMM on Linux
I’m trying to detect whether I am running in a virtual environment (VMware, VirtualBox, etc.).
On Windows I’ve used several ASM tricks, but I can’t use them on Linux, mainly because the code might be compiled and run on either 32-bit or 64-bit Linux. The following code works on both 32-bit and 64-bit Windows and was tested on VMware, VirtualBox and other virtual machines:
#include <stdio.h>

int idtCheck()
{
    unsigned char m[2];
    __asm sidt m;
    printf("IDTR: %2.2x %2.2x\n", m[0], m[1]);
    return (m[1] > 0xd0) ? 1 : 0;
}

int gdtCheck()
{
    unsigned char m[2];
    __asm sgdt m;
    printf("GDTR: %2.2x %2.2x\n", m[0], m[1]);
    return (m[1] > 0xd0) ? 1 : 0;
}

int ldtCheck()
{
    unsigned char m[2];
    __asm sldt m;
    printf("LDTR: %2.2x %2.2x\n", m[0], m[1]);
    return (m[0] != 0x00 && m[1] != 0x00) ? 1 : 0;
}

int main(int argc, char * argv[])
Now GCC complains about the __asm in all of the functions. I tried asm(), plain asm and other forms that I have used in the past, but none of them work. Any ideas?
I suggest you read the whole document, specifically the text after NOTE:. Then you would be able to ask a relevant question or figure it out yourself.
SIDT produces a 6 (32-bit) or 10 (64-bit) byte result, don’t shove it into a 2-byte char array, unless you like overwriting random bits of your stack.
@bdonlan Noted! Thanks for the comment; I fixed that. However, I still need to figure out how to write __asm on Linux. Any ideas?
Please edit the title of your question. You are asking about GCC’s inline assembler feature, not about VMs, Linux, or anything else.
1 Answer
Well, I haven’t disassembled the machine code in there, but here’s a version using GCC inline assembler:
This should ‘work’ even for 64-bit, but I’ve not tested it there.
HOWEVER! This isn’t guaranteed to give the result you want, at all. First, it won’t work with hardware virtualization, as you can’t see the true IDT. Second, it relies on an implementation detail of VMWare and Virtual PC, which could probably be changed quite easily. It might even kick off false alarms if your OS decides to put its IDT at a high address. So I don’t recommend this approach at all.
For virtual machines using the VMX hardware support, probably your best bet would be to do something which should be fast in hardware but require a trap in a virtual machine, and check the timing. Something like CPUID would be a good bet; benchmark it in a VM and on real hardware (compared against a dummy loop that does an ADD or something to deal with differing clock rates), and see which profile the test machine more closely matches. Since each CPUID will have to exit the VM to ask the host kernel what capabilities it wants to present, it will take a LOT longer than on real hardware.
For other kinds of virtual machines, you could do a similar timing check simply by loading a control register or debug register — the VM will have to trap it, or replace it with emulated code. If you’re on something like VMware, it might replace the trap with some emulated code — in this case, you might be able to identify it by timing modifying the code containing the control register or debug register operation. This will force the VM to invalidate its cached code, necessitating an expensive trap to the VM, which will show up on your benchmark.
Note that both of these approaches will require help from the OS kernel — it is going to be very hard to determine if you’re in a VM if you don’t have control of the emulated kernel at least. And if the VM is really sophisticated, it might fake timing as well, at which point things get really difficult — but this tends to kill performance, and result in clock drift (easy to identify if you can connect to the internet and query a time server somewhere!), so most commercial VMs don’t do it.
How to detect if a machine is virtualized or not
I have Linux, AIX, SunOS and HP-UX machines, and I want to detect whether a machine is virtual or not. So far I found this article, which helped me get the information on Linux:
dmesg | grep -i virtual    # on Linux machines
But I also need commands for AIX, HP-UX and SunOS. Any help?
That dmesg Linux trick only works if the kernel buffer that holds dmesg messages hasn’t been overwritten by more recent messages. If you do any kind of iptables logging, those log messages will wipe out your dmesg boot messages fairly quickly. Some systems save the initial dmesg output under /var/log/ and you might find it there.
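A slightly more robust variant along those lines (the /var/log/dmesg fallback is a best-effort guess; the file name and whether it exists at all vary by distribution):

```shell
# Check the live kernel ring buffer first, then the boot-time copy
# some distros keep under /var/log.
{ dmesg 2>/dev/null; cat /var/log/dmesg 2>/dev/null; } \
  | grep -i -m1 -E 'hypervisor|paravirtual|virtual' \
  || echo "no virtualization hints in kernel messages"
```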
5 Answers
Try http://people.redhat.com/~rjones/virt-what/. Correct me if I’m mistaken, but I think Red Hat uses this in their Satellite to detect and group virtualized systems.
The definition of ‘virtual’ is too loose. If we assume that for AIX you mean any AIX image which is an LPAR (or a micro-partition, or any other terminology IBM chooses), then you can use uname -L. For example:
nonlpar# uname -L
-1 NULL
lparhost# uname -L
5 lparhost
If you mean a WPAR, you can use uname -W: a result of 0 means you’re not in a WPAR, and a result greater than 0 means you are inside one.
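A sketch of that WPAR check (safe to run elsewhere: on non-AIX systems uname -W simply fails and the script falls through to the global-environment branch):

```shell
# uname -W is AIX-specific; treat failure as "not a WPAR".
if [ "$(uname -W 2>/dev/null || echo 0)" -gt 0 ]; then
    echo "running inside a WPAR"
else
    echo "global environment (not a WPAR)"
fi
```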
If you mean, does the AIX instance rely on a VIO server, then there’s no solid, reliable mechanism for knowing that other than looking at the devices and working out whether they’re presented via VIO servers.
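For the VIO case, a device-listing heuristic along those lines might look like this (lsdev -Cc adapter is AIX syntax; the guard keeps the snippet harmless on other systems, and the grep pattern is only a rough filter):

```shell
# Virtual adapters presented by a VIO server usually carry "Virtual"
# in their lsdev description on AIX.
if command -v lsdev >/dev/null 2>&1; then
    lsdev -Cc adapter 2>/dev/null | grep -i virtual \
      || echo "no virtual adapters found"
else
    echo "lsdev not available (not AIX)"
fi
```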
It’s worth remembering that for pSeries hardware running AIX, just about everything these days is an LPAR, and so essentially virtual, even if it’s the only OS instance using the hardware.