- Docker Tip — How to use the host’s IP Address inside a Docker container on macOS, Windows, and Linux
- Docker Networking on macOS and Windows vs. Linux
- Setup docker-compose
- References
- Use host.docker.internal on linux (docker-compose required)
- What is linux equivalent of "host.docker.internal" [duplicate]
- 11 Answers
Docker Tip — How to use the host’s IP Address inside a Docker container on macOS, Windows, and Linux
Once in a while, you may need your Docker host’s IP address. For instance, you need to be able to connect to the host network from inside a Docker container to access your app or database running locally on the host. Debugging or reverse proxies running on your host are two additional example use-cases. I’ll show you how to easily make this work simultaneously for macOS, Windows, and Linux — because their docker networking settings differ.
Docker Networking on macOS and Windows vs. Linux
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purposes and will not work in a production environment outside of Docker Desktop for Mac/Windows. The gateway is also reachable as gateway.docker.internal. Source: https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds
Source: https://docs.docker.com/docker-for-windows/networking/#use-cases-and-workarounds
On Docker for Linux, the IP address of the gateway between the Docker host and the bridge network is 172.17.0.1 if you are using default networking. Do you see the problem already? The addresses differ, so you cannot simply run docker-compose up -d and have all operating systems behave the same. But I've got you covered; there's an easy approach to make this work.
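If you want to verify that gateway address on your own Linux machine, a quick check (just a sketch, assuming the default docker0 bridge is present) is:

# Show the host's address on the default Docker bridge (typically 172.17.0.1/16)
ip -4 addr show docker0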
Setup docker-compose
I’ve seen some suggestions, like creating a Linux-specific config file docker-compose.override.yml (docs), but the solution a co-worker of mine came up with seems more elegant and less complex to me.
# docker-compose.yml
version: '3.7'
services:
  app:
    image: your-app:latest
    ports:
      - "8080:8080"
    environment:
      DB_UPSTREAM: http://${DOCKER_GATEWAY_HOST:-host.docker.internal}:3000
So, what is happening here? The DB_UPSTREAM should point to the host's IP and port 3000. ${DOCKER_GATEWAY_HOST:-host.docker.internal} resolves to the DOCKER_GATEWAY_HOST environment variable if it is set, and falls back to host.docker.internal otherwise. macOS and Windows users don't have to do anything, while Linux users export the variable once before starting the stack:
export DOCKER_GATEWAY_HOST=172.17.0.1
Now you can start the stack from macOS, Windows, and Linux without further configuration or overrides. If you stick to this pattern, as we do, this works for every project in your company. Great, isn't it? I hope this saves you some time!
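If you do not want to remember the export on Linux, one option is a tiny wrapper script. This is only a sketch under the assumption of a bash shell and the compose file above; the file name up.sh is made up:

#!/usr/bin/env bash
# up.sh - hypothetical helper: only set DOCKER_GATEWAY_HOST on hosts that have a docker0 bridge
if ip -4 addr show docker0 >/dev/null 2>&1; then
  DOCKER_GATEWAY_HOST=$(ip -4 addr show docker0 | awk '/inet /{print $2}' | cut -d/ -f1)
  export DOCKER_GATEWAY_HOST
fi
docker-compose up -d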
References
- compose-file#extra_hosts: an entry with the IP address and hostname is created in /etc/hosts inside the service's containers.
Use host.docker.internal on linux (docker-compose required)
If a firewall is running on the host, you may need to allow access from the Docker network's subnet (172.101.0.0/16 in this example). Here is a command example for firewalld.
sudo firewall-cmd --permanent --zone=trusted --add-source=172.101.0.0/16
host.docker.internal is a special domain for accessing the host's network from a container, supported by Docker for Mac and Docker for Windows. Since macOS has become mainstream among developers it is easy to rely on in projects, while Linux users run into trouble every time.
Given these issues, host.docker.internal will eventually become available on Linux as well, but it will take some time before that is released, so here is a workaround. The actual steps are the ones described above; what follows is a commentary on them.
The docker-compose.override.yml file is read automatically when you run the docker-compose command, which makes it useful for local settings that you do not want to commit to the repository. However, when you use the -f option it is no longer read automatically, so you need to pass this file explicitly as well, as shown below.
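For illustration, a hedged example of passing both files explicitly; the file names are the defaults docker-compose looks for, so adjust them to your project:

# Without -f, docker-compose picks up the override file automatically:
docker-compose up -d

# With -f, every file has to be listed, including the override:
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d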
Add an entry to the container's /etc/hosts.
All containers belong to the network named default unless specified otherwise, so settings written there are automatically applied to every container. The subnet configured here is the subnet of that default network. Normally a subnet is picked automatically so that it does not overlap with existing networks, but since we want a fixed value this time, we choose and set it manually. The host is reachable on every network, and its IP appears to be the subnet address with the lowest bit set to 1, e.g. 172.101.0.1 for 172.101.0.0/16. (I did not find such a statement in the documentation; this is simply what I observed, so it may not hold in every case.) So, if you specify this IP in extra_hosts, you can assign a domain name to it, as in the sketch below.
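Putting those steps together, a minimal compose sketch might look like this; the service name app and its image are placeholders, and the subnet matches the 172.101.0.0/16 range used in the firewalld example above:

# docker-compose.override.yml (hypothetical service and image names)
version: '3.7'
services:
  app:
    image: your-app:latest
    extra_hosts:
      # 172.101.0.1 is the host's address on the pinned subnet below
      - "host.docker.internal:172.101.0.1"
networks:
  default:
    ipam:
      config:
        # pin the default network to a known subnet so the host IP stays predictable
        - subnet: 172.101.0.0/16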
What is linux equivalent of "host.docker.internal" [duplicate]
On Mac and Windows it is possible to use host.docker.internal (Docker 18.03+) inside a container. Is there an equivalent for Linux that works out of the box, without passing env variables or extracting it using various CLI commands?
There is an open PR which adds the "host.docker.internal" feature to Linux. Until it is accepted, as a workaround you can use a special container that adds a unified "dockerhost" host entry, which you can then use from Docker.
It should be noted that docker-for-windows is a specific product line and does not cover Docker on Windows in general. For example, I use Docker on Windows via docker-toolbox (OG) so that it has fewer conflicts with the rest of my setup and I don't need Hyper-V. There is an answer in this thread using grep, awk and netstat, which works for me; although generally, mixed network environments can also be solved with LAN- or WAN-level hostnames rather than machine hostnames. That is more explicit and flexible/composable than hacking at Docker VMs.
11 Answers
It depends on what you're trying to do. If you're running with --net=host, localhost should work fine. If you're using default networking, use the static IP 172.17.0.1. I suspect neither will behave quite the same as those domains.
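For illustration, assuming something on the host is listening on port 3000 (the image and port here are placeholders, not from the original answer):

# Host networking: localhost inside the container is the host itself
docker run --rm --net=host alpine wget -qO- http://localhost:3000/

# Default bridge networking: reach the host via the bridge gateway instead
docker run --rm alpine wget -qO- http://172.17.0.1:3000/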
Wow! 172.17.0.1 actually works! I found this nowhere in the documentation or any of the forums complaining about host.docker.internal not working. Is this IP guaranteed to always link to the host machine?
@JulesColle It is "guaranteed" as long as you are on the default network. 172.17.0.1 is no magic trick, but simply the gateway of the bridge network, which happens to be the host. All containers are connected to bridge unless specified otherwise.
This DOES NOT work in all cases. If you have other networks, a new interface is created for each one: 172.17.0.1, 172.18.0.1, 172.19.0.1 and so on (try ifconfig to list all interfaces). You have to obtain the IP for your particular network manually, for example as shown below.
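If you are not sure which gateway belongs to which network, docker network inspect can print it; my_network below is a placeholder for your network's name:

# Gateway of the default bridge network (usually 172.17.0.1)
docker network inspect bridge --format '{{ (index .IPAM.Config 0).Gateway }}'

# Gateway of a user-defined network
docker network inspect my_network --format '{{ (index .IPAM.Config 0).Gateway }}'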
For Linux systems you can, starting with Docker Engine 20.10, also communicate with the host via host.docker.internal. This does not work automatically; you need to provide the following run flag:
--add-host=host.docker.internal:host-gateway
See also this answer below for adding it to a docker-compose file: https://stackoverflow.com/a/67158212/243392
Is there a way to enable this in daemon.json or something? I’m thinking about test environments of Rancher and Kubernetes, where I don’t want to take care of every single one of the many containers.
When running --add-host=host.docker.internal:host-gateway on CentOS I received the error: invalid argument "host.docker.internal:host-gateway" for "--add-host" flag: invalid IP address in add-host: "host-gateway". Are you expecting to need to replace host-gateway with the actual host IP?
This works for me too; 172.17.0.1 is the bridge network's gateway address in my case. If anyone has different network settings, they can find theirs by running docker inspect.
Only newer Docker versions have the magical string host-gateway, which converts to the Docker default bridge network IP (or the host's virtual IP when using Docker Desktop). You can test by running: docker run --rm --add-host=host.docker.internal:host-gateway ubuntu:18.04 cat /etc/hosts, then see whether it works and shows the IP in the hosts file (there should be a line like 172.17.0.1 host.docker.internal in it).
If you are using Docker Compose + Linux , you have to add it manually (at least for now). Use extra_hosts on your docker-compose.yaml file:
version: '3.7'
services:
  fpm:
    build:
      context: .
    extra_hosts:
      - "host.docker.internal:host-gateway"
Do not forget to update Docker since this only works with Docker v20.10+.
If I want to access port 3000 on the host, is this how I access it from the container: http://host.docker.internal:3000/ ?
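Assuming the host service actually listens on port 3000 on an externally reachable interface (see the note about bind addresses at the end of this page), a quick check from inside the container could look like this; fpm is the service name from the compose snippet above, and curl must exist in the image:

# Hypothetical check: call the host's port 3000 from inside the running container
docker-compose exec fpm curl -s http://host.docker.internal:3000/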
One solution is to use a special container which redirects traffic to the host. You can find such a container here: https://github.com/qoomon/docker-host. The idea is to grab the default route from within the container and install that as a NAT gateway for incoming connections.
An imaginary example usage:
docker-host:
  image: qoomon/docker-host
  cap_add: [ 'NET_ADMIN', 'NET_RAW' ]
  restart: on-failure
  environment:
    - PORTS=999

some-service:
  image: .
  environment:
    SERVER_URL: "http://docker-host:999"
  command: .
  depends_on:
    - docker-host
IP_ADDRESS=$(ip addr show | grep "\binet\b.*\bdocker0\b" | awk '{print $2}' | cut -d '/' -f 1)
The above ip route example prints the gateway, not the docker0 IP. The below should work: ip route | awk '/docker0/ { print $9 }'
For linux there isn’t a default DNS name for the host machine. This can be verified by running the command:
docker run -it alpine cat /etc/hosts
This feature has been requested, however it wasn't implemented. You can check this issue. As discussed, you can use the following command from inside the container to find the IP of the host.
netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2}'
Alternatively, you can provide the host IP to the run command via docker run --add-host dockerHost:…, for example as sketched below.
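As a sketch on the default bridge network, using the static gateway IP discussed earlier and an alpine image just to show the resulting hosts entry:

# Make "dockerhost" resolve to the default bridge gateway inside the container
docker run --rm --add-host dockerhost:172.17.0.1 alpine cat /etc/hosts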
That's not equivalent by any means. Having something that resolves with DNS gives you the ability to put it in config files without evaluating or sed'ing, or other funky stuff.
More often than not, grep | awk can be just awk: awk '/^0\.0\.0\.0/ {print $2}'
Well I would like to say thank you. This worked on my windows setup which uses docker-machine (I know OG). Normally I run a pass-through nginx so that I can talk to docker via a single container, but talking back to the host seems to be very OS / setup specific. It worked for me and I’m ecstatic for that. Thank you!
IP=$(ip -4 route list match 0/0 | awk '{print $3}')
echo "Host ip is $IP"
echo "$IP host.docker.internal" | sudo tee -a /etc/hosts
It will add host.docker.internal to the hosts file. Then you can use it in your Xdebug config.
Here is an example of the env variable in docker-compose.yml:
XDEBUG_CONFIG: remote_host=host.docker.internal remote_autostart=On remote_enable=On idekey=XDEBUG remote_log=/tmp/xdebug.log remote_port=9999
tldr; Access the host via the static IP 172.17.0.1
Doing an HTTP request towards the host:
- Run the following command to get the static IP: ip addr show | grep "\binet\b.*\bdocker0\b" | awk '{print $2}' | cut -d '/' -f 1
- Add the new IP to your allowed hosts
- Use the IP address just found in your requests: req = requests.get('http://172.17.0.1:8000/api/YOUR_ENDPOINT')
host.docker.internal exists on Windows with WSL because Docker Desktop for Windows runs the Docker daemon inside a special WSL VM called docker-desktop. That VM has its own localhost and its own WSL2 interface to communicate with Windows, and it has no static IP: the IP is generated every time the VM is created and is passed as host.docker.internal via the generated /etc/hosts of every distro. Although there is no bridge or real virtual switch, all ports opened on eth0 of the VM's internal network are mapped onto the host's local network, but not onto the host's eth0. Since there is no real bridge and no port mapping, there is nothing to configure. Inside the WSL VM, localhost is the same as the localhost of the Linux machine, so two processes inside the WSL VM can communicate via localhost; cross-distro IPC, however, must use host.docker.internal. It is possible to create a bridge inside the WSL VM, and Docker does exactly that.
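Going by the description above, one hedged way to see whether the generated entry is present (run inside a WSL distro or a container; whether it shows up depends on your Docker Desktop settings):

# Look for the entry Docker Desktop generates
grep host.docker.internal /etc/hosts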
Another hint from the docs: this (using host.docker.internal) is for development purposes and does not work in a production environment outside of Docker Desktop.
Using the docker0 interface IP, say 172.17.0.1, could be a good workaround.
Just be sure that the service you need to reach listens for external connections. A typical example is MySQL, which binds to 127.0.0.1 by default and is therefore unreachable until you allow external connections (e.g. by binding to 0.0.0.0).
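To check what a service is actually bound to, a small sketch on the host (3306 is MySQL's default port; swap in the port of your own service):

# List listening TCP sockets and filter for the port in question;
# 127.0.0.1:3306 means host-only, 0.0.0.0:3306 means reachable from containers
sudo ss -tlnp | grep 3306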