I have these two simple containers on a fresh installation of Arch Linux (I ran the same test on Alpine as well):
alpine1:

```yaml
version: "3"
services:
  app1:
    image: alpine:latest
    container_name: alpine1
    restart: unless-stopped
    command: sleep infinity
    ports:
      - "8001:8001"
    networks:
      dnet:

networks:
  dnet:
    driver: bridge
```
alpine2:

```yaml
version: "3"
services:
  app1:
    image: alpine:latest
    container_name: alpine2
    restart: unless-stopped
    command: sleep infinity
    ports:
      - "8002:8002"
    networks:
      dnet:

networks:
  dnet:
    driver: bridge
```
alpine1_dnet:
- IP: 172.18.0.2/16
- Gateway: 172.18.0.1
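(For reference, these values can be read back with `docker network inspect`; Compose names the network `<project>_<network>`, so for a project named `alpine1` that is `alpine1_dnet`:)

```sh
# Show the subnet, gateway, and attached containers of the user-defined bridge
docker network inspect alpine1_dnet
```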
They should be isolated from each other, but if I execute:

```sh
docker exec alpine2 nc -vz 172.18.0.1 8001
```

it returns open.
While it hangs if I use alpine1's IP address:

```sh
docker exec alpine2 nc -vz 172.18.0.2 8001
```
What am I missing? I was expecting that no communication would be possible between the two containers, since they are attached to different networks (as specified here).

I also tried the default bridge and the behavior is exactly the same, so it seems there is no difference between the default bridge and a user-defined bridge.
The address `172.18.0.1` is an address of your host (specifically, of the virtual bridge device associated with the `alpine1` Docker network), and you have explicitly published container port 8001 as host port 8001. If you don't want containers in `alpine2` to access services in `alpine1`, don't publish ports for those services on the host. Port publishing means, in general, "I want this service to be accessible from everywhere".
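If the port only needs to be reachable from the host itself, a middle ground (a sketch, not from the original answer; the names and values are taken from the question's compose file) is to bind the published port to the host's loopback interface:

```yaml
services:
  app1:
    image: alpine:latest
    container_name: alpine1
    restart: unless-stopped
    command: sleep infinity
    ports:
      # Publish only on the host's loopback interface: the service is
      # reachable at 127.0.0.1:8001 on the host, but the port-forwarding
      # rule no longer matches traffic sent to the bridge gateway
      # (e.g. 172.18.0.1) from containers on other networks.
      - "127.0.0.1:8001:8001"
    networks:
      dnet:

networks:
  dnet:
    driver: bridge
```

Or drop the `ports:` section entirely: containers attached to the same `dnet` network can still reach `app1` by service name, while nothing outside that network can.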