Docker Networking
Author: 张亚飞 | Read Time: 3 minute read | Published: 2020-02-16
Docker Networking
When you inspect the system's network interfaces with the ifconfig command, besides the familiar eth0 you will find many interfaces whose names begin with br- and veth, for example:
$ ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
        ether 02:42:0b:95:1a:be  txqueuelen 0  (Ethernet)
        RX packets 954918  bytes 155774490 (148.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3020131  bytes 10005328259 (9.3 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.16.30.38  netmask 255.255.240.0  broadcast 10.16.31.255
        ether 1a:21:1b:e4:bf:8d  txqueuelen 1000  (Ethernet)
        RX packets 1330880923  bytes 179781559695 (167.4 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 748313990  bytes 140181311616 (130.5 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 39870636  bytes 10570150312 (9.8 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 39870636  bytes 10570150312 (9.8 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

br-6819e94dc842: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.1.1  netmask 255.255.255.0  broadcast 10.10.1.255
        ether 02:42:63:b5:e0:cf  txqueuelen 0  (Ethernet)
        RX packets 3  bytes 158 (158.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 52560  bytes 2838248 (2.7 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vethb561627: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 9a:cf:e3:0a:99:9c  txqueuelen 0  (Ethernet)
        RX packets 3040507  bytes 368674712 (351.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7326510  bytes 599425691 (571.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

...
Notice that the br- interfaces all carry private IPv4 (LAN) addresses, while the veth interfaces carry no IPv4 address at all, only a MAC address and, at most, a link-local IPv6 address.
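This is easy to verify with iproute2 (a sketch; assumes an ip version recent enough to support the type filter):

$ ip -br addr show type bridge   # bridges: IPv4 addresses
$ ip -br addr show type veth     # veth endpoints: link-local IPv6 at most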
How are these interfaces created, and which containers are they associated with?
When you start a container, for example with docker run, Docker automatically creates the br- bridge for the container's network (if it does not already exist) together with a veth pair: one end becomes eth0 inside the container, and the other end stays on the host, plugged into the bridge.
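A quick way to observe this (a sketch; the network name demo_net, the container name demo, and the nginx image are illustrative assumptions):

# create a user-defined network; a br-<network-id> bridge appears on the host
$ docker network create demo_net

# attach a container; a new veth pair is created, with one end on the bridge
$ docker run -d --name demo --network demo_net nginx

# list the bridges and veth endpoints that showed up
$ ip -br link show type bridge
$ ip -br link show type veth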
So how do you map these interfaces to their containers? With only a few containers you can match them by hand, but once there are many it becomes hard to keep track.
First, let's look at the relationship between br- interfaces and Docker networks.
List the Docker networks with docker network ls:
$ docker network ls
NETWORK ID     NAME               DRIVER    SCOPE
345cc188fc64   bridge             bridge    local
226ed112f259   host               host      local
255fbbf894f6   none               null      local
6819e94dc842   vconsole_vcs_net   bridge    local
...
For example, 6819e94dc842 is a network ID and vconsole_vcs_net is that network's name; note that the ID is exactly the suffix of the br-6819e94dc842 interface seen above.
How do we find the containers attached to this network?
$ docker network inspect 6819e94dc842
[
    {
        "Name": "vconsole_vcs_net",
        "Id": "6819e94dc84203a91cce1c59cd0e702710745bfaae60a9d56fddb0711c2dc71d",
        "Containers": {
            "5fb89fb0ee12d0730261cbc72ebe83d9f2ab25c48ddea499454008cfd8172221": {
                "Name": "vcs.api",
                "EndpointID": "aeb29a292b22c5d5f7b5b37d312e153d511df3e33a27149e64ebd3cfbbf325b4",
                "MacAddress": "02:42:0a:0a:01:02",
                "IPv4Address": "10.10.1.2/24",
                "IPv6Address": ""
            }
        }
    }
]
Under Containers there is a container with ID 5fb89fb0ee12d0730261cbc72ebe83d9f2ab25c48ddea499454008cfd8172221.
Look it up with docker ps -a:
CONTAINER ID   IMAGE                                              COMMAND                  CREATED        STATUS        PORTS                    NAMES
5fb89fb0ee12   harbor.baijiayun.com/bdata/vconsole:release-test   "/app/vcs -env=test …"   23 hours ago   Up 23 hours   0.0.0.0:8181->8000/tcp   vcs.api
...
Putting it together: the host interface br-6819e94dc842 maps to Docker network 6819e94dc842 (vconsole_vcs_net), and that network is attached to container 5fb89fb0ee12, i.e. the vcs.api container.
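The whole chain can also be collapsed into a single command with a Go template (a sketch; the template below only extracts container names and is an assumption about which fields matter to you):

# print the names of all containers attached to a network
$ docker network inspect 6819e94dc842 --format '{{range .Containers}}{{.Name}} {{end}}'
vcs.api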
Next, let's look at the relationship between veth interfaces and Docker containers.
The ip a command shows which bridge each veth interface is attached to, via the master field:
$ ip a | egrep veth
1872: vethb561627@if1871: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-6819e94dc842 state UP group default
...
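To go one step further and tie a veth interface to a specific container, you can compare interface indexes: inside the container, eth0 records the index of its host-side peer in iflink (a sketch; the container name vcs.api and the index 1872 are taken from the example above):

# inside the container: the interface index of eth0's host-side peer
$ docker exec vcs.api cat /sys/class/net/eth0/iflink
1872

# on the host: the veth with that index is the container's peer
$ ip -o link | grep '^1872:'
1872: vethb561627@if1871: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...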
The dockerveth script automates exactly this lookup:
$ sudo ./dockerveth.sh
CONTAINER ID   VETH          NAMES
5fb89fb0ee12   vethb561627   vcs.api
...
Docker Subnet Configuration
Docker's subnet configuration lives mainly in the bip and default-address-pools settings in /etc/docker/daemon.json. bip only applies to containers started directly with the docker command (i.e. on the default bridge); it has no effect on containers started by docker-compose. For those you must add default-address-pools, where base defines the overall address pool and its netmask, and size is the netmask length of each subnet carved out of the pool. In the configuration below, the base 10.250.0.1/16 means the pool is a /16, which with a size of 24 can be split into 256 /24 subnets.
- /etc/docker/daemon.json
{
    "bip": "192.168.250.1/24",
    "default-address-pools": [
        { "base": "10.250.0.1/16", "size": 24 }
    ],
    "registry-mirrors": ["http://hub-mirror.c.163.com"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": { "max-size": "100m", "max-file": "5" }
}
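The daemon must be restarted for this to take effect; new networks are then carved out of the pool. A quick check (a sketch; the network name pool-test is an illustrative assumption):

$ sudo systemctl restart docker
$ docker network create pool-test
$ docker network inspect pool-test --format '{{(index .IPAM.Config 0).Subnet}}'
10.250.0.0/24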
Docker buildx Build Failure
One day a docker buildx build failed while pulling an image, with a DNS resolution error:
ERROR: failed to solve: harbor.baijiayun.com/docker-proxy/library/ubuntu:23.10: failed to do request: Head "https://harbor.baijiayun.com/v2/docker-proxy/library/ubuntu/manifests/23.10": dial tcp: lookup harbor.baijiayun.com on 202.106.0.20:53: read udp 192.168.0.2:53851->202.106.0.20:53: i/o timeout
This is an Ubuntu machine on which builds had always worked; the system uses the systemd-resolved service to manage DNS resolution.
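You can ask systemd-resolved directly which upstream servers it is using (a sketch; on older releases the command is systemd-resolve --status instead of resolvectl):

$ resolvectl status | grep -A 2 'DNS Servers'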
Check the resolver configuration that systemd-resolved generates at /run/systemd/resolve/resolv.conf:
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 202.106.196.115
nameserver 202.106.0.20
Pinging the DNS server produced no replies, though this alone is inconclusive: many DNS servers simply drop ICMP.
$ ping 202.106.0.20
PING 202.106.0.20 (202.106.0.20) 56(84) bytes of data.
Suspecting a faulty DNS server, I tried switching the configuration to other DNS servers, but none of that helped.
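(For reference, on a systemd-resolved system the upstream servers are changed in /etc/systemd/resolved.conf; the server 223.5.5.5 below is just an example:)

# /etc/systemd/resolved.conf
[Resolve]
DNS=223.5.5.5

$ sudo systemctl restart systemd-resolved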
Querying that same server directly with dig, however, showed that the DNS service was actually fine:

$ dig -t A harbor.baijiayun.com @202.106.0.20
; <<>> DiG 9.16.1-Ubuntu <<>> -t A harbor.baijiayun.com @202.106.0.20
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10140
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;harbor.baijiayun.com. IN A
;; ANSWER SECTION:
harbor.baijiayun.com. 60 IN A 114.67.227.74
;; Query time: 3 msec
;; SERVER: 202.106.0.20#53(202.106.0.20)
;; WHEN: Fri Nov 03 15:20:35 CST 2023
;; MSG SIZE rcvd: 54
In the end, restarting docker solved the problem. A plausible explanation lies in the original error message: the failing lookup was sent from 192.168.0.2, an address in a Docker-side private range, so it most likely came from inside the BuildKit environment rather than from the host, which is why host-side dig worked while the build did not; restarting the daemon recreated that environment with a working resolver.
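A minimal recovery sequence (a sketch; the builder name mybuilder is a hypothetical example, and the last two commands are only needed if you use a named docker-container buildx builder):

$ sudo systemctl restart docker

# recreate the buildx builder so buildkitd picks up the current DNS configuration
$ docker buildx rm mybuilder
$ docker buildx create --name mybuilder --use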