Docker container networking

Thoughts on container networking

I have been learning Docker on and off for more than half a year now. Using containers can feel like magic: a few simple commands and an application is up and reachable. That is fine if you understand what happens underneath, but if you don't, it is dangerous, because you may never get around to figuring out how it actually works. Remember that a container is just a restricted, isolated Linux process; you don't need an image to run a container, but you do need to run a container to build an image. I also recommend getting a small server on Alibaba Cloud or another cloud provider: it is convenient, it greatly speeds up learning, and you don't have to worry about a broken local environment. During promotions a 2-core/4 GB instance costs around 80 CNY per year; I bought mine on Alibaba Cloud during the Double 11 sale for 79 CNY per year.

To understand container networking, we first need to think about a few questions:

  • How do we virtualize network resources so that every container gets its own dedicated network stack?
  • How do we turn containers into friendly neighbours, keeping them from interfering with each other while letting them communicate properly?
  • How does a container reach the outside world (e.g. the Internet)?
  • How does the outside world reach a container running on the machine (a.k.a. port publishing)?

Learning about Linux virtual networking involves the following building blocks, all of which are relevant to container networking:

  • network namespaces (netns)
  • virtual Ethernet devices (veth)
  • virtual network switches (bridges)
  • IP routing and network address translation (NAT)

Base environment

[root@centos-master ~]# uname -a
Linux centos-master 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

Isolating containers with network namespaces

What does a Linux "network" consist of? A set of network devices, a set of routing rules, and a set of iptables rules, i.e. the firewall.

View the network-related information:

[root@centos-master ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:16:3e:18:86:62 brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT 
    link/ether 02:42:08:26:fb:4e brd ff:ff:ff:ff:ff:ff
4: br-f17c4065f2b9: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT 
    link/ether 02:42:a4:62:62:84 brd ff:ff:ff:ff:ff:ff
6: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether c6:64:9c:a5:93:af brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@centos-master ~]# 

ip route

[root@centos-master ~]# ip route
default via 172.26.63.253 dev eth0 
169.254.0.0/16 dev eth0  scope link  metric 1002 
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1 
172.18.0.0/24 dev veth0  proto kernel  scope link  src 172.18.0.11 
172.18.0.0/16 dev br-f17c4065f2b9  proto kernel  scope link  src 172.18.0.1 
172.26.0.0/18 dev eth0  proto kernel  scope link  src 172.26.2.154 
[root@centos-master ~]# 

iptables

[root@centos-master ~]# iptables --list-rule
-A FORWARD -i br-f17c4065f2b9 ! -o br-f17c4065f2b9 -j ACCEPT
-A FORWARD -i br-f17c4065f2b9 -o br-f17c4065f2b9 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-f17c4065f2b9 ! -o br-f17c4065f2b9 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-f17c4065f2b9 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
[root@centos-master ~]# 

The host network information above is shown mainly so that we can later verify that each container we create gets a fully independent network.

Linux namespaces are the technology used to isolate container networks. A network namespace is logically another copy of the network stack, with its own routes, firewall rules, and network devices.

How do we create a network namespace? With the ip tool:

[root@centos-master ~]# ip netns add netns1
[root@centos-master ~]# ip netns
netns1
netns0 (id: 0)
[root@centos-master ~]# 

How do we use the newly created netns1 namespace? Linux provides the nsenter command, which enters the specified namespace; when you are done, exit leaves it again.

[root@centos-master ~]# nsenter --net=/var/run/netns/netns1 
[root@centos-master ~]# ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[root@centos-master ~]# ip route
[root@centos-master ~]# iptables --list-rule
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
[root@centos-master ~]# 

The output above shows that netns1 is a completely independent network: no routing rules, no custom iptables rules, and only a single loopback device, lo.
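
Besides nsenter, the same checks can be done from the host with ip netns exec, which runs a single command inside a namespace without opening an interactive shell. For example (output omitted, equivalent to the checks above):

# One-off commands inside netns1, run from the root namespace
ip netns exec netns1 ip link
ip netns exec netns1 ip route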

[Figure: ns1.png]

Connecting containers to the host with virtual Ethernet devices (veth)

A network that cannot be reached is usually of little practical use. Linux provides a tool for this: the virtual Ethernet device, veth. "veth devices are virtual Ethernet devices. They can act as tunnels between network namespaces to create a bridge to a physical network device in another namespace, but they can also be used as standalone network devices."

veth devices always come in pairs. Create the pair veth1/ceth1:

[root@centos-master ~]# ip link add veth1 type veth peer name ceth1
[root@centos-master ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:16:3e:18:86:62 brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT 
    link/ether 02:42:08:26:fb:4e brd ff:ff:ff:ff:ff:ff
4: br-f17c4065f2b9: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT 
    link/ether 02:42:a4:62:62:84 brd ff:ff:ff:ff:ff:ff
6: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether c6:64:9c:a5:93:af brd ff:ff:ff:ff:ff:ff link-netnsid 0
7: ceth1@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 1a:f9:c5:ac:d7:aa brd ff:ff:ff:ff:ff:ff
8: veth1@ceth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether ca:53:19:f9:84:ce brd ff:ff:ff:ff:ff:ff
[root@centos-master ~]# 

We now have a connected pair of virtual Ethernet devices, veth1 and ceth1, but both ends still live in the host's network namespace. How do we link the host namespace with the netns1 namespace we created? Keep one end of the pair in the host network and move the other into netns1:

[root@centos-master ~]# ip link set ceth1 netns netns1
[root@centos-master ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:16:3e:18:86:62 brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT 
    link/ether 02:42:08:26:fb:4e brd ff:ff:ff:ff:ff:ff
4: br-f17c4065f2b9: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT 
    link/ether 02:42:a4:62:62:84 brd ff:ff:ff:ff:ff:ff
6: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether c6:64:9c:a5:93:af brd ff:ff:ff:ff:ff:ff link-netnsid 0
8: veth1@if7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether ca:53:19:f9:84:ce brd ff:ff:ff:ff:ff:ff link-netnsid 1

Running ip link again shows that ceth1 has disappeared from the host namespace.

Bring the device up and assign an IP address; veth1 is currently DOWN:

[root@centos-master ~]# ip addr add 170.10.0.11/24 dev veth1
[root@centos-master ~]# ip link set veth1 up
[root@centos-master ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:16:3e:18:86:62 brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT 
    link/ether 02:42:08:26:fb:4e brd ff:ff:ff:ff:ff:ff
4: br-f17c4065f2b9: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT 
    link/ether 02:42:a4:62:62:84 brd ff:ff:ff:ff:ff:ff
6: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether c6:64:9c:a5:93:af brd ff:ff:ff:ff:ff:ff link-netnsid 0
8: veth1@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether ca:53:19:f9:84:ce brd ff:ff:ff:ff:ff:ff link-netnsid 1

Enter the netns1 namespace:

[root@centos-master ~]# nsenter --net=/var/run/netns/netns1 
[root@centos-master ~]# ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
7: ceth1@if8: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 1a:f9:c5:ac:d7:aa brd ff:ff:ff:ff:ff:ff link-netnsid 0

Bring up ceth1 and assign it an IP:

[root@centos-master ~]# ip link set ceth1 up
[root@centos-master ~]# ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
7: ceth1@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether 1a:f9:c5:ac:d7:aa brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@centos-master ~]# ip addr add 170.10.0.10/24 dev ceth1

The network now looks like this:

[Figure: ns2.png]

Connectivity check: from inside the container, reach the host-side veth1 address

[root@centos-master ~]# ping -c 2 170.10.0.11
PING 170.10.0.11 (170.10.0.11) 56(84) bytes of data.
64 bytes from 170.10.0.11: icmp_seq=1 ttl=64 time=0.070 ms
64 bytes from 170.10.0.11: icmp_seq=2 ttl=64 time=0.086 ms
​
--- 170.10.0.11 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.070/0.078/0.086/0.008 ms
[root@centos-master ~]# 

From the host, reach the address inside the container

[root@centos-master ~]# ping -c 3 170.10.0.10
PING 170.10.0.10 (170.10.0.10) 56(84) bytes of data.
64 bytes from 170.10.0.10: icmp_seq=1 ttl=64 time=0.079 ms
64 bytes from 170.10.0.10: icmp_seq=2 ttl=64 time=0.044 ms
64 bytes from 170.10.0.10: icmp_seq=3 ttl=64 time=0.084 ms
​
--- 170.10.0.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.044/0.069/0.084/0.017 ms
[root@centos-master ~]# 

Can netns1 reach any other network?

[root@centos-master ~]# nsenter --net=/var/run/netns/netns1
[root@centos-master ~]# ping -c 1 10.2.1.1
connect: Network is unreachable
[root@centos-master ~]# 

Why can't netns1 reach other networks? Mainly because its routing table is not configured: right after netns1 was created there were no routes at all, and after assigning the address it contains only:

[root@centos-master ~]# ip route
170.10.0.0/24 dev ceth1  proto kernel  scope link  src 170.10.0.10 

Linux has several ways to populate the routing table. One of them is to derive routes from directly connected network interfaces. Remember that the routing table in netns1 was empty right after the namespace was created. We then added the ceth1 device and assigned it the IP address 170.10.0.10/24. Because we supplied not just a plain IP address but an address plus a network mask, the network stack was able to derive a route from it: every packet destined for the 170.10.0.0/24 network will be sent out through the ceth1 device, while any other packet is dropped ("Network is unreachable").
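
A quick way to see which route (if any) a destination would match is ip route get. Still inside netns1, an address inside 170.10.0.0/24 resolves to the route derived from ceth1, while anything else fails because no route, not even a default one, matches (a small sketch; output omitted):

# Which route would be used for each destination?
ip route get 170.10.0.11   # resolved through the route derived from ceth1
ip route get 10.2.1.1      # fails with "Network is unreachable" (no matching route)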

Interconnecting containers with a virtual network switch (bridge)

What happens if we add more than one container on the same host?

[root@centos-master ~]# ip netns add netns2
[root@centos-master ~]# ip netns
netns2
netns1 (id: 1)
netns0 (id: 0)
# root network namespace
[root@centos-master ~]# ip link add veth2 type veth peer name ceth2
[root@centos-master ~]# ip link set ceth2 netns netns2
[root@centos-master ~]# ip link set veth2 up
[root@centos-master ~]# ip addr add 170.10.0.21/24 dev veth2
#netns2 network namespace
[root@centos-master ~]# nsenter --net=/var/run/netns/netns2
[root@centos-master ~]# ip link set ceth2 up
[root@centos-master ~]# ip addr add 170.10.0.20/24 dev ceth2
[root@centos-master ~]# ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
9: ceth2@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether 5e:84:44:7c:b5:38 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Connectivity check

  • From inside netns2, reach the host-side veth2 address
[root@centos-master ~]# ping -c 2 170.10.0.21
PING 170.10.0.21 (170.10.0.21) 56(84) bytes of data.
​
--- 170.10.0.21 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms
​
#root ns namespace
[root@centos-master ~]# ping -c 2 170.10.0.20
PING 170.10.0.20 (170.10.0.20) 56(84) bytes of data.
From 170.10.0.11 icmp_seq=1 Destination Host Unreachable
From 170.10.0.11 icmp_seq=2 Destination Host Unreachable
​
--- 170.10.0.20 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 999ms
pipe 2
[root@centos-master ~]# 
​

Something interesting happens: the container and the host cannot reach each other. What is going on? Why did netns1 connect fine, while netns2, created in exactly the same way, does not?

[Figure: ns3.png]

Check the host routing table (ip route)

[root@centos-master ~]# ip route
default via 172.26.63.253 dev eth0 
169.254.0.0/16 dev eth0  scope link  metric 1002 
170.10.0.0/24 dev veth1  proto kernel  scope link  src 170.10.0.11 
170.10.0.0/24 dev veth2  proto kernel  scope link  src 170.10.0.21 
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1 
172.18.0.0/24 dev veth0  proto kernel  scope link  src 172.18.0.11 
172.18.0.0/16 dev br-f17c4065f2b9  proto kernel  scope link  src 172.18.0.1 
172.26.0.0/18 dev eth0  proto kernel  scope link  src 172.26.2.154 
[root@centos-master ~]# 

170.10.0.0/24 dev veth1  proto kernel  scope link  src 170.10.0.11 
170.10.0.0/24 dev veth2  proto kernel  scope link  src 170.10.0.21

The routes conflict: when veth2 was added, a route for 170.10.0.0/24 already existed. Traffic to the second container therefore matches the first route (via veth1) and never goes out through veth2, which breaks the connection. Deleting the veth1 route, ip route delete 170.10.0.0/24 dev veth1 proto kernel scope link src 170.10.0.11, works around it, as shown after the sketch below.
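
You can check which of the two conflicting routes the kernel actually selects with ip route get; for 170.10.0.20 it picks the first matching entry, the one via veth1, which is why the replies never reach ceth2 (a quick sketch, run on the host; output omitted):

# Both 170.10.0.0/24 routes match; the kernel uses the first one (dev veth1)
ip route get 170.10.0.20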

Connectivity check

[root@centos-master ~]# ip route delete 170.10.0.0/24 dev veth1  proto kernel  scope link  src 170.10.0.11 
[root@centos-master ~]# ping -c 2 170.10.0.21
PING 170.10.0.21 (170.10.0.21) 56(84) bytes of data.
64 bytes from 170.10.0.21: icmp_seq=1 ttl=64 time=0.075 ms
64 bytes from 170.10.0.21: icmp_seq=2 ttl=64 time=0.042 ms
​
--- 170.10.0.21 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.042/0.058/0.075/0.018 ms

netns2 now works, but netns1's network is broken.

We need a different approach to solve this problem.

The Linux bridge is yet another virtualized networking facility. It behaves like a network switch: it forwards packets between the interfaces connected to it. And because it is a switch, it works at L2, i.e. the Ethernet level.

Reset the environment by deleting the namespaces created so far:

[root@centos-master ~]# ip netns delete netns0
[root@centos-master ~]# ip netns delete netns1
[root@centos-master ~]# ip netns delete netns2
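
Deleting a namespace also destroys the veth end that was moved into it, and its peer on the host disappears with it, so no separate ip link delete is needed. A quick check (should print nothing):

# The veth/ceth pairs from the previous experiments are gone
ip link | grep -E 'veth[12]|ceth[12]'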

Quickly recreate the two containers with new veth1 and veth2 devices; this time no IP addresses are assigned to the host-side ends, only ceth1/ceth2 inside the namespaces get addresses:

[root@centos-master ~]# ip netns add netns1
[root@centos-master ~]# ip link add veth1 type veth peer name ceth1
[root@centos-master ~]# ip link set veth1 up
[root@centos-master ~]# ip link set ceth1 netns netns1
[root@centos-master ~]# nsenter --net=/var/run/netns/netns1
[root@centos-master ~]# ip link set lo up
[root@centos-master ~]# ip link set ceth1 up
[root@centos-master ~]# ip addr add 172.10.0.10/16 dev ceth1
[root@centos-master ~]# exit
#add netns2
[root@centos-master ~]# ip netns add netns2
[root@centos-master ~]# ip link add veth2 type veth peer name ceth2
[root@centos-master ~]# ip link set veth2 up
[root@centos-master ~]# ip link set ceth2 netns netns2
[root@centos-master ~]# nsenter --net=/var/run/netns/netns2
[root@centos-master ~]# ip link set lo up
[root@centos-master ~]# ip link set ceth2 up
[root@centos-master ~]# ip addr add 172.10.0.20/16 dev ceth2
[root@centos-master ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
13: ceth2@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether ba:00:4a:c1:a4:13 brd ff:ff:ff:ff:ff:ff link-netnsid 0

This time the host has no corresponding routes:

[root@centos-master ~]# ip route
default via 172.26.63.253 dev eth0 
169.254.0.0/16 dev eth0  scope link  metric 1002 
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1 
172.18.0.0/16 dev br-f17c4065f2b9  proto kernel  scope link  src 172.18.0.1 
172.26.0.0/18 dev eth0  proto kernel  scope link  src 172.26.2.154 
[root@centos-master ~]# 

Create a bridge interface

[root@centos-master ~]# ip link add br0 type bridge
[root@centos-master ~]# ip link set br0 up
[root@centos-master ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:16:3e:18:86:62 brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT 
    link/ether 02:42:08:26:fb:4e brd ff:ff:ff:ff:ff:ff
4: br-f17c4065f2b9: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT 
    link/ether 02:42:a4:62:62:84 brd ff:ff:ff:ff:ff:ff
12: veth1@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether a2:ac:85:f4:11:09 brd ff:ff:ff:ff:ff:ff link-netnsid 0
14: veth2@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether 4e:65:95:64:91:77 brd ff:ff:ff:ff:ff:ff link-netnsid 2
15: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1000
    link/ether 66:7d:2c:52:9f:b3 brd ff:ff:ff:ff:ff:ff

Now attach the veth1 and veth2 devices to the bridge:

[root@centos-master ~]# ip link set veth1 master br0
[root@centos-master ~]# ip link set veth2 master br0
[root@centos-master ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:16:3e:18:86:62 brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT 
    link/ether 02:42:08:26:fb:4e brd ff:ff:ff:ff:ff:ff
4: br-f17c4065f2b9: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT 
    link/ether 02:42:a4:62:62:84 brd ff:ff:ff:ff:ff:ff
12: veth1@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT qlen 1000
    link/ether a2:ac:85:f4:11:09 brd ff:ff:ff:ff:ff:ff link-netnsid 0
14: veth2@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT qlen 1000
    link/ether 4e:65:95:64:91:77 brd ff:ff:ff:ff:ff:ff link-netnsid 2
15: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT qlen 1000
    link/ether 4e:65:95:64:91:77 brd ff:ff:ff:ff:ff:ff
[root@centos-master ~]# 
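
The bridge's ports can also be listed with the bridge utility from iproute2 to confirm that both veth ends are attached (the legacy brctl show from bridge-utils shows the same, if installed):

# List bridge port attachments; veth1 and veth2 should show "master br0"
bridge link show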

[Figure: ns4.png]

The two containers can now communicate with each other.
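
For example, a quick check from the root namespace (output omitted):

# From netns1, ping ceth2's address in netns2; the frames travel through br0
ip netns exec netns1 ping -c 2 172.10.0.20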

Note that with this new approach we have not configured veth1 and veth2 at all; the only two IP addresses we assigned live on the ceth1 and ceth2 ends.

Connecting to the outside world (IP routing)

The containers can now reach each other, but they cannot communicate with the host yet, mainly because there is still no route for that.

To establish connectivity between the root namespace and the container namespaces, we need to assign an IP address to the bridge interface:

[root@centos-master ~]# ip addr add 172.10.0.1/16 dev br0
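
With an address on br0, the host can now reach both containers over the bridge. For the containers to send traffic to anything outside 172.10.0.0/16, each namespace additionally needs a default route pointing at the bridge address; this step is not part of the captured session above, so treat it as a sketch:

# Host -> container now works through the bridge
ping -c 2 172.10.0.10

# Container -> outside: add a default route via the bridge IP in each namespace
ip netns exec netns1 ip route add default via 172.10.0.1
ip netns exec netns2 ip route add default via 172.10.0.1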

If a container sends a packet to the outside world, the destination server will not be able to send the reply back, because the container's IP address is private: routes for that address are known only to the local network, and many containers around the world share exactly the same private address, such as our 172.10.0.10. The solution to this problem is Network Address Translation (NAT): before a packet originated by a container enters the external network, its source IP address is replaced with the address of the host's external interface. The host also keeps track of all existing mappings and, when replies arrive, restores the original IP address before forwarding the packet back to the container. This is set up with a single command:

[root@centos-master ~]# sudo iptables -t nat -A POSTROUTING -s 172.10.0.0/16 ! -o br0 -j MASQUERADE

This adds a rule to the POSTROUTING chain of the nat table that masquerades all packets originating from the 172.10.0.0/16 network, except those leaving through the bridge interface.
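
Note that the host will only forward the containers' packets to its external interface if IP forwarding is enabled in the kernel (Docker normally turns this on already), and, since recent Docker versions set the FORWARD chain policy to DROP, traffic to and from br0 may also need explicit ACCEPT rules. A reminder rather than part of the original session:

# Make sure IPv4 forwarding is on
sysctl -w net.ipv4.ip_forward=1

# If the FORWARD policy is DROP, allow the bridge traffic explicitly
iptables -A FORWARD -i br0 -j ACCEPT
iptables -A FORWARD -o br0 -j ACCEPT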

External access to containers - port publishing

Publishing a container's port on some (or all) of the host's interfaces is a well-known practice. But what does port publishing actually mean?

It means binding a container port to a host port so that the service inside the container can be reached from outside.
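
In practice this binding can be sketched with iptables: a DNAT rule in the nat table's PREROUTING chain rewrites packets arriving at a chosen host port so they are delivered to the container's address and port. Assuming, purely for illustration, a service listening on 172.10.0.10:5000 inside netns1 that should be reachable on host port 5000, a minimal sketch could look like this (not part of the original session):

# Publish host port 5000 -> 172.10.0.10:5000 (hypothetical service in netns1)
iptables -t nat -A PREROUTING ! -i br0 -p tcp -m tcp --dport 5000 \
    -j DNAT --to-destination 172.10.0.10:5000

This is roughly what Docker configures in its own DOCKER nat chain when a container is started with -p 5000:5000.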