Verifying Docker Knowledge Against a Real Host


Introduction

After writing a fair amount of Docker-related code in the previous articles, we have a reasonable grasp of the fundamentals. The goal of this article is to verify how that knowledge shows up in Docker itself on a real host.

Single-Host Basics

We have a MongoDB container running on the machine; the currently running containers look like this:

➜  ~ docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED        STATUS        PORTS                                           NAMES
c05db355423d   mongo                 "docker-entrypoint.s…"   6 months ago   Up 6 months   0.0.0.0:27017->27017/tcp, :::27017->27017/tcp   mongo

Its ID is c05db355423d. In our docker demo we used the container ID as the identifier in all kinds of paths, so let's search the host for this ID and see what turns up:

➜  ~ find / -name "*c05db355423d*"
/run/docker/runtime-runc/moby/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/run/docker/containerd/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/run/containerd/io.containerd.runtime.v2.task/moby/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/sys/fs/cgroup/hugetlb/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/sys/fs/cgroup/blkio/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/sys/fs/cgroup/memory/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/sys/fs/cgroup/cpuset/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/sys/fs/cgroup/pids/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/sys/fs/cgroup/freezer/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/sys/fs/cgroup/perf_event/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/sys/fs/cgroup/devices/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/sys/fs/cgroup/cpu,cpuacct/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/sys/fs/cgroup/net_cls,net_prio/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/sys/fs/cgroup/systemd/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/var/lib/containerd/io.containerd.runtime.v2.task/moby/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/var/lib/docker/containers/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/var/lib/docker/containers/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668-json.log
/var/lib/docker/image/overlay2/layerdb/mounts/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
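A small detail worth noting: the paths all use the full 64-character container ID, and the c05db355423d printed by docker ps is just a prefix of it. Docker can print the full ID directly if you prefer not to copy it out of a path:

# Full ID of a single container (here the existing mongo container)
docker inspect --format '{{.Id}}' mongo

# Or list all running containers with untruncated IDs
docker ps --no-trunc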

Resource Limits

Quite a few familiar things show up right away, for example the memory and CPU cgroup directories:

/sys/fs/cgroup/memory/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/sys/fs/cgroup/cpuset/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668
/sys/fs/cgroup/cpu,cpuacct/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668

In our earlier demo we created subdirectories under these very same paths, named mydocker plus the container name. Nice, a familiar flavor.
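For reference, here is a minimal sketch of that mechanism, assuming a cgroup v1 host and a made-up subgroup name mydocker-test (roughly what the demo did by hand, not something Docker itself creates):

# Create a child cgroup under the v1 memory hierarchy
mkdir /sys/fs/cgroup/memory/mydocker-test

# Cap its memory at 100 MB
echo 104857600 > /sys/fs/cgroup/memory/mydocker-test/memory.limit_in_bytes

# Move the current shell into the cgroup; child processes inherit the limit
echo $$ > /sys/fs/cgroup/memory/mydocker-test/tasks

# Clean up when done (the cgroup must have no tasks left in it)
# rmdir /sys/fs/cgroup/memory/mydocker-test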

As for the other entries, we don't know yet what they are for; look them up later if you're curious. For now, let's see whether other containers also live under this directory:

➜  ~ tree /sys/fs/cgroup/memory/docker
/sys/fs/cgroup/memory/docker
├── 0cf1ae82601a1fafa31ccaa50e7416f7beaa3764e39375c85103d135b07d7677
│   ├── cgroup.clone_children
│   ├── cgroup.event_control
│   ├── cgroup.procs
│   ├── memory.failcnt
│   ├── memory.force_empty
│   ├── memory.kmem.failcnt
│   ├── memory.kmem.limit_in_bytes
│   ├── memory.kmem.max_usage_in_bytes
│   ├── memory.kmem.slabinfo
│   ├── memory.kmem.tcp.failcnt
│   ├── memory.kmem.tcp.limit_in_bytes
│   ├── memory.kmem.tcp.max_usage_in_bytes
│   ├── memory.kmem.tcp.usage_in_bytes
│   ├── memory.kmem.usage_in_bytes
│   ├── memory.limit_in_bytes
│   ├── memory.max_usage_in_bytes
│   ├── memory.memsw.failcnt
│   ├── memory.memsw.limit_in_bytes
│   ├── memory.memsw.max_usage_in_bytes
│   ├── memory.memsw.usage_in_bytes
│   ├── memory.move_charge_at_immigrate
│   ├── memory.numa_stat
│   ├── memory.oom_control
│   ├── memory.pressure_level
│   ├── memory.soft_limit_in_bytes
│   ├── memory.stat
│   ├── memory.swappiness
│   ├── memory.usage_in_bytes
│   ├── memory.use_hierarchy
│   ├── notify_on_release
│   └── tasks
├── 74571d1498bdba0e7e1a684af1e2bb7bc77cee0583147543a33591dd0ffe033e
│   ├── cgroup.clone_children
│   ├── cgroup.event_control
│   ├── cgroup.procs
│   ├── memory.failcnt
│   ├── memory.force_empty
│   ├── memory.kmem.failcnt
│   ├── memory.kmem.limit_in_bytes
│   ├── memory.kmem.max_usage_in_bytes
│   ├── memory.kmem.slabinfo
│   ├── memory.kmem.tcp.failcnt
│   ├── memory.kmem.tcp.limit_in_bytes
│   ├── memory.kmem.tcp.max_usage_in_bytes
│   ├── memory.kmem.tcp.usage_in_bytes
│   ├── memory.kmem.usage_in_bytes
│   ├── memory.limit_in_bytes
│   ├── memory.max_usage_in_bytes
│   ├── memory.memsw.failcnt
│   ├── memory.memsw.limit_in_bytes
│   ├── memory.memsw.max_usage_in_bytes
│   ├── memory.memsw.usage_in_bytes
│   ├── memory.move_charge_at_immigrate
│   ├── memory.numa_stat
│   ├── memory.oom_control
│   ├── memory.pressure_level
│   ├── memory.soft_limit_in_bytes
│   ├── memory.stat
│   ├── memory.swappiness
│   ├── memory.usage_in_bytes
│   ├── memory.use_hierarchy
│   ├── notify_on_release
│   └── tasks

Just as expected: every container gets its own subdirectory with the same set of memory control files.
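As a quick sanity check (a sketch, using the full mongo container ID found earlier), the limit and usage files under that directory line up with what docker stats reports; if the container was started without -m, the limit shows the cgroup v1 "no limit" default:

# Memory limit applied to the mongo container
cat /sys/fs/cgroup/memory/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668/memory.limit_in_bytes

# Current memory usage, roughly what docker stats shows for the same container
cat /sys/fs/cgroup/memory/docker/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668/memory.usage_in_bytes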

Container Logs

Continuing down the list, we also see this entry:

/var/lib/docker/containers/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668-json.log

It looks like a log file; let's check its contents:

cat /var/lib/docker/containers/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668-json.log

{"t":{"$date":"2022-04-24T21:37:27.878+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1650836247:878115][1:0x7fbb84078700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 684686, snapshot max: 684686 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1"}}

The content matches what docker logs mongo prints, so this is indeed the container's log file.
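Instead of hunting for this path with find, Docker will also tell you where a container's log file lives, so we can cross-check (LogPath is part of the docker inspect output for containers using the default json-file logging driver):

# Ask Docker where the mongo container's json log is stored
docker inspect --format '{{.LogPath}}' mongo

# Tailing that file should show the same lines as: docker logs --tail 5 mongo
tail -n 5 "$(docker inspect --format '{{.LogPath}}' mongo)"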

The other containers should have their own log files under this directory too; let's take a look:

➜  ~ tree /var/lib/docker/containers
/var/lib/docker/containers
├── 0cf1ae82601a1fafa31ccaa50e7416f7beaa3764e39375c85103d135b07d7677
│   ├── 0cf1ae82601a1fafa31ccaa50e7416f7beaa3764e39375c85103d135b07d7677-json.log
│   ├── checkpoints
│   ├── config.v2.json
│   ├── hostconfig.json
│   ├── hostname
│   ├── hosts
│   ├── mounts
│   ├── resolv.conf
│   └── resolv.conf.hash
└── c4be1d94fef7d177e5c5a72236d5d55881338222ef03acf4d151bb6167f3bed1
    ├── c4be1d94fef7d177e5c5a72236d5d55881338222ef03acf4d151bb6167f3bed1-json.log
    ├── checkpoints
    ├── config.v2.json
    ├── hostconfig.json
    ├── hostname
    ├── hosts
    ├── mounts
    │   └── shm
    ├── resolv.conf
    └── resolv.conf.hash

The structure above confirms it: each container gets its own directory containing an <id>-json.log file along with its config.v2.json, hostconfig.json, hostname, hosts, and resolv.conf files.
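Those config files are easy to cross-check as well; hostconfig.json, for instance, is the on-disk copy of what docker inspect reports as the HostConfig section. A quick comparison (assuming the mongo container's directory has the same layout as the ones listed above):

# Pretty-print the on-disk host config of the mongo container
python3 -m json.tool /var/lib/docker/containers/c05db355423dc43607cea24dcb13a4d38c924ce27801b7b230a1d290ec6bb668/hostconfig.json

# Compare with what Docker itself reports for the same container
docker inspect --format '{{json .HostConfig}}' mongo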

Networking

First start a plain CentOS 8 container to experiment with:

docker run -tid --name centos8 centos:8 bash

Now trace the route to a public address from inside the container; the very first hop is 172.17.0.1, which is the docker0 bridge on the host:

$ docker exec -ti centos8 traceroute 114.114.114.114
traceroute to 114.114.114.114 (114.114.114.114), 30 hops max, 60 byte packets
 1  172.17.0.1 (172.17.0.1)  0.046 ms  0.007 ms  0.005 ms
 2  192.168.1.1 (192.168.1.1)  4.210 ms  4.245 ms  4.235 ms
 3  100.64.0.1 (100.64.0.1)  7.034 ms  7.052 ms  7.044 ms
 4  118.116.49.185 (118.116.49.185)  12.602 ms  12.624 ms *
 5  171.208.196.5 (171.208.196.5)  7.995 ms 171.208.199.249 (171.208.199.249)  9.718 ms 61.139.121.49 (61.139.121.49)  8.169 ms
 6  202.97.76.138 (202.97.76.138)  36.394 ms * *
 7  * * *
 8  * * *
 9  * * *
10  * * *
11  * * *
12  * * *
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
Back on the host, ip addr shows the docker0 bridge and the host end of the container's veth pair:

$ ip addr
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:d4:fb:76:d9 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:d4ff:fefb:76d9/64 scope link
       valid_lft forever preferred_lft forever
6: veth0cf8b9a@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 42:da:8b:c0:20:7f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::40da:8bff:fec0:207f/64 scope link
       valid_lft forever preferred_lft forever

Inside the container there is only lo and eth0. The container's eth0@if6 (interface index 5) and the host's veth0cf8b9a@if5 (interface index 6) reference each other, i.e. they are the two ends of a single veth pair:

$ docker exec -ti centos8 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
Finally, the NAT table: the MASQUERADE rule in POSTROUTING covers the 172.17.0.0/16 bridge subnet, which is what lets traffic from containers reach the outside world:

$ iptables -t nat -nL --line-numbers
Chain PREROUTING (policy ACCEPT)
num  target     prot opt source               destination
1    DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination
1    DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
num  target     prot opt source               destination
1    MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0

Chain DOCKER (2 references)
num  target     prot opt source               destination
1    RETURN     all  --  0.0.0.0/0            0.0.0.0/0
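To tie this back to the earlier demos, the picture above can be reproduced by hand with a few commands. A minimal sketch, assuming a test bridge br-test and a network namespace ns-test (neither is created by Docker; run as root, and net.ipv4.ip_forward must be 1 for traffic to leave the host):

# Create a bridge with its own subnet, like docker0 with 172.17.0.1/16
ip link add br-test type bridge
ip addr add 172.30.0.1/24 dev br-test
ip link set br-test up

# Create a veth pair; keep one end on the bridge, move the other into a netns
ip netns add ns-test
ip link add veth-host type veth peer name veth-ctr
ip link set veth-host master br-test
ip link set veth-host up
ip link set veth-ctr netns ns-test

# Configure the "container" end and route everything via the bridge
ip netns exec ns-test ip addr add 172.30.0.2/24 dev veth-ctr
ip netns exec ns-test ip link set veth-ctr up
ip netns exec ns-test ip link set lo up
ip netns exec ns-test ip route add default via 172.30.0.1

# Masquerade the subnet on the way out, like Docker's POSTROUTING rule
iptables -t nat -A POSTROUTING -s 172.30.0.0/24 ! -o br-test -j MASQUERADE

With that in place, ip netns exec ns-test ping 114.114.114.114 should behave much like the traceroute from the centos8 container above.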

Summary

That wraps up our quick tour of what Docker leaves behind on the host while it runs. Comparing it with the demo we wrote ourselves, the underlying mechanisms are essentially the same: cgroup directories for resource limits, per-container directories for logs and configuration, and a bridge, veth pairs, and NAT rules for networking.