Introduction to containerlab
Containerlab provides a CLI for orchestrating and managing container-based networking labs. It launches containers, builds virtual wiring between them to form the lab topology of the user's choice, and manages the lifecycle of those labs.
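A lab is described in a single YAML topology file. A minimal sketch of the shape such a file takes (the node names `n1`/`n2` and the `alpine` image are placeholders for illustration, not taken from the shipped examples):

```yaml
name: mylab                 # deployed containers are named clab-<labname>-<node>
topology:
  nodes:
    n1:
      kind: linux           # the kind tells containerlab how to boot and wire the node
      image: alpine:3
    n2:
      kind: linux
      image: alpine:3
  links:
    - endpoints: ["n1:eth1", "n2:eth1"]   # a veth pair between the two nodes
```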
Installation
The environment used for this walkthrough is Ubuntu 22.04.
Add the containerlab package repository and install:
echo "deb [trusted=yes] https://apt.fury.io/netdevops/ /" | \
sudo tee -a /etc/apt/sources.list.d/netdevops.list
sudo apt update && sudo apt install containerlab
Deploying a network with containerlab
After installation, containerlab ships a set of example lab definitions, listed below. Any of these examples can be used to bring up a network environment:
root@node1:~# tree /etc/containerlab/lab-examples/
/etc/containerlab/lab-examples/
├── br01
│ └── br01.clab.yml
├── cert01
│ └── cert01.clab.yml
├── clos01
│ └── clos01.clab.yml
├── clos02
│ ├── clos02.clab.yml
│ ├── configs
│ │ ├── client1.sh
│ │ ├── client2.sh
│ │ ├── client3.sh
│ │ ├── client4.sh
│ │ ├── leaf1.yaml
│ │ ├── leaf2.yaml
│ │ ├── leaf3.yaml
│ │ ├── leaf4.yaml
│ │ ├── spine1.yaml
│ │ ├── spine2.yaml
│ │ ├── spine3.yaml
│ │ ├── spine4.yaml
│ │ ├── superspine1.yaml
│ │ └── superspine2.yaml
│ ├── README.md
│ ├── setup.clos02.clab.yml
│ └── setup.sh
├── clos03
│ ├── cfg-clos.clab.yml
│ ├── cfg-clos__srl.tmpl
│ ├── cfg-clos__vr-sros.tmpl
│ └── README.md
├── cvx01
│ ├── README.md
│ ├── sw1
│ │ └── interfaces
│ ├── sw2
│ │ └── frr.conf
│ └── topo.clab.yml
├── cvx02
│ ├── h1
│ │ └── interfaces
│ ├── README.md
│ ├── sw1
│ │ └── interfaces
│ └── topo.clab.yml
├── frr01
│ ├── frr01.clab.yml
│ ├── PC-interfaces.sh
│ ├── README.md
│ ├── router1
│ │ ├── daemons
│ │ └── frr.conf
│ ├── router2
│ │ ├── daemons
│ │ └── frr.conf
│ ├── router3
│ │ ├── daemons
│ │ └── frr.conf
│ └── run.sh
├── ftdv01
│ └── ftdv01.yml
├── ixiac01
│ ├── go.mod
│ ├── go.sum
│ ├── ipv4_forwarding.go
│ ├── ixiac01.clab.yml
│ └── srl.cfg
├── openbsd01
│ └── openbsd01.yml
├── sonic01
│ └── sonic01.clab.yml
├── srl01
│ └── srl01.clab.yml
├── srl02
│ ├── srl02.clab.yml
│ ├── srl1.cfg
│ └── srl2.cfg
├── srl03
│ └── srl03.clab.yml
├── srlceos01
│ └── srlceos01.clab.yml
├── srlcrpd01
│ └── srlcrpd01.clab.yml
├── srlfrr01
│ ├── daemons
│ ├── frr.cfg
│ ├── srl.cfg
│ └── srlfrr01.clab.yml
├── srl-quickstart
│ ├── srl01.clab.yml
│ └── srl02.clab.yml
├── srlvjunos01
│ ├── srl.cli
│ ├── srlvjunos01.clab.yml
│ └── vjunos.cfg
├── srlvjunos02
│ ├── srl.cli
│ ├── srlvjunos02.clab.yml
│ └── vjunos.cfg
├── srlxrd01
│ ├── srl.cfg
│ ├── srlxrd01.clab.yml
│ └── xrd.cfg
├── templated01
│ ├── configure.sh
│ ├── templated01.clab.gotmpl
│ ├── templated01.clab_vars.yaml
│ └── topology_config.gotmpl
├── templated02
│ ├── configure.sh
│ ├── templated02.clab.gotmpl
│ ├── templated02.clab_vars.yaml
│ └── topology_config.gotmpl
├── vr01
│ ├── srl.cfg
│ ├── sros.cfg
│ └── vr01.clab.yml
├── vr02
│ ├── srl.cfg
│ ├── vmx.cfg
│ └── vr02.clab.yml
├── vr03
│ ├── srl.cfg
│ ├── vr03.clab.yml
│ └── xrv.cfg
├── vr04
│ ├── srl.cfg
│ ├── vr04.clab.yml
│ └── xrv9k.cfg
├── vr05
│ ├── sros4.clab.yml
│ └── vr01.clab.yml
├── vsrx01
│ ├── srx1.txt
│ └── vsrx01.yml
└── vxlan01
├── vxlan-sros.clab.yml
└── vxlan-vmx.clab.yml
Creating a network from the example YAML files
# First, copy the examples into the current user's home directory
root@node1:~# cp -ra /etc/containerlab/lab-examples/ ./clab-examples
Deploy a lab
root@node1:~# cd clab-examples/srl-quickstart/
root@node1:~/clab-examples/srl-quickstart# containerlab deploy -t srl02.clab.yml
INFO[0000] Containerlab v0.49.0 started
INFO[0000] Parsing & checking topology file: srl02.clab.yml
INFO[0000] Creating docker network: Name="clab", IPv4Subnet="172.20.20.0/24", IPv6Subnet="2001:172:20:20::/64", MTU=1500
INFO[0000] Pulling ghcr.io/nokia/srlinux:latest Docker image
INFO[0186] Done pulling ghcr.io/nokia/srlinux:latest
INFO[0186] Creating lab directory: /root/clab-examples/srl-quickstart/clab-srl2
INFO[0186] Creating container: "srl1"
INFO[0186] Creating container: "srl2"
INFO[0187] Creating link: srl1:e1-1 <--> srl2:e1-1
INFO[0188] Creating link: srl1:e1-2 <--> srl2:e1-2
INFO[0189] Running postdeploy actions for Nokia SR Linux 'srl2' node
INFO[0189] Running postdeploy actions for Nokia SR Linux 'srl1' node
INFO[0209] Adding containerlab host entries to /etc/hosts file
INFO[0209] Adding ssh config for containerlab nodes
+---+------+--------------+------------------------------+---------------+---------+----------------+----------------------+
| # | Name | Container ID | Image | Kind | State | IPv4 Address | IPv6 Address |
+---+------+--------------+------------------------------+---------------+---------+----------------+----------------------+
| 1 | srl1 | 1657b69fed19 | ghcr.io/nokia/srlinux:latest | nokia_srlinux | running | 172.20.20.2/24 | 2001:172:20:20::2/64 |
| 2 | srl2 | 6bdbcb359b31 | ghcr.io/nokia/srlinux:latest | nokia_srlinux | running | 172.20.20.3/24 | 2001:172:20:20::3/64 |
+---+------+--------------+------------------------------+---------------+---------+----------------+----------------------+
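Once a lab is up, the node table printed by deploy can be recalled at any time with the inspect subcommand (shown here against the same topology file; `-t` and `--all` are standard containerlab flags):

```shell
# Re-print the node table for this lab
containerlab inspect -t srl02.clab.yml

# Or list the nodes of every lab running on this host
containerlab inspect --all
```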
Destroy the lab
root@node1:~/clab-examples/srl-quickstart# containerlab destroy -t srl02.clab.yml
INFO[0000] Parsing & checking topology file: srl02.clab.yml
INFO[0000] Destroying lab: srl2
INFO[0000] Removed container: srl2
INFO[0000] Removed container: srl1
INFO[0000] Removing containerlab host entries from /etc/hosts file
INFO[0000] Removing ssh config for containerlab nodes
Creating a lab with two directly connected nodes
# veth.yaml
name: veth
topology:
  nodes:
    server:
      kind: linux
      image: registry.cn-beijing.aliyuncs.com/postkarte/k8sutils:v1
      exec:
        - ip addr add 10.1.5.10/24 dev net0
    server2:
      kind: linux
      image: registry.cn-beijing.aliyuncs.com/postkarte/k8sutils:v1
      exec:
        - ip addr add 10.1.5.11/24 dev net0
  links:
    - endpoints: ["server:net0", "server2:net0"]
Apply the YAML file
root@node1:~/my-clab# clab deploy -t veth.yaml
INFO[0000] Containerlab v0.49.0 started
INFO[0000] Parsing & checking topology file: veth.yaml
INFO[0000] Pulling registry.cn-beijing.aliyuncs.com/postkarte/k8sutils:v1 Docker image
INFO[0049] Done pulling registry.cn-beijing.aliyuncs.com/postkarte/k8sutils:v1
INFO[0049] Creating lab directory: /root/my-clab/clab-veth
INFO[0049] Creating container: "server"
INFO[0049] Creating container: "server2"
INFO[0051] Creating link: server:net0 <--> server2:net0
INFO[0051] Adding containerlab host entries to /etc/hosts file
INFO[0051] Adding ssh config for containerlab nodes
INFO[0051] Executed command "ip addr add 10.1.5.10/24 dev net0" on the node "server". stdout:
INFO[0051] Executed command "ip addr add 10.1.5.11/24 dev net0" on the node "server2". stdout:
+---+-------------------+--------------+--------------------------------------------------------+-------+---------+----------------+----------------------+
| # | Name | Container ID | Image | Kind | State | IPv4 Address | IPv6 Address |
+---+-------------------+--------------+--------------------------------------------------------+-------+---------+----------------+----------------------+
| 1 | clab-veth-server | 88dc12fe984f | registry.cn-beijing.aliyuncs.com/postkarte/k8sutils:v1 | linux | running | 172.20.20.3/24 | 2001:172:20:20::3/64 |
| 2 | clab-veth-server2 | cc6f5e6b2faa | registry.cn-beijing.aliyuncs.com/postkarte/k8sutils:v1 | linux | running | 172.20.20.2/24 | 2001:172:20:20::2/64 |
+---+-------------------+--------------+--------------------------------------------------------+-------+---------+----------------+----------------------+
List the Docker containers
root@node1:~/my-clab# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cc6f5e6b2faa registry.cn-beijing.aliyuncs.com/postkarte/k8sutils:v1 "sleep infinity" 4 minutes ago Up 4 minutes clab-veth-server2
88dc12fe984f registry.cn-beijing.aliyuncs.com/postkarte/k8sutils:v1 "sleep infinity" 4 minutes ago Up 4 minutes clab-veth-server
Enter a container and check its IP addresses
root@node1:~/my-clab# docker exec -it clab-veth-server bash
[root@server /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
54: eth0@if55: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:14:14:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.20.20.3/24 brd 172.20.20.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 2001:172:20:20::3/64 scope global nodad
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe14:1403/64 scope link
valid_lft forever preferred_lft forever
57: net0@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9500 qdisc noqueue state UP group default # this interface's index is 57; the peer's index is 56
link/ether aa:c1:ab:3b:c2:8f brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 10.1.5.10/24 scope global net0
valid_lft forever preferred_lft forever
inet6 fe80::a8c1:abff:fe3b:c28f/64 scope link
valid_lft forever preferred_lft forever
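The `@if56` suffix is standard veth behaviour rather than anything containerlab-specific: each end of a veth pair records its peer's ifindex, and `ip` displays the index when the peer lives in another network namespace. The same pairing can be reproduced by hand outside the lab (requires root / CAP_NET_ADMIN; the names `demo0`/`demo1` are arbitrary):

```shell
# Create a standalone veth pair and look at the cross-reference
ip link add demo0 type veth peer name demo1
ip -o link show demo0        # shown as demo0@demo1 while both ends share a namespace
ip link del demo0            # deleting either end removes the whole pair
```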
Verify that the two containers can ping each other
[root@server /]# ping 10.1.5.11
PING 10.1.5.11 (10.1.5.11) 56(84) bytes of data.
64 bytes from 10.1.5.11: icmp_seq=1 ttl=64 time=0.051 ms
64 bytes from 10.1.5.11: icmp_seq=2 ttl=64 time=0.083 ms
64 bytes from 10.1.5.11: icmp_seq=3 ttl=64 time=0.063 ms
^C
--- 10.1.5.11 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2030ms
rtt min/avg/max/mdev = 0.051/0.065/0.083/0.016 ms
Inspect the newly created interface inside the container
[root@server /]# ethtool -S net0
NIC statistics:
peer_ifindex: 56 # the peer interface's index is 56
rx_queue_0_xdp_packets: 0
rx_queue_0_xdp_bytes: 0
rx_queue_0_drops: 0
rx_queue_0_xdp_redirect: 0
rx_queue_0_xdp_drops: 0
rx_queue_0_xdp_tx: 0
rx_queue_0_xdp_tx_errors: 0
tx_queue_0_xdp_xmit: 0
tx_queue_0_xdp_xmit_errors: 0
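The same peer index can also be read straight from sysfs, which helps in minimal images that lack ethtool: for a veth interface, `/sys/class/net/<ifname>/iflink` holds the peer's ifindex (inside the lab container that would be `/sys/class/net/net0/iflink`). For a non-veth interface, iflink simply equals the interface's own index, so `lo`, which exists on every Linux host, prints 1:

```shell
# For veth: iflink = peer ifindex. For ordinary interfaces: iflink = own ifindex.
cat /sys/class/net/lo/iflink     # prints 1, the loopback's own index
```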