Linux Column: Dual-NIC bonding Configuration


Introduction to bonding

bonding is a Linux NIC-bonding technology that aggregates n physical NICs on a server into a single logical NIC at the system level. It can increase network throughput and provide network redundancy (failover) and load balancing, among other benefits. Bonding is implemented in the Linux kernel as a kernel module (driver), so the system must have this module available; you can inspect it with the modinfo command, and it is present in most distributions.
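For example (module metadata varies by kernel build):

modinfo bonding      # confirms the module exists and shows its version and description
modinfo -p bonding   # lists the supported parameters, such as mode and miimon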

The seven bonding modes

The bonding driver provides seven working modes; you must choose one when configuring, and each has its own pros and cons. A quick way to check which mode a running bond is using is shown after the list.


    balance-rr (mode=0)       Default. Fault tolerance and load balancing; requires switch configuration. Packets are sent round-robin across the NICs (traffic distribution is fairly even).
    active-backup (mode=1)    Fault tolerance only; no switch configuration needed. Only one NIC is active at a time, and the bond presents a single MAC address. The drawback is low port utilization.
    balance-xor (mode=2)      Rarely used.
    broadcast (mode=3)        Rarely used.
    802.3ad (mode=4)          IEEE 802.3ad dynamic link aggregation (LACP); requires switch configuration.
    balance-tlb (mode=5)      Rarely used.
    balance-alb (mode=6)      Fault tolerance and load balancing; no switch configuration needed (traffic distribution across interfaces is not especially even).
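For reference, a running bond reports its active mode through sysfs (assuming the bond is named bond0):

cat /sys/class/net/bond0/bonding/mode    # prints e.g.: 802.3ad 4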

Configuring bonding on CentOS 7.5

  • Check which NICs are UP
[root@t71 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp61s0f2: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 08:94:ef:65:3a:9a brd ff:ff:ff:ff:ff:ff
3: enp61s0f3: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 08:94:ef:65:3a:9b brd ff:ff:ff:ff:ff:ff
4: enp175s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether e8:61:1f:13:71:44 brd ff:ff:ff:ff:ff:ff
5: enp175s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether e8:61:1f:13:71:44 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e8:61:1f:13:71:44 brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.71/22 brd 192.168.7.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::ea61:1fff:fe13:7144/64 scope link 
       valid_lft forever preferred_lft forever
[root@t71 ~]# 

This output shows the state after bonding has been configured; before configuring, the same command can be used to confirm that enp175s0f0 and enp175s0f1 are UP.
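If a port shows state DOWN at that stage, you can bring it up and check for carrier first (interface name taken from this host; ethtool may need to be installed):

ip link set enp175s0f0 up
ethtool enp175s0f0 | grep 'Link detected'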

  • Edit /etc/sysconfig/network-scripts/ifcfg-enp175s0f0 and /etc/sysconfig/network-scripts/ifcfg-enp175s0f1 as follows (the complete second file is shown after this block):
DEVICE=enp175s0f0    # in ifcfg-enp175s0f1, set this to enp175s0f1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
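For clarity, ifcfg-enp175s0f1 differs only in its DEVICE line:

DEVICE=enp175s0f1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none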
  • Create the bond file: create ifcfg-bond0 under /etc/sysconfig/network-scripts/ with the following content (a note on additional mode=4 options follows the block):
DEVICE=bond0
TYPE=Bond
BOOTPROTO=none
USERCTL=no
IPADDR=192.168.4.74
NETMASK=255.255.252.0
GATEWAY=192.168.7.254
BONDING_OPTS="mode=4 miimon=100" # mode=6 was used initially, later changed to mode=4; the mode-specific options go here
ONBOOT=yes
BONDING_MASTER=yes
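In mode=4 the driver accepts further LACP-related parameters. A sketch with illustrative values, not the configuration used on this host:

BONDING_OPTS="mode=4 miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"

lacp_rate=fast requests an LACPDU from the partner every second instead of every 30 seconds, and xmit_hash_policy controls how flows are hashed onto the slaves.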
  • Load the bonding module
modprobe bonding
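On CentOS 7 the network scripts normally load the module automatically when TYPE=Bond is present, but it can also be loaded explicitly at boot via systemd:

echo bonding > /etc/modules-load.d/bonding.conf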
  • Verify that the module is loaded
[root@t74 ~]# lsmod | grep bonding
bonding               149864  0 
[root@t74 ~]# 
  • Stop NetworkManager, which on CentOS 7.5 can conflict with bonding managed through the network-scripts files
systemctl stop NetworkManager
systemctl disable NetworkManager
  • Restart the network service
systemctl restart network
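After the restart, confirm that the bond came up with the configured address and that the gateway is reachable (values taken from the ifcfg-bond0 above):

ip addr show bond0
ping -c 3 192.168.7.254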
  • Check the bonding status (this output was captured on a different host, t74, so the slave interface names differ from the earlier example)
[root@t74 ~]# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: e8:61:1f:26:e5:78
Active Aggregator Info:
	Aggregator ID: 1
	Number of ports: 2
	Actor Key: 15
	Partner Key: 833
	Partner Mac Address: 34:a2:a2:7a:5b:c0

Slave Interface: enp24s0f0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: e8:61:1f:26:e5:78
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: e8:61:1f:26:e5:78
    port key: 15
    port priority: 255
    port number: 1
    port state: 61
details partner lacp pdu:
    system priority: 32768
    system mac address: 34:a2:a2:7a:5b:c0
    oper key: 833
    port priority: 32768
    port number: 6
    port state: 61

Slave Interface: enp24s0f1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: e8:61:1f:26:e5:79
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: e8:61:1f:26:e5:78
    port key: 15
    port priority: 255
    port number: 2
    port state: 61
details partner lacp pdu:
    system priority: 32768
    system mac address: 34:a2:a2:7a:5b:c0
    oper key: 833
    port priority: 32768
    port number: 5
    port state: 61
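With the bond up, failover can be exercised by taking one slave down and watching the driver react (slave name taken from the status output above):

ip link set enp24s0f0 down
grep -A1 'Slave Interface' /proc/net/bonding/bond0    # the downed slave's MII Status should read 'down'
ip link set enp24s0f0 up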

Because mode=4 was chosen, the switch must be configured as well. The switch used here is a Huawei S6720; the configuration is as follows:

system-view
interface eth-trunk 1
port link-type trunk
port trunk allow-pass vlan 10
mode lacp
trunkport XGigabitEthernet 0/0/1 to 0/0/2 # aggregate switch ports 1 and 2 into eth-trunk 1
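The state of the aggregation can then be checked on the switch (standard Huawei VRP command; exact output depends on the software version):

display eth-trunk 1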

I cannot yet explain each of these commands in detail, but this configuration works. Testing showed no improvement in I/O throughput, however. For a single flow this is expected: with 802.3ad and the layer2 transmit hash policy shown above, every flow is hashed onto exactly one slave, so a single connection is still limited to one link's bandwidth; only the aggregate bandwidth across multiple concurrent flows increases.
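One way to observe the aggregate effect is to test with several parallel streams rather than one (iperf3 must be installed on both ends; the server address 192.168.4.100 is hypothetical):

iperf3 -s                        # on a remote server
iperf3 -c 192.168.4.100 -P 4     # on this host: four parallel TCP streams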
