chassis conflict: node xxx170 has two Chassis records in the OVN SB DB
E1211 15:42:56.481014 1 ovn-sb-chassis.go:98] found more than one Chassis with with host name=xxx170
E1211 15:42:56.481053 1 node.go:863] failed to list chassis found more than one Chassis with with host name=xxx170
E1211 15:42:59.481507 1 ovn-sb-chassis.go:98] found more than one Chassis with with host name=xxx170
E1211 15:42:59.686046 1 node.go:863] failed to list chassis found more than one Chassis with with host name=xxx170
I1211 15:42:59.843395 1 node.go:594] handle update node xxx162
E1211 15:43:02.686551 1 ovn-sb-chassis.go:98] found more than one Chassis with with host name=xxx170
E1211 15:43:02.686592 1 node.go:863] failed to list chassis found more than one Chassis with with host name=xxx170
E1211 15:43:05.687102 1 ovn-sb-chassis.go:98] found more than one Chassis with with host name=xxx170
E1211 15:43:05.687144 1 node.go:863] failed to list chassis found more than one Chassis with with host name=xxx170
E1211 15:43:08.688492 1 ovn-sb-chassis.go:98] found more than one Chassis with with host name=xxx170
E1211 15:43:08.688539 1 node.go:863] failed to list chassis found more than one Chassis with with host name=xxx170
E1211 15:43:08.688554 1 node.go:890] exhausting all attempts
E1211 15:43:08.688644 1 node.go:149] error syncing 'xxx170': exhausting all attempts, requeuing
I1211 15:43:37.695860 1 node.go:594] handle update node xxx069
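To confirm the duplicate from the SB side, grep over `k ko sbctl list chassis` works (next step); the generic find command can also filter Chassis rows by hostname directly. A sketch using the kubectl ko plugin from this session and the standard ovn-sbctl db-command options:

kubectl ko sbctl --columns=_uuid,name,hostname find chassis hostname=xxx170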
[deployer@xxx069 ~]$ k ko sbctl list chassis | grep -C 3 xxx170
_uuid : 51b2c240-baad-404f-a00b-ca367e4d4c5e
encaps : [7034be48-429c-411c-9ee3-6f638070ea64]
external_ids : {vendor=kube-ovn}
hostname : xxx170
name : "afa34d1c-97ed-4ca9-be68-73f893d0dbb8"
nb_cfg : 0
other_config : {ct-no-masked-label="true", datapath-type=netdev, iface-types="bareudp,erspan,geneve,gre,gtpu,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="false", mac-binding-timestamp="true", ovn-bridge-mappings="", ovn-chassis-mac-mappings="", ovn-cms-options="", ovn-ct-lb-related="true", ovn-enable-lflow-cache="true", ovn-limit-lflow-cache="", ovn-memlimit-lflow-cache-kb="", ovn-monitor-all="false", ovn-trim-limit-lflow-cache="", ovn-trim-timeout-ms="", ovn-trim-wmark-perc-lflow-cache="", port-up-notif="true"}
--
_uuid : b4e6bcf1-912e-49a0-a9d8-19507ffd976a
encaps : [289d9136-0b17-41d3-9bf6-7dcff86a67d5]
external_ids : {vendor=kube-ovn}
hostname : xxx170
name : "fdf8685f-e186-49ea-b9e9-6be77f8d130a"
nb_cfg : 0
other_config : {ct-no-masked-label="true", datapath-type=system, iface-types="bareudp,erspan,geneve,gre,gtpu,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="false", mac-binding-timestamp="true", ovn-bridge-mappings="", ovn-chassis-mac-mappings="", ovn-cms-options="", ovn-ct-lb-related="true", ovn-enable-lflow-cache="true", ovn-limit-lflow-cache="", ovn-memlimit-lflow-cache-kb="", ovn-monitor-all="false", ovn-trim-limit-lflow-cache="", ovn-trim-timeout-ms="", ovn-trim-wmark-perc-lflow-cache="", port-up-notif="true"}
There are two Chassis records for the same hostname (note they even differ in datapath-type: netdev vs. system).
First check which chassis the node annotation points to, and keep the record that matches the node. If none of the chassis shown in the SB DB matches the node, delete them all and clean up the node's chassis annotation as well.
[deployer@xxx069 ~]$ k get node xxx170 -o yaml | grep chassis
ovn.kubernetes.io/chassis: fdf8685f-e186-49ea-b9e9-6be77f8d130a
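The annotation references the second record (fdf8685f-e186-49ea-b9e9-6be77f8d130a), so that one should be kept. Below is a minimal sketch automating the keep/delete rule above; it assumes the kubectl ko plugin used in this session and ovn-sbctl's generic --bare/--columns/find options. Verify the listed names before deleting anything.

# Sketch: keep the chassis the node annotation references, delete the rest.
NODE=xxx170
KEEP=$(kubectl get node "$NODE" -o jsonpath='{.metadata.annotations.ovn\.kubernetes\.io/chassis}')
for c in $(kubectl ko sbctl --bare --columns=name find chassis hostname="$NODE"); do
  # chassis-del accepts either the chassis name or its _uuid
  [ "$c" = "$KEEP" ] || kubectl ko sbctl chassis-del "$c"
done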
Resolving the kube-ovn chassis conflict (one node with two chassis records): delete the chassis the node annotation does not reference.
kubectl ko sbctl chassis-del afa34d1c-97ed-4ca9-be68-73f893d0dbb8
[deployer@xxx069 ~]$ k ko sbctl list chassis | grep -C 3 xxx170
_uuid : b4e6bcf1-912e-49a0-a9d8-19507ffd976a
encaps : [289d9136-0b17-41d3-9bf6-7dcff86a67d5]
external_ids : {vendor=kube-ovn}
hostname : xxx170
name : "fdf8685f-e186-49ea-b9e9-6be77f8d130a"
nb_cfg : 0
other_config : {ct-no-masked-label="true", datapath-type=system, iface-types="bareudp,erspan,geneve,gre,gtpu,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan", is-interconn="false", mac-binding-timestamp="true", ovn-bridge-mappings="", ovn-chassis-mac-mappings="", ovn-cms-options="", ovn-ct-lb-related="true", ovn-enable-lflow-cache="true", ovn-limit-lflow-cache="", ovn-memlimit-lflow-cache-kb="", ovn-monitor-all="false", ovn-trim-limit-lflow-cache="", ovn-trim-timeout-ms="", ovn-trim-wmark-perc-lflow-cache="", port-up-notif="true"}
[deployer@xxx069 ~]$
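Only one chassis remains and it matches the node annotation, so nothing else to clean up here. In the no-match case described above (delete every chassis and clear the node's annotation too), removing the annotation would use kubectl's trailing-dash form, sketched here:

kubectl annotate node xxx170 ovn.kubernetes.io/chassis-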