My environment
▶ openstack --os-cloud=kolla-admin server list
+--------------------------------------+-------+--------+---------------------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------+--------+---------------------+--------+---------+
| c47973e0-21a4-4ef9-8ef6-055e0b7e0b2c | demo1 | ACTIVE | demo-net=10.0.0.180 | cirros | m1.tiny |
+--------------------------------------+-------+--------+---------------------+--------+---------+
(kolla-venv)
root@debian:~
▶ ping 10.0.0.180
PING 10.0.0.180 (10.0.0.180) 56(84) bytes of data.
^C^C
--- 10.0.0.180 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2048ms
(kolla-venv)
root@debian:~ ⍉
▶ # I'm using the Neutron OVN VPC mode
(kolla-venv)
root@debian:~ ⍉
▶ docker ps -a | grep -e neutron -e ovn
4a0cb00131d7 quay.io/openstack.kolla/neutron-metadata-agent:master-ubuntu-noble "dumb-init --single-…" 45 hours ago Up 45 hours neutron_ovn_metadata_agent_haproxy_ovnmeta-bb295799-5f77-49ac-a25f-8938a98b1d21
ba2cd7cf5aba quay.io/openstack.kolla/neutron-metadata-agent:master-ubuntu-noble "dumb-init --single-…" 45 hours ago Up 45 hours (healthy) neutron_ovn_metadata_agent
1a0b255f3fb1 quay.io/openstack.kolla/neutron-server:master-ubuntu-noble "dumb-init --single-…" 45 hours ago Up 45 hours (healthy) neutron_ovn_maintenance_worker
548cbc0bea42 quay.io/openstack.kolla/neutron-server:master-ubuntu-noble "dumb-init --single-…" 45 hours ago Up 45 hours (healthy) neutron_periodic_worker
eaf9ea291101 quay.io/openstack.kolla/neutron-server:master-ubuntu-noble "dumb-init --single-…" 45 hours ago Up 45 hours (healthy) neutron_rpc_server
f8e53459155b quay.io/openstack.kolla/neutron-server:master-ubuntu-noble "dumb-init --single-…" 45 hours ago Up 45 hours (healthy) neutron_server
ce3f291a474b kindest/node:v1.35.1 "/usr/local/bin/entr…" 45 hours ago Up 45 hours 127.0.0.1:46397->6443/tcp kube-ovn-control-plane
bc2f8ce6c6f7 kindest/node:v1.35.1 "/usr/local/bin/entr…" 45 hours ago Up 45 hours kube-ovn-worker
3705ab74e082 quay.io/openstack.kolla/nova-novncproxy:master-ubuntu-noble "dumb-init --single-…" 45 hours ago Up 45 hours (healthy) nova_novncproxy
de16b5eb0c2f quay.io/openstack.kolla/ovn-northd:master-ubuntu-noble "dumb-init --single-…" 45 hours ago Up 45 hours ovn_northd
859512a7606c quay.io/openstack.kolla/ovn-sb-db-relay:master-ubuntu-noble "dumb-init --single-…" 45 hours ago Up 45 hours ovn_sb_db_relay_1
aa02ec6d119f quay.io/openstack.kolla/ovn-sb-db-server:master-ubuntu-noble "dumb-init --single-…" 45 hours ago Up 45 hours ovn_sb_db
33c0e687f6f7 quay.io/openstack.kolla/ovn-nb-db-server:master-ubuntu-noble "dumb-init --single-…" 45 hours ago Up 45 hours ovn_nb_db
42e36409d8f0 quay.io/openstack.kolla/ovn-controller:master-ubuntu-noble "dumb-init --single-…" 45 hours ago Up 45 hours ovn_controller
(kolla-venv)
root@debian:~
▶ virsh console c47973e0-21a4-4ef9-8ef6-055e0b7e0b2c
Connected to domain 'instance-00000001'
Escape character is ^] (Ctrl + ])
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:83:eb:0e brd ff:ff:ff:ff:ff:ff
inet 10.0.0.180/24 brd 10.0.0.255 scope global dynamic noprefixroute eth0
valid_lft 33147sec preferred_lft 27747sec
inet6 fe80::f816:3eff:fe83:eb0e/64 scope link
valid_lft forever preferred_lft forever
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.0.1 0.0.0.0 UG 1002 0 0 eth0
10.0.0.0 0.0.0.0 255.255.255.0 U 1002 0 0 eth0
169.254.169.254 10.0.0.2 255.255.255.255 UGH 1002 0 0 eth0
$ curl http://169.254.169.254/openstack/2018-08-27/meta_data.json
{"uuid": "c47973e0-21a4-4ef9-8ef6-055e0b7e0b2c", "public_keys": {"mykey": "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHQ71Hy19YRJ7xi7DfvOu7mZFx/bwV01dIvA1F0721DzqW7CHyHYbZI7YvBgMt+RudWDVJfZ1JZ5zgH7vIiCCMo= root@debian\n"}, "keys": [{"name": "mykey", "type": "ssh", "data": "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHQ71Hy19YRJ7xi7DfvOu7mZFx/bwV01dIvA1F0721DzqW7CHyHYbZI7YvBgMt+RudWDVJfZ1JZ5zgH7vIiCCMo= root@debian\n"}], "hostname": "demo1.novalocal", "name": "demo1", "launch_index": 0, "availability_zone": "nova", "random_seed": "RB7ozd0xVgt2b2AL39QwFIwrTBxDyfFJSDMuvaZopEUFOzvzYYuoIehqWKNi/ji5H8sW54eiBDsisqLsGLBnRxMqYBzBjeWrHXkxXSF79ha/bS7wVW4xP6150gSkZZpW2rYZwfs+YWSGDfmG6eYdlvGP7IIllTZwNwlniRKGbtXj2R+a5M0Bo1rbC1BmiNdFqDxHCGSgf9xAi+U+yxFbzkrGWrWGzoayOM3pV1tP92DiZq/eEChLuMcpk6VDWwznskTdSuP5ebCq1C61CTilV+WUwRXhOnlxDxFdZVzo8wyNRJ5bdAEZJDj8h8qt+3iQzb44Bowy8/wVEeKrjQpgFaqploK4wp8q/qEA1cqMVwtAEAxCank2Ye7QPh1jGDE+npWdz68vFJIkw7ZnB3Rde6zd/LcsgF1TbmknsRWmRbDDse/OWJvqgc+zpimOOYBLZSUZTTIDnclIJAPfwlGTJNFwWaFIS8II2hTxtEnXUBvVGb3/I7DhQrl2txxf9DPe1e4/NKIIU/mcBrK80ccTxMqSOITLzNaj57MschLWpD0PijXvRibmFvzY/JPzKPbk4jWYxzfDlIh+D3K9YeFOjQNZ7zTNMPJ9rcu64BsW5M4DlnxvbwXbYw0DkF3k2KNcvW1+pfUNsKhfNsMprYS01anxcQs4jAKKD66qcPfmKcI=", "project_id": "c6bbfc7fd5b54be8bb080258e9ada156", "devices": []}$
(kolla-venv)
root@debian:~
▶
▶ ip netns
ovnmeta-bb295799-5f77-49ac-a25f-8938a98b1d21 (id: 5)
(v.v)
root@debian:~
▶ ip netns exec ovnmeta-bb295799-5f77-49ac-a25f-8938a98b1d21 bash
root@debian:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tapbb295799-51@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether fa:16:3e:12:de:24 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 169.254.169.254/32 brd 169.254.169.254 scope global tapbb295799-51
valid_lft forever preferred_lft forever
inet 10.0.0.2/24 brd 10.0.0.255 scope global tapbb295799-51
valid_lft forever preferred_lft forever
root@debian:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 tapbb295799-51
root@debian:~#
exit
(v.v)
root@debian:~
▶ ps -aux | grep haproxy
root 66100 0.0 0.0 2292612 29076 ? Ssl Mar10 0:02 /usr/bin/docker start -a haproxy
root 66620 0.0 0.0 96864 10120 ? S Mar10 0:00 /usr/sbin/haproxy -W -db -p /run/haproxy.pid -f /etc/haproxy/haproxy.cfg -f /etc/haproxy/services.d/
42454 67485 0.0 0.0 106668 13904 ? S Mar10 1:33 /usr/sbin/haproxy -W -db -p /run/haproxy.pid -f /etc/haproxy/haproxy.cfg -f /etc/haproxy/services.d/
root 219197 0.0 0.0 2548 1056 ? Ss Mar10 0:00 dumb-init --single-child -- /bin/bash -c HAPROXY="$(if [ -f /usr/sbin/haproxy-systemd-wrapper ]; then echo "/usr/sbin/haproxy -Ds"; else echo "/usr/sbin/haproxy -Ws"; fi)"; exec ip netns exec ovnmeta-bb295799-5f77-49ac-a25f-8938a98b1d21 $HAPROXY -f /var/lib/neutron/kolla/ovn-metadata-proxy/bb295799-5f77-49ac-a25f-8938a98b1d21.conf
root 219209 0.0 0.0 96864 10728 ? S Mar10 0:00 /usr/sbin/haproxy -Ws -f /var/lib/neutron/kolla/ovn-metadata-proxy/bb295799-5f77-49ac-a25f-8938a98b1d21.conf
42435 219250 0.0 0.0 98852 9392 ? Sl Mar10 0:00 /usr/sbin/haproxy -Ws -f /var/lib/neutron/kolla/ovn-metadata-proxy/bb295799-5f77-49ac-a25f-8938a98b1d21.conf
root 3584110 0.0 0.0 6340 2100 pts/3 S+ 14:37 0:00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn --exclude-dir=.idea --exclude-dir=.tox --exclude-dir=.venv --exclude-dir=venv haproxy
(v.v)
root@debian:~
▶ ip netns exec ovnmeta-bb295799-5f77-49ac-a25f-8938a98b1d21 bash
root@debian:~# cat /var/lib/neutron/kolla/ovn-metadata-proxy/bb295799-5f77-49ac-a25f-8938a98b1d21.conf
cat: /var/lib/neutron/kolla/ovn-metadata-proxy/bb295799-5f77-49ac-a25f-8938a98b1d21.conf: No such file or directory
root@debian:~# docker ps -a | grep neutron-metadata
4a0cb00131d7 quay.io/openstack.kolla/neutron-metadata-agent:master-ubuntu-noble "dumb-init --single-…" 45 hours ago Up 45 hours neutron_ovn_metadata_agent_haproxy_ovnmeta-bb295799-5f77-49ac-a25f-8938a98b1d21
ba2cd7cf5aba quay.io/openstack.kolla/neutron-metadata-agent:master-ubuntu-noble "dumb-init --single-…" 45 hours ago Up 45 hours (healthy) neutron_ovn_metadata_agent
root@debian:~# docker exec -it neutron_ovn_metadata_agent bash
(neutron-ovn-metadata-agent)[neutron@debian /]$ cat /var/lib/neutron/kolla/ovn-metadata-proxy/bb295799-5f77-49ac-a25f-8938a98b1d21.conf
global
log /dev/log local0 info
log-tag haproxy-metadata-proxy-bb295799-5f77-49ac-a25f-8938a98b1d21
user neutron
group neutron
maxconn 1024
nbthread 2
pidfile /var/lib/neutron/kolla/external/pids/bb295799-5f77-49ac-a25f-8938a98b1d21.pid.haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
option http-server-close
option forwardfor
retries 3
timeout http-request 30s
timeout connect 30s
timeout client 32s
timeout server 32s
timeout http-keep-alive 30s
listen listener
bind 169.254.169.254:80
server metadata /var/lib/neutron/kolla/metadata_proxy
http-request add-header X-OVN-Network-ID bb295799-5f77-49ac-a25f-8938a98b1d21
(neutron-ovn-metadata-agent)[neutron@debian /]$
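In the listen block above, the `server` line uses an absolute path as its address, which in HAProxy means a Unix domain socket backend. As a rough illustration (not Neutron or HAProxy code; the request path and IPs are taken from this session), this is the shape of the request HAProxy hands to that socket after header injection:

```python
# Sketch: emulate what the per-network HAProxy does to a metadata request
# before handing it to the Unix socket backend -- inject X-OVN-Network-ID
# (from "http-request add-header" in the config above) and X-Forwarded-For
# (from "option forwardfor").
NETWORK_ID = "bb295799-5f77-49ac-a25f-8938a98b1d21"

def proxy_headers(path: str, vm_ip: str, network_id: str = NETWORK_ID) -> bytes:
    """Build the raw HTTP/1.1 request HAProxy would write to the socket."""
    lines = [
        f"GET {path} HTTP/1.1",
        "Host: 169.254.169.254",
        f"X-Forwarded-For: {vm_ip}",          # option forwardfor
        f"X-OVN-Network-ID: {network_id}",    # http-request add-header
        "",
        "",
    ]
    return "\r\n".join(lines).encode()

raw = proxy_headers("/openstack/2018-08-27/meta_data.json", "10.0.0.180")
```

The Neutron handler on the other end of the socket keys entirely off those two headers.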
Each OpenStack Neutron network (VPC) corresponds to one local netns
▶ openstack --os-cloud=kolla-admin network list
+--------------------------------------+----------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+----------+--------------------------------------+
| 4b634aa1-5ef5-410d-89a0-a20138961444 | public1 | 82cd3357-951f-492a-804e-2fd2167be833 |
| bb295799-5f77-49ac-a25f-8938a98b1d21 | demo-net | ba79ebd5-9c60-4fd6-8831-0ea7bf366398 |
+--------------------------------------+----------+--------------------------------------+
(kolla-venv)
root@debian:~
▶ ip netns | grep bb295799-5f77-49ac-a25f-8938a98b1d21
ovnmeta-bb295799-5f77-49ac-a25f-8938a98b1d21 (id: 5)
(kolla-venv)
root@debian:~
▶
▶ openstack --os-cloud=kolla-admin network show bb295799-5f77-49ac-a25f-8938a98b1d21
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2026-03-10T09:47:26Z |
| description | |
| dns_domain | None |
| id | bb295799-5f77-49ac-a25f-8938a98b1d21 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | None |
| is_vlan_qinq | None |
| is_vlan_transparent | False |
| mtu | 1442 |
| name | demo-net |
| port_security_enabled | True |
| project_id | c6bbfc7fd5b54be8bb080258e9ada156 |
| provider:network_type | geneve |
| provider:physical_network | None |
| provider:segmentation_id | 1876 |
| qinq | False |
| qos_policy_id | None |
| revision_number | 2 |
| router:external | Internal |
| segments | None |
| shared | False |
| status | ACTIVE |
| subnets                   | ba79ebd5-9c60-4fd6-8831-0ea7bf366398 | # only one subnet so far
| tags | |
| updated_at | 2026-03-10T09:47:27Z |
+---------------------------+--------------------------------------+
(kolla-venv)
root@debian:~
The complete Metadata API access path
VM (10.0.0.180)
│
│ curl http://169.254.169.254/openstack/...
│ route: 169.254.169.254 → gateway 10.0.0.2 (metadata port in namespace)
│
▼
┌──────────────────────────────────────────────────────────────────────┐
│ Network Namespace: ovnmeta-bb295799-...                              │
│                                                                      │
│ ① HAProxy (per-network, its own container/process)                   │
│    bind 169.254.169.254:80                                           │
│    adds header: X-OVN-Network-ID: bb295799-...                       │
│    adds header: X-Forwarded-For: <VM IP> (option forwardfor)         │
│    forwards to → Unix socket: /var/lib/neutron/kolla/metadata_proxy  │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────────┐
│ ② Neutron MetadataProxyHandler (listening on the Unix socket)        │
│    File: neutron/agent/ovn/metadata/server_socket.py                 │
│                                                                      │
│    UnixDomainMetadataProxy.run() starts the MetadataProxyServer      │
│    (socketserver.UnixStreamServer) on metadata_proxy_socket          │
│                                                                      │
│    Request handling steps:                                           │
│    a. read X-OVN-Network-ID header → network_id                      │
│    b. read X-Forwarded-For header → remote_address (VM IP)           │
│    c. call sb_idl.get_network_port_bindings_by_ip()                  │
│       to find the port in the OVN SB DB by network_id + IP           │
│    d. extract from port.external_ids:                                │
│       - instance_id (OVN_DEVID_EXT_ID_KEY)                           │
│       - project_id (OVN_PROJID_EXT_ID_KEY)                           │
│    e. build the request forwarded to Nova:                           │
│       headers:                                                       │
│         X-Forwarded-For: <VM IP>                                     │
│         X-Instance-ID: <instance uuid>                               │
│         X-Tenant-ID: <project uuid>                                  │
│         X-Instance-ID-Signature: HMAC(shared_secret, instance_id)    │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘
│
│ HTTP request → nova_metadata_host:nova_metadata_port
│
▼
┌──────────────────────────────────────────────────────────────────────┐
│ ③ Nova Metadata API                                                  │
│    validates X-Instance-ID-Signature (signed with shared_secret)     │
│    looks up the instance metadata by X-Instance-ID                   │
│    returns the metadata JSON                                         │
└──────────────────────────────────────────────────────────────────────┘
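The X-Instance-ID-Signature step in boxes ② and ③ can be sketched in a few lines. As I understand it, the agent computes an HMAC-SHA256 hexdigest of the instance UUID keyed with metadata_proxy_shared_secret, and Nova recomputes and compares it; the secret value here is made up:

```python
# Sketch of the X-Instance-ID-Signature scheme: Neutron signs the instance
# UUID with metadata_proxy_shared_secret, Nova recomputes the HMAC-SHA256
# and compares. A mismatch is what produces a 403 from Nova.
import hashlib
import hmac

def sign_instance_id(shared_secret: str, instance_id: str) -> str:
    return hmac.new(shared_secret.encode(), instance_id.encode(),
                    hashlib.sha256).hexdigest()

def verify(shared_secret: str, instance_id: str, signature: str) -> bool:
    expected = sign_instance_id(shared_secret, instance_id)
    return hmac.compare_digest(expected, signature)  # constant-time compare

sig = sign_instance_id("s3cret", "c47973e0-21a4-4ef9-8ef6-055e0b7e0b2c")
assert verify("s3cret", "c47973e0-21a4-4ef9-8ef6-055e0b7e0b2c", sig)
assert not verify("wrong", "c47973e0-21a4-4ef9-8ef6-055e0b7e0b2c", sig)
```

Both sides must be configured with the same metadata_proxy_shared_secret; this is the usual culprit behind 403s from the metadata API.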
Key code paths
┌────────────────────────────┬───────────────────────────────────────────────────┬───────────────────────────────────────────────────────────────────────┐
│ Layer                      │ File                                              │ Role                                                                  │
├────────────────────────────┼───────────────────────────────────────────────────┼───────────────────────────────────────────────────────────────────────┤
│ HAProxy config generation  │ neutron/agent/metadata/driver_base.py:91-151      │ generates the HAProxy config and adds the X-OVN-Network-ID header     │
├────────────────────────────┼───────────────────────────────────────────────────┼───────────────────────────────────────────────────────────────────────┤
│ HAProxy process management │ neutron/agent/ovn/metadata/agent.py:860-864       │ spawn_monitored_metadata_proxy() starts HAProxy in the namespace      │
├────────────────────────────┼───────────────────────────────────────────────────┼───────────────────────────────────────────────────────────────────────┤
│ Unix socket server         │ neutron/agent/metadata/proxy_base.py:44-80        │ MetadataProxyServer built on socketserver.UnixStreamServer            │
├────────────────────────────┼───────────────────────────────────────────────────┼───────────────────────────────────────────────────────────────────────┤
│ OVN request handling       │ neutron/agent/ovn/metadata/server_socket.py:27-63 │ MetadataProxyHandler.get_port() queries the OVN SB DB for instance_id │
├────────────────────────────┼───────────────────────────────────────────────────┼───────────────────────────────────────────────────────────────────────┤
│ Forwarding to Nova         │ neutron/common/metadata.py:197-261                │ _proxy_request() builds the signed HTTP request forwarded to Nova     │
├────────────────────────────┼───────────────────────────────────────────────────┼───────────────────────────────────────────────────────────────────────┤
│ Entry point                │ neutron/agent/ovn/metadata/agent.py:466-468       │ MetadataAgent.start() creates UnixDomainMetadataProxy                 │
└────────────────────────────┴───────────────────────────────────────────────────┴───────────────────────────────────────────────────────────────────────┘
Core design ideas
- Security isolation: HAProxy runs in a per-network namespace; the VM can only reach it via the link-local address 169.254.169.254 and has no direct path to the Nova metadata API
- Identity resolution: Neutron acts as the "middleman", using the port-binding information in the OVN SB DB (network_id + IP → instance_id) to answer the question "which VM is requesting metadata". The VM does not know its own instance_id, but Neutron can derive it from the network topology
- Signature verification: Neutron computes an HMAC of the instance_id with metadata_proxy_shared_secret (X-Instance-ID-Signature); Nova verifies the signature on receipt, preventing forged requests
- Difference from ML2/OVS: in OVN mode the agent queries the SB DB directly (sb_idl.get_network_port_bindings_by_ip) instead of looking up the port through the Neutron server API, so it is faster and lower-latency
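The identity-resolution idea can be modeled with plain data. This is a toy stand-in for get_network_port_bindings_by_ip, not the real IDL; the external_ids key names (neutron:device_id / neutron:project_id) are my assumption for what OVN_DEVID_EXT_ID_KEY / OVN_PROJID_EXT_ID_KEY expand to, and the rows reuse the IDs from this session:

```python
# Toy model of the OVN SB Port_Binding lookup: filter rows by
# (network_id, ip) and require a unique match before trusting external_ids.
PORT_BINDINGS = [
    {
        "network_id": "bb295799-5f77-49ac-a25f-8938a98b1d21",
        "ip": "10.0.0.180",
        "external_ids": {
            "neutron:device_id": "c47973e0-21a4-4ef9-8ef6-055e0b7e0b2c",
            "neutron:project_id": "c6bbfc7fd5b54be8bb080258e9ada156",
        },
    },
]

def resolve_instance(network_id, ip):
    matches = [p for p in PORT_BINDINGS
               if p["network_id"] == network_id and p["ip"] == ip]
    if not matches:          # 0 matches -> 404 to the VM
        return None
    if len(matches) > 1:     # ambiguous -> refuse (DB inconsistency)
        raise LookupError("multiple port bindings for %s/%s" % (network_id, ip))
    ext = matches[0]["external_ids"]
    return ext["neutron:device_id"], ext["neutron:project_id"]
```

The (network_id, ip) pair must be unique within one tenant network, which is exactly why HAProxy has to stamp the network UUID onto every request: IPs alone can overlap across VPCs.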
Next, the business logic from the moment a request reaches 169.254.169.254, through the local Unix socket, to the forward to Nova, broken down at code level. I first located the OVN metadata entry points and the actual handler implementation in the repository, then organized a precise analysis along the call chain and its key branches (success/failure paths).
I confirmed that the handler really calls the OVN SB IDL's get_network_port_bindings_by_ip(); next I fill in that query implementation and the agent startup configuration (where the socket path comes from), so that each step after 169.254.169.254 → sock can be matched against the symptoms observed above.
Conclusion: the "169.254.169.254 → local Unix socket → Neutron parses and forwards to Nova" segment of your chain works, and it is the standard OVN metadata design. The successful curl inside the VM already proves the main business path; what remains to examine is the identity resolution and signed forwarding that happen after the socket.
1) The real business flow from 169.254.169.254 to the local sock
- The VM sends HTTP to 169.254.169.254 and lands in the HAProxy inside that network's metadata namespace.
- HAProxy does two key things:
  - injects X-OVN-Network-ID (the network UUID).
  - injects X-Forwarded-For (the VM's source IP).
- HAProxy forwards the request to a Unix domain socket (not TCP); the socket path comes from the metadata_proxy_socket option.
- The Unix socket server reads the raw HTTP, and the metadata handler parses the request.
- The handler uses X-OVN-Network-ID + X-Forwarded-For to reverse-look-up the unique port in OVN SB Port_Binding.
- It takes instance_id/project_id from external_ids.
- It builds the request headers sent to Nova metadata:
  - X-Instance-ID
  - X-Tenant-ID
  - X-Instance-ID-Signature (an HMAC computed with metadata_proxy_shared_secret)
- The request goes to nova_metadata_host:nova_metadata_port, and the result is written back through HAProxy to the VM.
2) Corresponding code locations (after the sock, which is what you care about)
- OVN socket handler and port reverse lookup: server_socket.py
- Unix socket server base class and request-handling entry point: proxy_base.py
- HAProxy config template and the "forward to Unix socket" wiring: metadata.py
- OVN metadata agent starting the socket server: agent.py
- OVN HAProxy header-injection template (X-OVN-Network-ID): driver.py
- The SB DB query by network + ip: impl_idl_ovn.py
- metadata-related config options (socket, nova host, shared secret): config.py
3) Key branches (most useful for troubleshooting)
- Exactly one matching port:
  - instance_id/project_id are resolved normally and the request is forwarded to Nova.
- 0 matching ports:
  - 404 semantics (instance not found); the log will contain "No port found".
- Multiple matching ports:
  - the error is logged and the request refused, signalling Neutron/OVN data inconsistency (a db sync is needed).
- Nova returns 403:
  - almost certainly a shared-secret mismatch.
- Connection to Nova fails:
  - 503 is returned; the Neutron → Nova metadata path is broken.
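The branch list above, condensed into a status-code table (a sketch of the observable behavior as described, not Neutron's actual code):

```python
# Map the lookup outcome / Nova status to the HTTP status the VM ends up
# seeing. match_count is the number of Port_Binding rows matched;
# nova_status is None when Neutron could not reach Nova at all.
def vm_status(match_count, nova_status=None):
    if match_count == 0:
        return 404                      # instance not found
    if match_count > 1:
        return 404                      # inconsistent data: request refused
    if nova_status is None:
        return 503                      # could not reach Nova metadata
    if nova_status == 403:
        return 403                      # shared-secret mismatch
    return nova_status                  # 200 and other codes pass through
```

Reading a failing status backwards through this table usually points at one specific layer: 503 means the Neutron → Nova hop, 403 means the secret, 404 means the SB DB lookup.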
4) Engineering conclusions from your field data (facts vs. speculation)
- Facts:
  - The metadata curl inside the VM succeeds, so the main chain from HAProxy through the Unix socket to Nova works.
  - The namespace holds both 10.0.0.2 and 169.254.169.254, matching the OVN metadata port design.
  - The per-network HAProxy process is running, and its config contains X-OVN-Network-ID and the Unix socket backend.
- Speculation (high probability):
  - Your failing ping 10.0.0.180 is unrelated to the metadata chain; it is more likely a security-group/ACL, return-path, or overlay datapath issue.
  - It may also be that pinging a tenant IP straight from the control node is simply not a reachable path at all (depending on your gateway, FIP, routing and security policy).
Exact code locations for "after the sock request enters Neutron: how it parses the request and forwards to Nova metadata".
Read these in order, following "sock request received → instance info resolved → request to Nova → response back to the VM":
1. sock service entry point (reads the raw HTTP request off the Unix socket)
   metadata.py, handle
   It does the following:
   - reads the request from the socket and builds a webob.Request
   - calls _get_instance_and_project_id to obtain instance_id/project_id
   - calls _proxy_request to forward to Nova
   - writes the Nova response back to the socket (self.wfile.write)
2. Header parsing and instance_id/project_id resolution (the core lookup logic)
   proxy_base.py, _get_instance_and_project_id
   It:
   - reads X-Forwarded-For
   - reads the network/router headers (see proxy_base.py)
   - calls the subclass's get_port(...) for the reverse lookup
3. The OVN implementation of get_port (network_id + IP → Port_Binding reverse lookup)
   server_socket.py, get_port
   This calls the OVN SB IDL: impl_idl_ovn.py, get_network_port_bindings_by_ip.
   Once the unique port is found, instance_id/project_id are read from external_ids.
4. Forwarding to Nova metadata (builds the signed headers and issues requests.request)
   metadata.py, _proxy_request
   It adds:
   - X-Instance-ID
   - X-Tenant-ID
   - X-Instance-ID-Signature (signed with metadata_proxy_shared_secret)
   then sends the request to nova_metadata_host:nova_metadata_port and converts the response before returning it to the VM.
5. How this sock server gets started
   agent.py creates UnixDomainMetadataProxy
   agent.py, run
   agent.py, wait
   Server initialization lives in server_socket.py.
flowchart TD
A[handle receives a Unix socket request] --> B[_get_instance_and_project_id]
B --> C{get_port match count}
C -->|0 matches| D[404 Instance not found]
C -->|>1 matches| E[log the inconsistency, return 404/error]
C -->|1 match| F[_proxy_request forwards to Nova]
F --> G{Nova status code}
G -->|200| H[metadata returned to the VM]
G -->|403| I[shared-secret mismatch]
G -->|5xx/connection failure| J[503 or 500]
G -->|404/409/502/504| K[status passed through]
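The whole flowchart, squeezed into one function: every helper here is a stand-in injected from outside, so this is a structural sketch of the pipeline after the Unix socket, not the real Neutron implementation:

```python
# End-to-end skeleton of the flowchart: parse headers -> reverse-look-up the
# port -> sign the instance id -> forward. port_lookup and forward are
# injected callables standing in for the OVN SB IDL query and _proxy_request.
import hashlib
import hmac

def handle(headers, port_lookup, forward, shared_secret):
    """headers: parsed request headers as a dict."""
    network_id = headers["X-OVN-Network-ID"]
    vm_ip = headers["X-Forwarded-For"]
    ports = port_lookup(network_id, vm_ip)   # OVN SB Port_Binding reverse lookup
    if len(ports) != 1:                      # 0 -> not found, >1 -> inconsistent
        return 404, b"Instance not found"
    instance_id = ports[0]["instance_id"]
    sig = hmac.new(shared_secret.encode(), instance_id.encode(),
                   hashlib.sha256).hexdigest()
    return forward({                         # _proxy_request equivalent
        "X-Forwarded-For": vm_ip,
        "X-Instance-ID": instance_id,
        "X-Tenant-ID": ports[0]["project_id"],
        "X-Instance-ID-Signature": sig,
    })
```

Keeping the lookup and the forward as injected callables mirrors how the real handler separates "who is asking" (OVN SB DB) from "go fetch it" (Nova metadata API).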