Day 2: DDS Standard and RTPS Protocol Core
Objectives
I. DDS-to-RTPS Mapping
1.1 Entity Mapping Table
| DDS Layer (application-visible) | RTPS Layer (protocol implementation) | Role |
|---|---|---|
| DomainParticipant | RTPSParticipant | Domain participant; manages communication entities |
| Publisher | - | Logical grouping; no direct RTPS counterpart |
| Subscriber | - | Logical grouping; no direct RTPS counterpart |
| DataWriter | RTPSWriter | Writes data; serializes and sends |
| DataReader | RTPSReader | Reads data; receives and deserializes |
| Topic | - | Matched by name during discovery; RTPS endpoints are identified by GUIDs |
1.2 Mapping Diagram
flowchart TB
subgraph "DDS Application Layer"
DP[DomainParticipant]
PUB[Publisher]
SUB[Subscriber]
DW[DataWriter]
DR[DataReader]
TOP[Topic]
DP --> PUB
DP --> SUB
PUB --> DW
SUB --> DR
end
subgraph "RTPS Protocol Layer"
RTP[RTPSParticipant]
RW[RTPSWriter]
RR[RTPSReader]
WH[WriterHistory]
RH[ReaderHistory]
GUID[GUID<br/>Globally Unique Identifier]
RTP --> RW
RTP --> RR
RW --> WH
RR --> RH
end
subgraph "Network Layer"
NET[RTPS Messages<br/>UDP/TCP/SHM]
end
DP -.->|maps to| RTP
DW -.->|maps to| RW
DR -.->|maps to| RR
TOP -.->|maps to| GUID
WH --> NET
NET --> RH
1.3 Creation Flow Mapping
sequenceDiagram
participant App as Application Code
participant DDS as DDS Layer
participant RTPS as RTPS Layer
App->>DDS: create_participant()
DDS->>RTPS: create RTPSParticipant
RTPS-->>DDS: return RTPSParticipant*
DDS-->>App: return DomainParticipant*
App->>DDS: create_datawriter(topic, qos)
DDS->>RTPS: create RTPSWriter<br/>(topic_name, qos, type)
RTPS->>RTPS: new WriterHistory(qos)
RTPS-->>DDS: return RTPSWriter*
DDS-->>App: return DataWriter*
App->>DDS: write(data)
DDS->>DDS: serialize(data)
DDS->>RTPS: new_change(payload)
RTPS->>RTPS: add to WriterHistory
RTPS->>RTPS: send via Transport
II. History Cache Mechanism
2.1 Core Structure
flowchart TB
subgraph "WriterHistory<br/>Send-Side Cache"
WH_IN[add_change<br/>enqueue new sample]
WH_BUF[(m_changes<br/>cache queue)]
WH_OUT[remove_change<br/>remove after ACK]
WH_FLOW[Flow control<br/>FlowController]
WH_IN --> WH_BUF --> WH_OUT
WH_BUF -.-> WH_FLOW
end
subgraph "ReaderHistory<br/>Receive-Side Cache"
RH_IN[receive<br/>incoming sample]
RH_SORT[sort by sequence number]
RH_BUF[(m_changes<br/>sorted queue)]
RH_DEDUP[duplicate detection]
RH_OUT[deliver_to_app<br/>deliver to application]
RH_IN --> RH_SORT --> RH_BUF --> RH_DEDUP --> RH_OUT
end
WH_BUF -.->|send| NET[Network Transport]
NET -.->|receive| RH_IN
2.2 History Policy Comparison
flowchart TB
subgraph "KEEP_ALL Policy"
KA_IN[new sample] --> KA_BUF{check resource limits}
KA_BUF -->|not full| KA_ADD[append to queue]
KA_BUF -->|full| KA_REJECT[reject/block]
KA_ADD --> KA_Q[(queue: 1,2,3,4,5...)]
end
subgraph "KEEP_LAST(depth=3) Policy"
KL_IN[new sample] --> KL_BUF[(queue)]
KL_BUF -->|3 samples held| KL_REM[evict oldest]
KL_REM --> KL_Q[(queue: 3,4,5)]
KL_IN -->|new sample 6| KL_BUF
end
style KA_REJECT fill:#ffcdd2
style KL_REM fill:#fff9c4
2.3 Role of History in Reliable Transport
sequenceDiagram
participant Pub as Publisher
participant WH as WriterHistory
participant NET as Network
participant RH as ReaderHistory
participant Sub as Subscriber
Pub->>WH: write(#1)
WH->>NET: send #1
NET->>RH: receive #1
RH->>Sub: deliver #1
Pub->>WH: write(#2)
WH->>NET: send #2
Note over NET: #2 lost!
Pub->>WH: write(#3)
WH->>NET: send #3
NET->>RH: receive #3
RH->>RH: cache #3, wait for #2
Sub-->>NET: NACK #2
NET-->>WH: request retransmission
WH->>WH: look up #2 (unacknowledged)
WH->>NET: resend #2
NET->>RH: receive #2
RH->>RH: reorder: #2, #3
RH->>Sub: deliver #2
RH->>Sub: deliver #3
Sub-->>WH: ACK #2, #3
WH->>WH: remove acknowledged samples
III. Key Data Structures
3.1 CacheChange_t (Cache Unit)
flowchart LR
CC[CacheChange_t<br/>cache change unit]
CC --> SN[sequenceNumber<br/>sequence number]
CC --> GUID[writerGUID<br/>writer identity]
CC --> KIND[changeKind<br/>change type]
CC --> DATA[serializedPayload<br/>serialized data]
CC --> TS[sourceTimestamp<br/>timestamp]
style CC fill:#e3f2fd
3.2 History Class Hierarchy
flowchart TB
History[History<br/>base class]
WH[WriterHistory<br/>send history]
RH[ReaderHistory<br/>receive history]
History --> WH
History --> RH
WH --> WHM[add_change<br/>remove_change<br/>get_change]
RH --> RHM[add_change<br/>remove_change<br/>get_change<br/>sort_changes]
style History fill:#e8f5e9
IV. Day 2 Debug Verification
4.1 Observing History State
flowchart LR
GDB[GDB debugging] --> B1[break WriterHistory::add_change]
GDB --> B2[break ReaderHistory::add_change]
B1 --> C1[print m_changes.size]
B2 --> C2[print m_changes.size]
C1 --> V1[watch the cache grow]
C2 --> V2[watch sorting/dedup]
style GDB fill:#fff3e0
4.2 Debug Command Flow
sequenceDiagram
participant Dev as Developer
participant GDB as GDB
participant Hist as History
Dev->>GDB: gdb ./hello_world
Dev->>GDB: break WriterHistory::add_change
Dev->>GDB: run publisher
loop on each write
GDB->>Hist: breakpoint hit
Dev->>GDB: print m_changes.size
GDB-->>Dev: $1 = 5 (5 samples cached)
Dev->>GDB: print m_qos.history.kind
GDB-->>Dev: $2 = KEEP_LAST
Dev->>GDB: print m_qos.history.depth
GDB-->>Dev: $3 = 10 (depth 10)
Dev->>GDB: continue
end
V. Day 2 Self-Check
flowchart TB
CHECK[Day 2 Self-Check]
CHECK --> C1[Can name the RTPS entity behind each DDS entity]
CHECK --> C2[Understand KEEP_ALL vs KEEP_LAST]
CHECK --> C3[Know the role of History in reliable transport]
CHECK --> C4[Can inspect History state in GDB]
C1 --> P1[Pass: DataWriter to RTPSWriter]
C2 --> P2[Pass: keep-everything vs sliding window]
C3 --> P3[Pass: cache, retransmit, reorder]
C4 --> P4[Pass: m_changes.size]
style CHECK fill:#e8f5e9
style P1 fill:#c8e6c9
style P2 fill:#c8e6c9
style P3 fill:#c8e6c9
style P4 fill:#c8e6c9
Reliable Transport Deep Dive: Design + Code + Internals
I. How the Business/Framework Layer Guarantees Reliability
1.1 Three Dimensions of Reliability
graph TB
REL[Reliability Guarantees]
REL --> A[No data loss<br>Data Preservation]
REL --> B[No reordering<br>In-Order Delivery]
REL --> C[No duplicates<br>Duplicate Elimination]
A --> A1[ACK acknowledgment]
A --> A2[Timeout retransmission]
A --> A3[Persistent storage<br>History cache]
B --> B1[Sequence-number ordering<br>SequenceNumber]
B --> B2[Receive window<br>Sliding Window]
B --> B3[In-order handoff<br>In-Order Delivery]
C --> C1[Sequence-number dedup<br>DeliveredSet]
C --> C2[Idempotent design<br>Idempotent]
A1 --> A1_IMPL[StatefulWriter<br>process_acknack]
A2 --> A2_IMPL[TimingWheel<br>timeout detection]
A3 --> A3_IMPL[WriterHistory<br>KEEP_ALL/KEEP_LAST]
B1 --> B1_IMPL[std::map<br>SequenceNumber ordering]
B2 --> B2_IMPL[next_expected_seq<br>left window edge]
B3 --> B3_IMPL[contiguous delivery<br>up to the first gap]
C1 --> C1_IMPL[std::unordered_set<br>delivered set]
C2 --> C2_IMPL[same sequence number<br>overwrite handling]
style REL fill:#e3f2fd
style A fill:#fff3e0
style B fill:#e8f5e9
style C fill:#fce4ec
style A1_IMPL fill:#c8e6c9
style A2_IMPL fill:#c8e6c9
style A3_IMPL fill:#c8e6c9
style B1_IMPL fill:#c8e6c9
style B2_IMPL fill:#c8e6c9
style B3_IMPL fill:#c8e6c9
style C1_IMPL fill:#c8e6c9
style C2_IMPL fill:#c8e6c9
Core Flow Breakdown
1 Core Policy Comparison
graph TB
PROD[Producer<br/>DataWriter]
subgraph "Policy Choice"
KEEP_ALL[KEEP_ALL<br/>keep-everything policy]
KEEP_LAST[KEEP_LAST<br/>sliding-window policy]
end
subgraph "KEEP_ALL Mechanics"
KA_BUF[Unbounded buffer queue<br/>limited by physical memory]
KA_BLOCK[Block/reject<br/>when resources run out]
KA_PERSIST[Optional persistence<br/>survives process crash]
end
subgraph "KEEP_LAST Mechanics"
KL_WIN[Fixed window<br/>depth = N]
KL_SLIDE[Sliding eviction<br/>FIFO]
KL_REALTIME[Real-time first<br/>low latency]
end
PROD --> KEEP_ALL
PROD --> KEEP_LAST
KEEP_ALL --> KA_BUF --> KA_BLOCK
KA_BUF -.-> KA_PERSIST
KEEP_LAST --> KL_WIN --> KL_SLIDE
KL_SLIDE --> KL_REALTIME
KA_BUF --> CONS[Consumer<br/>DataReader]
KL_REALTIME --> CONS
style KEEP_ALL fill:#e3f2fd
style KEEP_LAST fill:#e8f5e9
style KA_BLOCK fill:#ffcdd2
style KL_REALTIME fill:#c8e6c9
2 Resource Management Model
graph LR
A[Write request] --> B{resource check}
B -->|KEEP_ALL| C{capacity left?}
B -->|KEEP_LAST| D{window open?}
C -->|yes| E[write directly]
C -->|no| F[overflow handling<br/>block/drop/persist]
D -->|yes| E
D -->|no| G[evict oldest sample<br/>Sliding Window]
G --> E
E --> H[notify consumer]
F --> I[return error / wait]
style G fill:#fff9c4
style F fill:#ffcdd2
3 Code Flow Breakdown
3.1 Producer Flow
flowchart TD
START([Start<br/>add_change]) --> INPUT[Input: CacheChange]
INPUT --> POLICY{read QoS policy}
POLICY -->|KEEP_ALL| CHECK_KA{check resource limit<br/>max_samples}
POLICY -->|KEEP_LAST| CHECK_KL{check window depth<br/>history.depth}
CHECK_KA -->|full| HANDLE_OV[overflow handling]
CHECK_KA -->|not full| LOCK[acquire lock<br/>mutex.lock]
CHECK_KL -->|full| REMOVE_OLD[remove oldest element<br/>pop_front]
CHECK_KL -->|not full| LOCK
REMOVE_OLD --> LOCK
HANDLE_OV -->|block| WAIT[wait on condition variable]
HANDLE_OV -->|drop| RETURN_ERR[return error code]
HANDLE_OV -->|persist| WRITE_DISK[write to disk]
WRITE_DISK --> LOCK
WAIT -.->|woken up| LOCK
LOCK --> ADD[append to queue<br/>push_back]
ADD --> NOTIFY[notify Writer<br/>unsent_change_added]
ADD --> UPDATE_METRICS[update metrics<br/>size++]
NOTIFY --> UNLOCK[release lock]
UPDATE_METRICS --> UNLOCK
UNLOCK --> RETURN_OK[return success]
RETURN_ERR --> END([End])
RETURN_OK --> END
style REMOVE_OLD fill:#fff9c4
style HANDLE_OV fill:#ffcdd2
style NOTIFY fill:#c8e6c9
3.2 Consumer Flow
flowchart TD
START([Start<br/>take_next_sample]) --> CHECK_EMPTY{queue empty?}
CHECK_EMPTY -->|yes| WAIT_DATA[wait for data<br/>condition-variable wait]
CHECK_EMPTY -->|no| LOCK[acquire read lock]
WAIT_DATA -.->|notified| CHECK_EMPTY
LOCK --> GET[take head of queue<br/>front + pop]
GET --> UPDATE_SEQ[advance expected sequence<br/>next_expected_seq++]
GET --> RELEASE[release resources<br/>change pool]
UPDATE_SEQ --> CHECK_CONT[check contiguous samples<br/>while loop]
CHECK_CONT -->|contiguous data| DELIVER[batch delivery]
CHECK_CONT -->|gap| UNLOCK
DELIVER --> UNLOCK[release lock]
UNLOCK --> RETURN_DATA[return data to application]
RETURN_DATA --> END([End])
style GET fill:#e3f2fd
style DELIVER fill:#c8e6c9
4 Core Design Patterns
4.1 Strategy Pattern
classDiagram
class HistoryPolicy {
<<interface>>
+bool can_add(current_size, limit)
+void handle_overflow(buffer, new_item)
+void on_add(buffer, item)
}
class KeepAllPolicy {
+bool can_add(size, max)
+void handle_overflow(buf, item) block/throw
}
class KeepLastPolicy {
+int depth_
+bool can_add(size, depth)
+void handle_overflow(buf, item) evict oldest
}
HistoryPolicy <|-- KeepAllPolicy
HistoryPolicy <|-- KeepLastPolicy
class HistoryBuffer {
-HistoryPolicy* policy_
-vector~Item*~ buffer_
+add_change(Item*)
}
HistoryBuffer --> HistoryPolicy : uses
Highlights
- Strategies can be swapped at runtime without touching HistoryBuffer code
- A new policy only needs to implement the interface (open/closed principle)
- Policy objects are reusable, reducing memory allocations
4.2 Template Method Pattern
flowchart TD
subgraph "History base class"
BASE[History<br/>abstract base]
BASE_ADD[add_change: template method]
BASE_HOOK1[pre_add_hook: virtual]
BASE_HOOK2[post_add_hook: virtual]
BASE --> BASE_ADD
BASE_ADD --> BASE_HOOK1
BASE_ADD --> BASE_CORE[core logic: add to container]
BASE_CORE --> BASE_HOOK2
end
subgraph "WriterHistory specialization"
WRITER[WriterHistory<br/>derived class]
WRITER_HOOK1[pre_add: check writer state]
WRITER_HOOK2[post_add: notify async send thread]
WRITER -.->|implements| WRITER_HOOK1
WRITER -.->|implements| WRITER_HOOK2
end
subgraph "ReaderHistory specialization"
READER[ReaderHistory<br/>derived class]
READER_HOOK1[pre_add: check sequence continuity]
READER_HOOK2[post_add: fire callback/on_data_available]
READER -.->|implements| READER_HOOK1
READER -.->|implements| READER_HOOK2
end
BASE_HOOK1 -.->|calls| WRITER_HOOK1
BASE_HOOK2 -.->|calls| WRITER_HOOK2
BASE_HOOK1 -.->|calls| READER_HOOK1
BASE_HOOK2 -.->|calls| READER_HOOK2
style BASE_CORE fill:#e3f2fd
style WRITER_HOOK1 fill:#e8f5e9
style READER_HOOK2 fill:#e8f5e9
Highlights
- The core flow is fixed (thread safety, container operations)
- Extension points are explicit (pre/post hooks); subclasses only code the differences
- No duplicated code; the base class handles resource management uniformly
4.3 Observer Pattern
graph LR
SUBJECT[HistoryBuffer<br/>subject]
OBS1[StatefulWriter<br/>observer 1]
OBS2[FlowController<br/>observer 2]
OBS3[MetricsCollector<br/>observer 3]
SUBJECT -->|register| OBS1
SUBJECT -->|register| OBS2
SUBJECT -->|register| OBS3
SUBJECT -.->|notify: change_added| OBS1
SUBJECT -.->|notify: buffer_full| OBS2
SUBJECT -.->|notify: statistics_update| OBS3
style SUBJECT fill:#e3f2fd
style OBS1 fill:#e8f5e9
style OBS2 fill:#e8f5e9
style OBS3 fill:#e8f5e9
Layered Framework Breakdown
graph TB
subgraph "Application Layer"
APP[Business code<br/>Publisher/Subscriber]
end
subgraph "Interface Layer"
API[DataWriter/DataReader API<br/>type-safe wrappers]
end
subgraph "Policy Layer"
POLICY[HistoryPolicy<br/>KEEP_ALL/KEEP_LAST]
QOS[QoS policy combinations<br/>Reliability/Durability]
end
subgraph "Core Layer"
BUFFER[HistoryBuffer<br/>thread-safe container]
CHANGE[CacheChange<br/>data unit]
POOL[ChangePool<br/>object pool]
end
subgraph "Resource Layer"
MEM[Memory management<br/>dynamic/static/huge pages]
LOCK[Locking<br/>mutex/spinlock/lock-free]
DISK[Persistence<br/>optional SQLite/files]
end
APP --> API
API --> POLICY
POLICY --> BUFFER
BUFFER --> CHANGE
BUFFER --> POOL
BUFFER --> MEM
BUFFER --> LOCK
BUFFER -.-> DISK
style BUFFER fill:#e3f2fd
style POLICY fill:#e8f5e9
style DISK fill:#fff3e0
Thread-Safety Design (Lock Strategy)
graph LR
subgraph "Read/Write Strategies"
A[Single producer, single consumer] --> B[Lock-free queue<br/>LockFreeQueue]
C[Multiple producers, single consumer] --> D[Read-write lock<br/>SharedMutex]
E[Multiple producers, multiple consumers] --> F[Mutex<br/>RecursiveMutex]
end
subgraph "Fast-DDS Implementation"
F --> G[WriterHistory<br/>RecursiveTimedMutex]
D --> H[ReaderHistory<br/>SharedMutex + atomic counters]
B --> I[Async send queue<br/>SPSC lock-free]
end
style B fill:#c8e6c9
style D fill:#e8f5e9
style F fill:#e3f2fd
Zero-Copy Optimization Path
sequenceDiagram
participant App as Application
participant Writer as DataWriter
participant History as WriterHistory
participant Shm as SharedMemTransport
participant Reader as DataReader
App->>Writer: write(data)
Writer->>Writer: serialize(data) into payload
Note over Writer,History: Key point: the data pointer is passed, not copied
Writer->>History: add_change(change*)
History->>History: store pointer in vector/deque
History->>Shm: trigger send
Shm->>Shm: pass the payload pointer directly (points into shared memory)
Note over Shm,Reader: Cross-process: shared-memory segment mapping
Shm->>Reader: receive pointer
Reader->>Reader: deserialize in place
Industrial-Grade Application Checklist
| Dimension | Design Point | Fast-DDS Implementation | Generic Template |
|---|---|---|---|
| Policy configuration | Runtime switching | QoS Policy | Strategy-pattern interface |
| Resource limits | Prevent OOM | max_samples/max_size | Capacity-check hooks |
| Thread safety | Deadlock-free | RecursiveTimedMutex | Lock strategy as template parameter |
| Memory management | Zero copy | ChangePool + Shm | Object pool + pointer passing |
| Flow control / backpressure | Prevent overload | FlowController | Callback notification |
| Observability | Monitoring & diagnostics | Statistics topics | Instrumentation hooks |
| Persistence | Prevent data loss | PersistenceService | Pluggable storage backends |