In-Vehicle Messaging Middleware FastDDS Source Code Analysis (1): Introduction and Usage of FastDDS
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (2): Creating the RtpsParticipant (Part 1)
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (3): Creating the RtpsParticipant (Part 2)
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (4): Creating the RtpsParticipant (Part 3)
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (5): BuiltinProtocols (Part 1)
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (6): BuiltinProtocols (Part 2): EDP
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (7): BuiltinProtocols (Part 3): WLP & TypeLookupManager
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (8): TimedEvent
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (10): Sending the First PDP Message (Part 1)
FastDDS Source Code Analysis (12): Sending the First PDP Message (Part 2): Asynchronous Sending
FastDDS Source Code Analysis (13): Sending the First PDP Message: Cross-Process Sending
FastDDS Source Code Analysis (16): Processing PDP Messages: PDP Matching
FastDDS Source Code Analysis (17): Processing PDP Messages: EDP Matching
FastDDS Source Code Analysis (18): Sending Heartbeats (HEARTBEAT) in the EDP Phase
The previous three articles covered parts 1-5 of the six parts of RTPSParticipantImpl initialization. That leaves the initialization of BuiltinProtocols, which contains a lot of material, so we will unfold it step by step.
1. BuiltinProtocols Initialization
BuiltinProtocols is exactly what its name says: the built-in protocols. Initializing BuiltinProtocols means initializing those built-in protocols.
bool BuiltinProtocols::initBuiltinProtocols(
RTPSParticipantImpl* p_part,
BuiltinAttributes& attributes)
{
mp_participantImpl = p_part;
m_att = attributes;
m_metatrafficUnicastLocatorList = m_att.metatrafficUnicastLocatorList;
m_metatrafficMulticastLocatorList = m_att.metatrafficMulticastLocatorList;
m_initialPeersList = m_att.initialPeersList;
{
std::unique_lock<eprosima::shared_mutex> disc_lock(getDiscoveryMutex());
m_DiscoveryServers = m_att.discovery_config.m_DiscoveryServers;
}
// Discovery-server (server/client) related setup
transform_server_remote_locators(p_part->network_factory());
const RTPSParticipantAllocationAttributes& allocation = p_part->getRTPSParticipantAttributes().allocation;
// PDP
switch (m_att.discovery_config.discoveryProtocol)
{
case DiscoveryProtocol_t::NONE:
EPROSIMA_LOG_WARNING(RTPS_PDP, "No participant discovery protocol specified");
return true;
case DiscoveryProtocol_t::SIMPLE:
// Create a PDPSimple
mp_PDP = new PDPSimple(this, allocation);
break;
case DiscoveryProtocol_t::EXTERNAL:
EPROSIMA_LOG_ERROR(RTPS_PDP, "Flag only present for debugging purposes");
return false;
case DiscoveryProtocol_t::CLIENT:
mp_PDP = new fastdds::rtps::PDPClient(this, allocation);
break;
case DiscoveryProtocol_t::SERVER:
mp_PDP = new fastdds::rtps::PDPServer(this, allocation, DurabilityKind_t::TRANSIENT_LOCAL);
break;
#if HAVE_SQLITE3
case DiscoveryProtocol_t::BACKUP:
mp_PDP = new fastdds::rtps::PDPServer(this, allocation, DurabilityKind_t::TRANSIENT);
break;
#endif // if HAVE_SQLITE3
case DiscoveryProtocol_t::SUPER_CLIENT:
mp_PDP = new fastdds::rtps::PDPClient(this, allocation, true);
break;
default:
EPROSIMA_LOG_ERROR(RTPS_PDP, "Unknown DiscoveryProtocol_t specified.");
return false;
}
// Initialize the PDP object
if (!mp_PDP->init(mp_participantImpl))
{
EPROSIMA_LOG_ERROR(RTPS_PDP, "Participant discovery configuration failed");
delete mp_PDP;
mp_PDP = nullptr;
return false;
}
// WLP
if (m_att.use_WriterLivelinessProtocol)
{
mp_WLP = new WLP(this);
mp_WLP->initWL(mp_participantImpl);
}
// TypeLookupManager
if (m_att.typelookup_config.use_client || m_att.typelookup_config.use_server)
{
tlm_ = new fastdds::dds::builtin::TypeLookupManager(this);
tlm_->init_typelookup_service(mp_participantImpl);
}
return true;
}
//BuiltinProtocols::initBuiltinProtocols
As its name suggests, BuiltinProtocols is the class that manages all the built-in protocols: the PDP discovery protocol, the EDP discovery protocol, WLP (Writer Liveliness Protocol), and the TypeLookupManager (type management, related to data parsing). This section focuses on the initialization of the PDP discovery protocol.
The function above does the following:
- PDP initialization (the switch on discoveryProtocol and the mp_PDP->init() call).
PDP is the participant discovery phase, in which each participant transmits its own information.
- EDP initialization (this happens inside PDP).
EDP stands for Endpoint Discovery Protocol; the endpoints here are Writers and Readers. After the PDP phase (participant discovery) comes the EDP phase, in which writers and readers discover each other.
- WLP initialization (the use_WriterLivelinessProtocol block).
WLP, the Writer Liveliness Protocol, mainly notifies remote participants which local writers are still alive; the WLP class manages the liveliness state of writers and readers in local and remote participants.
- TypeLookupManager initialization (the typelookup_config block).
This class manages the data types used in transmission: transmitted data has a format, and serialization and deserialization can only succeed if both sides agree on a type.
1.1 Overview of the Initialization Steps
graph TD
A(PDP initialization)-->B(EDP initialization)
B-->C(WLP initialization)
C-->D(TypeLookupManager initialization)
For BuiltinProtocols, initialization thus consists of four parts:
- PDP initialization
- EDP initialization
- WLP initialization
- TypeLookupManager initialization
BuiltinProtocol literally means built-in protocol; here that means the PDP, EDP, and WLP protocols.
Each protocol has entities that execute it, and initializing BuiltinProtocols means initializing those executing objects.
1.2 The Architecture of BuiltinProtocols
The following figure gives a relatively clear picture.
A participant generally consists of four parts: PDP, EDP, WLP, and User; PDP, EDP, and WLP are builtin.
User is the application that runs on top of PDP, EDP, and WLP.
The user side can hold multiple Publishers and Subscribers; each Publisher corresponds to a UserWriter, and each Subscriber to a UserReader.
In the first article of this series, "In-Vehicle Messaging Middleware FastDDS Source Code Analysis (1): Introduction and Usage of FastDDS", we used an example to show how to use Fast DDS; see sections 3.1 and 3.2 there. The Writer and Reader created in that example are the UserWriter and UserReader shown here.
PDP is the participant discovery phase, in which each participant transmits its own information. PDP has two endpoints, an SPDPWriter and an SPDPReader. When two participants establish a PDP connection, each participant's SPDPWriter interacts with the other side's SPDPReader, writing its own participant information to it. The well-known ports this traffic uses are sketched below.
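As a concrete reference point, the locators these SPDP endpoints use follow the RTPS default port mapping (PB = 7400, DG = 250, PG = 2, d0 = 0, d1 = 10). A minimal sketch of the computation, assuming the spec defaults (Fast DDS allows all of them to be overridden):
#include <cstdint>
#include <iostream>

// RTPS default port mapping constants (overridable in Fast DDS).
constexpr uint32_t PB = 7400; // port base
constexpr uint32_t DG = 250;  // domain ID gain
constexpr uint32_t PG = 2;    // participant ID gain
constexpr uint32_t d0 = 0;    // metatraffic multicast offset
constexpr uint32_t d1 = 10;   // metatraffic unicast offset

int main()
{
    uint32_t domain_id = 0;
    uint32_t participant_id = 0;
    // Well-known multicast port every participant in the domain listens on for DATA(p):
    std::cout << "SPDP multicast port: " << (PB + DG * domain_id + d0) << std::endl;  // 7400
    // Per-participant metatraffic unicast port:
    std::cout << "SPDP unicast port: "
              << (PB + DG * domain_id + d1 + PG * participant_id) << std::endl;       // 7410
    return 0;
}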
1.3 Class Diagram
classDiagram
RTPSParticipantImpl *-- BuiltinProtocols
BuiltinProtocols *-- PDP
BuiltinProtocols *-- WLP
PDP *-- EDP
class RTPSParticipantImpl {
+BuiltinProtocols mp_builtinProtocols
}
class BuiltinProtocols {
+PDP mp_PDP
+WLP mp_WLP
}
class PDP {
+EDP mp_EDP
}
In this class diagram, RTPSParticipant contains a BuiltinProtocols object. BuiltinProtocols manages the built-in protocols PDP, EDP, and WLP, while EDP itself is managed by PDP.
1.4 Summary
This section introduced the kinds of built-in RTPS protocols and their respective roles, as well as the rough order in which BuiltinProtocols initializes them.
The sections below introduce each protocol in turn.
1.5 Sequence Diagram of BuiltinProtocols Initialization
sequenceDiagram
participant BuiltinProtocols
participant PDPSimple
participant PDP
participant EDPSimple
participant WLP
BuiltinProtocols ->> BuiltinProtocols: 1.initBuiltinProtocols()
BuiltinProtocols ->> PDPSimple: 2.new PDPSimple()
PDPSimple ->> PDP: 3.new PDP()
BuiltinProtocols ->> PDPSimple: 4.init()
PDPSimple ->> PDP: 5.initPDP()
PDP ->> PDPSimple: 6.createPDPEndpoints
PDPSimple ->> EDPSimple: 7.new EDPSimple
PDPSimple ->> EDPSimple: 8.initEDP
EDPSimple ->> EDPSimple: 9.createSEDPEndpoints
BuiltinProtocols ->> WLP: 10.new
BuiltinProtocols ->> WLP: 11.initWL
The initialization of BuiltinProtocols roughly breaks down into these steps:
1. The RTPSParticipantImpl constructor calls BuiltinProtocols::initBuiltinProtocols to initialize BuiltinProtocols. Builtin means built-in, so literally this initializes the built-in protocols: the PDP discovery protocol, the EDP discovery protocol, and WLP (short for Writer Liveliness Protocol). PDP and EDP each come in several variants; here we use the default PDP and EDP protocols as the example. This call does the following:
a. create and initialize PDPSimple (steps 2-9)
b. create and initialize WLP (steps 10-11; details in the WLP chapter)
c. set up the TypeLookupManager, which concerns the format of the transmitted data
2. new a PDPSimple; the PDPSimple constructor invokes the PDP constructor (PDPSimple is a subclass of PDP).
3. new the PDP.
4. Initialize PDPSimple.
5. Initialize PDP.
6. PDP has PDPSimple create the PDP endpoints, i.e. a StatelessReader and a StatelessWriter.
7. new an EDPSimple.
8. Initialize EDPSimple.
9. Create the SEDP endpoints (the EDP endpoints).
10. new the WLP.
11. Initialize WLP.
2. PDP Initialization
2.1 The Main Work of PDP Initialization
PDP initialization configures the discovery protocol used by PDP: simple, static, or one of the others, detailed below. What gets configured differs per protocol.
It then creates the PDPWriter and PDPReader and configures their related components.
The PDPWriter and PDPReader are normally stateless. A key difference between stateless and stateful writers/readers is that a StatefulWriter and StatefulReader keep caches and have a retransmission mechanism: when a UDP packet is lost, it is retransmitted. This is covered in detail later.
As the figure shows, every PDP has one writer and one reader; the local PDP writer talks to the remote PDP reader, and the local PDP reader talks to the remote PDP writer.
Creating the PDP reader and writer requires configuring a number of parameters; let's pick a few key ones and explain what they mean.
watt.endpoint.reliabilityKind = BEST_EFFORT
This parameter involves a commonly used QoS, ReliabilityQos.
A brief word on QoS: a QoS is a policy, a configurable option of DDS. If you want DDS to behave in a certain way, you configure it through QoS; the configurable range is bounded, so it gives the user a certain amount of freedom within limits.
A QoS setting also requires cooperation between the two communicating participants: if participant a configures option a of some QoS and participant b configures option b, and a and b are incompatible, communication fails.
As of the current DDS specification there are 22 mandatory QoS policies; Fast DDS implements those 22 plus 16 extended ones. ReliabilityQos is among the most commonly used.
typedef enum ReliabilityQosPolicyKind : fastrtps::rtps::octet
{
BEST_EFFORT_RELIABILITY_QOS = 0x01, //!< Best Effort reliability (default for Subscribers).
RELIABLE_RELIABILITY_QOS = 0x02 //!< Reliable reliability (default for Publishers).
} ReliabilityQosPolicyKind;
This QoS can be set to BEST_EFFORT_RELIABILITY_QOS or RELIABLE_RELIABILITY_QOS.
BEST_EFFORT literally means go as fast as possible; RELIABLE literally means trustworthy, i.e. delivery is guaranteed.
Here watt.endpoint.reliabilityKind = BEST_EFFORT configures the endpoint for speed: delivery accuracy is not guaranteed and packets may be lost. As covered in earlier chapters, Fast DDS supports TCP, UDP, and shared-memory transports; UDP itself is not reliable, so a BEST_EFFORT configuration over UDP can lose packets.
With UDP transport and watt.endpoint.reliabilityKind = RELIABLE, lost packets are retransmitted.
The DDS specification states that a BEST_EFFORT DataWriter can only match a BEST_EFFORT DataReader, while a RELIABLE DataWriter can match either a RELIABLE or a BEST_EFFORT DataReader.
Even with both writer and reader configured RELIABLE, message loss is still possible; it has to be analyzed per scenario.
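At the user level this maps onto ReliabilityQosPolicy. A minimal sketch of setting it through the DDS-layer QoS, assuming the standard Fast DDS 2.x API (a RELIABLE writer matched with a BEST_EFFORT reader is a legal combination per the rule above):
#include <fastdds/dds/publisher/qos/DataWriterQos.hpp>
#include <fastdds/dds/subscriber/qos/DataReaderQos.hpp>

using namespace eprosima::fastdds::dds;

void configure_reliability(DataWriterQos& wqos, DataReaderQos& rqos)
{
    // RELIABLE writer: lost samples are repaired via HEARTBEAT/ACKNACK exchanges.
    wqos.reliability().kind = RELIABLE_RELIABILITY_QOS;
    // BEST_EFFORT reader: takes whatever arrives and never acknowledges.
    rqos.reliability().kind = BEST_EFFORT_RELIABILITY_QOS;
}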
ratt.endpoint.durabilityKind = TRANSIENT_LOCAL. durabilityKind has the following kinds:
- VOLATILE: already-sent data is not kept
- TRANSIENT_LOCAL: already-sent data is kept, and when a new endpoint joins, all historical data is sent to it
- TRANSIENT: on top of TRANSIENT_LOCAL, the data is also persisted locally
Different parameter choices give different writer and reader behavior, and there are many combinations; one of them is sketched below.
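A minimal sketch of the combination the PDP endpoints use (the same values that appear in PDPSimple::createPDPEndpoints later in this article): BEST_EFFORT transport plus TRANSIENT_LOCAL history, so late-joining participants still receive the announcements kept in the history.
#include <fastdds/rtps/attributes/ReaderAttributes.h>
#include <fastdds/rtps/attributes/WriterAttributes.h>

using namespace eprosima::fastrtps::rtps;

void configure_pdp_style_endpoints(WriterAttributes& watt, ReaderAttributes& ratt)
{
    watt.endpoint.reliabilityKind = BEST_EFFORT;     // no ACK/retransmission machinery
    watt.endpoint.durabilityKind = TRANSIENT_LOCAL;  // history replayed to late joiners
    ratt.endpoint.reliabilityKind = BEST_EFFORT;
    ratt.endpoint.durabilityKind = TRANSIENT_LOCAL;
}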
2.2 PDP Initialization Sequence Diagram
sequenceDiagram
participant BuiltinProtocols
participant PDPSimple
participant PDP
participant RTPSParticipantImpl
participant StatelessReader
participant FlowControllerFactory
participant StatelessWriter
participant NetworkFactory
participant UDPTransportInterface
participant UDPSenderResource
BuiltinProtocols ->> BuiltinProtocols: 1.initBuiltinProtocols()
BuiltinProtocols ->> PDPSimple: 2.new PDPSimple()
PDPSimple ->> PDP: 3.new PDP()
BuiltinProtocols ->> PDPSimple: 4.init()
PDPSimple ->> PDP: 5.initPDP()
PDP ->> PDPSimple: 6.createPDPEndpoints
PDPSimple ->> RTPSParticipantImpl: 7.createReader()
RTPSParticipantImpl->> RTPSParticipantImpl: 8.create_reader()
RTPSParticipantImpl ->> StatelessReader: 9.new
RTPSParticipantImpl->>PDPSimple: return statelessReader
PDPSimple ->> RTPSParticipantImpl: 10.createWriter()
RTPSParticipantImpl ->> RTPSParticipantImpl: 11.create_writer()
RTPSParticipantImpl ->> FlowControllerFactory: 12.register_flow_controller
RTPSParticipantImpl ->> FlowControllerFactory: 13.retrieve_flow_controller()
RTPSParticipantImpl ->> StatelessWriter: 14.new
RTPSParticipantImpl->>NetworkFactory: 15.build_send_resources()
NetworkFactory->>UDPTransportInterface: 16.OpenOutputChannel()
UDPTransportInterface->>UDPTransportInterface: 17.OpenAndBindUnicastOutputsocket()
UDPTransportInterface->>UDPSenderResource: 18.new UDPSenderResource()
RTPSParticipantImpl->>PDPSimple: 19.return statelessWriter
PDP initialization proceeds roughly as follows (numbers match the sequence diagram):
1. When initBuiltinProtocols initializes PDP it mainly does two things:
a. new PDPSimple (step 2)
b. init PDPSimple (step 4)
2. new PDPSimple invokes new PDP.
3. new PDP.
4. PDPSimple::init mainly does two things:
a. init PDP
b. create the EDP (covered in detail later)
5. initPDP mainly does two things:
a. createPDPEndpoints
b. create the ParticipantProxyData
6. createPDPEndpoints mainly does two things:
a. configure parameters and call createReader (step 7)
b. configure parameters and call createWriter (step 10)
7. createReader mainly calls the create_reader function.
8. create_reader does the following:
a. configure the StatelessReader's parameters
b. create the StatelessReader (step 9)
c. createSendResources, which only creates a send resource when the endpoint is RELIABLE
d. createAndAssociateReceiverswithEndpoint
9. new StatelessReader initializes the parent class RTPSReader and the CacheChangePool.
10. createWriter mainly calls the create_writer function.
11. create_writer does the following:
a. configure the StatelessWriter's parameters (the writer's general parameters), chiefly the sending policy, i.e. its flow_controller (steps 12 and 13)
b. create the StatelessWriter (step 14)
c. createSendResources (leading to step 15)
d. createAndAssociateReceiverswithEndpoint, which is only called when the endpoint is RELIABLE
12. register_flow_controller registers the writer's own flow_controller.
13. retrieve_flow_controller fetches the writer's flow_controller.
14. new StatelessWriter initializes the parent class RTPSWriter and the CacheChangePool.
15. createSendResources calls NetworkFactory's build_send_resources function.
16. build_send_resources calls OpenOutputChannel on each transport in mRegisteredTransports.
17. For a UDP connection, UDPTransportInterface's OpenOutputChannel calls OpenAndBindUnicastOutputSocket to create the socket.
18. A UDPSenderResource wrapping the socket is created and put into sender_resource_list; during construction, UDPSenderResource mainly creates two function objects.
19. The created StatelessWriter is returned to PDPSimple.
2.3 PDP Source Code Walkthrough
Step 1
Based on the configured discovery protocol (see the earlier chapter on discovery protocols), the corresponding object is created:
- DiscoveryProtocol_t::SIMPLE: a PDPSimple
- DiscoveryProtocol_t::CLIENT: an ordinary PDPClient
- DiscoveryProtocol_t::SERVER: an ordinary PDPServer
- DiscoveryProtocol_t::BACKUP: a PDPServer backed by a database
- DiscoveryProtocol_t::SUPER_CLIENT: a PDPClient that, unlike DiscoveryProtocol_t::CLIENT, accepts all discovery information the server sends; a plain client only accepts discovery information relevant to itself
The PDP object is then initialized.
We pick the most commonly used PDP, PDPSimple, for the source code walkthrough; which branch of the switch is taken is a user-level setting, sketched below.
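A minimal sketch of selecting the discovery protocol from user code, assuming the Fast DDS 2.x DDS-layer QoS API:
#include <fastdds/dds/domain/qos/DomainParticipantQos.hpp>
#include <fastrtps/rtps/attributes/RTPSParticipantAttributes.h>

void select_discovery_protocol(eprosima::fastdds::dds::DomainParticipantQos& pqos)
{
    // SIMPLE is the default; CLIENT / SERVER / SUPER_CLIENT / BACKUP select discovery-server mode.
    pqos.wire_protocol().builtin.discovery_config.discoveryProtocol =
            eprosima::fastrtps::rtps::DiscoveryProtocol_t::SIMPLE;
}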
Step 2: Creating PDPSimple
PDPSimple::PDPSimple (
BuiltinProtocols* built,
const RTPSParticipantAllocationAttributes& allocation)
: PDP(built, allocation)
{
}
This invokes the PDP constructor.
Step 3: The PDP Constructor
PDP::PDP (
BuiltinProtocols* built,
const RTPSParticipantAllocationAttributes& allocation)
: mp_builtin(built)
, mp_RTPSParticipant(nullptr)
, mp_EDP(nullptr)
, participant_proxies_number_(allocation.participants.initial)
, participant_proxies_(allocation.participants)
, participant_proxies_pool_(allocation.participants)
, reader_proxies_number_(allocation.total_readers().initial)
, reader_proxies_pool_(allocation.total_readers())
, writer_proxies_number_(allocation.total_writers().initial)
, writer_proxies_pool_(allocation.total_writers())
, m_hasChangedLocalPDP(true)
, mp_listener(nullptr)
, temp_reader_proxies_({
allocation.locators.max_unicast_locators,
allocation.locators.max_multicast_locators,
allocation.data_limits,
allocation.content_filter})
, temp_writer_proxies_({
allocation.locators.max_unicast_locators,
allocation.locators.max_multicast_locators,
allocation.data_limits})
, mp_mutex(new std::recursive_mutex())
, resend_participant_info_event_(nullptr)
{
size_t max_unicast_locators = allocation.locators.max_unicast_locators;
size_t max_multicast_locators = allocation.locators.max_multicast_locators;
// Initialize participant_proxies_pool_.
// A ParticipantProxyData holds a Participant's information;
// during the PDP discovery phase, peers send this information to each other.
for (size_t i = 0; i < allocation.participants.initial; ++i)
{
participant_proxies_pool_.push_back(new ParticipantProxyData(allocation));
}
// A ReaderProxyData holds a reader's information
for (size_t i = 0; i < allocation.total_readers().initial; ++i)
{
reader_proxies_pool_.push_back(new ReaderProxyData(max_unicast_locators, max_multicast_locators,
allocation.data_limits, allocation.content_filter));
}
// A WriterProxyData holds a writer's information
for (size_t i = 0; i < allocation.total_writers().initial; ++i)
{
writer_proxies_pool_.push_back(new WriterProxyData(max_unicast_locators, max_multicast_locators,
allocation.data_limits));
}
}
It does three things:
1. initializes the ParticipantProxyData pool; a ParticipantProxyData holds all of a participant's information
2. initializes the ReaderProxyData pool; a ReaderProxyData holds all of a reader's information
3. initializes the WriterProxyData pool; a WriterProxyData holds all of a writer's information
Step 4: PDPSimple Initialization
bool PDPSimple::init(
RTPSParticipantImpl* part)
{
// The DATA(p) must be processed after EDP endpoint creation
if (!PDP::initPDP(part))
{
return false;
}
//INIT EDP
if (m_discovery.discovery_config.use_STATIC_EndpointDiscoveryProtocol)
{
mp_EDP = new EDPStatic(this, mp_RTPSParticipant);
if (!mp_EDP->initEDP(m_discovery))
{
EPROSIMA_LOG_ERROR(RTPS_PDP, "Endpoint discovery configuration failed");
delete mp_EDP;
mp_EDP = nullptr;
return false;
}
}
else if (m_discovery.discovery_config.use_SIMPLE_EndpointDiscoveryProtocol)
{
mp_EDP = new EDPSimple(this, mp_RTPSParticipant);
if (!mp_EDP->initEDP(m_discovery))
{
EPROSIMA_LOG_ERROR(RTPS_PDP, "Endpoint discovery configuration failed");
delete mp_EDP;
mp_EDP = nullptr;
return false;
}
}
else
{
EPROSIMA_LOG_WARNING(RTPS_PDP, "No EndpointDiscoveryProtocol defined");
return false;
}
return true;
}
This function mainly does two things:
1. calls PDP::initPDP
2. creates the EDP object
A different EDP object is used depending on configuration; the default is EDPSimple, EDP's default discovery protocol.
Among the four discovery protocols introduced earlier, EDPSimple corresponds to simple (the simple discovery protocol) and EDPStatic corresponds to static (the EDP discovery phase is skipped and the EDP information is configured manually).
Note that PDPSimple only ever uses one of these two, EDPSimple or EDPStatic.
We will mainly look at EDPSimple later, since the simple discovery protocol is what Fast DDS normally uses; the user-level switch is sketched below.
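A sketch of choosing the EDP flavor from user code. use_SIMPLE_EndpointDiscoveryProtocol and use_STATIC_EndpointDiscoveryProtocol are the flags tested above; static_edp_xml_config and the XML path are assumptions based on recent Fast DDS 2.x versions and worth verifying against yours:
#include <fastdds/dds/domain/qos/DomainParticipantQos.hpp>

void select_edp(eprosima::fastdds::dds::DomainParticipantQos& pqos)
{
    auto& disc = pqos.wire_protocol().builtin.discovery_config;
    disc.use_SIMPLE_EndpointDiscoveryProtocol = true;  // default: EDPSimple
    // Or skip EDP and describe the remote endpoints by hand:
    // disc.use_STATIC_EndpointDiscoveryProtocol = true;
    // disc.static_edp_xml_config("file://static_edp.xml");  // illustrative path
}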
Step 5: PDP's initPDP Function
bool PDP::initPDP(
RTPSParticipantImpl* part)
{
EPROSIMA_LOG_INFO(RTPS_PDP, "Beginning");
mp_RTPSParticipant = part;
m_discovery = mp_RTPSParticipant->getAttributes().builtin;
initial_announcements_ = m_discovery.discovery_config.initial_announcements;
//CREATE ENDPOINTS
if (!createPDPEndpoints())
{
return false;
}
//UPDATE METATRAFFIC.
update_builtin_locators();
mp_mutex->lock();
ParticipantProxyData* pdata = add_participant_proxy_data(mp_RTPSParticipant->getGuid(), false, nullptr);
mp_mutex->unlock();
if (pdata == nullptr)
{
return false;
}
initializeParticipantProxyData(pdata);
return true;
}
Initializing PDP mainly does two things:
1. calls createPDPEndpoints; the PDP endpoints are a StatelessReader and a StatelessWriter
2. creates a ParticipantProxyData object and initializes it
This object holds the local Participant's basic information. During the PDP discovery phase the local side sends PDP messages to the remote side; the ParticipantProxyData object is serialized into those PDP messages and sent to the remote side.
Step 6: createPDPEndpoints
Endpoints are writers and readers; this function creates the PDP writer and the PDP reader.
For PDP to discover participants, it needs a Writer to write information and a Reader to read it; only then can the two sides exchange information and discover each other's Participant.
bool PDPSimple::createPDPEndpoints()
{
EPROSIMA_LOG_INFO(RTPS_PDP, "Beginning");
const RTPSParticipantAttributes& pattr = mp_RTPSParticipant->getRTPSParticipantAttributes();
const RTPSParticipantAllocationAttributes& allocation = pattr.allocation;
const BuiltinAttributes& builtin_att = mp_builtin->m_att;
auto endpoints = new fastdds::rtps::SimplePDPEndpoints();
builtin_endpoints_.reset(endpoints);
//SPDP BUILTIN RTPSParticipant READER
// payloadMaxSize: maximum payload size of a CacheChange_t; ideally large enough for the largest data block. Default 500 bytes.
HistoryAttributes hatt;
hatt.payloadMaxSize = builtin_att.readerPayloadSize;
// History memory policy
hatt.memoryPolicy = builtin_att.readerHistoryMemoryPolicy;
// initialReservedCaches: initial number of reserved caches (default 500)
hatt.initialReservedCaches = 25;
if (allocation.participants.initial > 0)
{
// initial cache count
hatt.initialReservedCaches = (int32_t)allocation.participants.initial;
}
if (allocation.participants.maximum < std::numeric_limits<size_t>::max())
{
// maximum cache count
hatt.maximumReservedCaches = (int32_t)allocation.participants.maximum;
}
PoolConfig reader_pool_cfg = PoolConfig::from_history_attributes(hatt);
endpoints->reader.payload_pool_ = TopicPayloadPoolRegistry::get("DCPSParticipant", reader_pool_cfg);
endpoints->reader.payload_pool_->reserve_history(reader_pool_cfg, true);
endpoints->reader.history_.reset(new ReaderHistory(hatt));
ReaderAttributes ratt;
ratt.endpoint.multicastLocatorList = mp_builtin->m_metatrafficMulticastLocatorList;
ratt.endpoint.unicastLocatorList = mp_builtin->m_metatrafficUnicastLocatorList;
ratt.endpoint.external_unicast_locators = mp_builtin->m_att.metatraffic_external_unicast_locators;
ratt.endpoint.ignore_non_matching_locators = pattr.ignore_non_matching_locators;
ratt.endpoint.topicKind = WITH_KEY;
ratt.endpoint.durabilityKind = TRANSIENT_LOCAL;
ratt.endpoint.reliabilityKind = BEST_EFFORT;
ratt.matched_writers_allocation = allocation.participants;
mp_listener = new PDPListener(this);
RTPSReader* reader = nullptr;
if (mp_RTPSParticipant->createReader(&reader, ratt,
endpoints->reader.payload_pool_, endpoints->reader.history_.get(),
mp_listener, c_EntityId_SPDPReader, true, false))
{
endpoints->reader.reader_ = dynamic_cast<StatelessReader*>(reader);
#if HAVE_SECURITY
mp_RTPSParticipant->set_endpoint_rtps_protection_supports(reader, false);
#endif // if HAVE_SECURITY
}
else
{
EPROSIMA_LOG_ERROR(RTPS_PDP, "SimplePDP Reader creation failed");
delete mp_listener;
mp_listener = nullptr;
endpoints->reader.release();
return false;
}
//SPDP BUILTIN RTPSParticipant WRITER
hatt.payloadMaxSize = mp_builtin->m_att.writerPayloadSize;
hatt.initialReservedCaches = 1;
hatt.maximumReservedCaches = 1;
hatt.memoryPolicy = mp_builtin->m_att.writerHistoryMemoryPolicy;
PoolConfig writer_pool_cfg = PoolConfig::from_history_attributes(hatt);
endpoints->writer.payload_pool_ = TopicPayloadPoolRegistry::get("DCPSParticipant", writer_pool_cfg);
endpoints->writer.payload_pool_->reserve_history(writer_pool_cfg, false);
endpoints->writer.history_.reset(new WriterHistory(hatt));
WriterAttributes watt;
watt.endpoint.external_unicast_locators = mp_builtin->m_att.metatraffic_external_unicast_locators;
watt.endpoint.ignore_non_matching_locators = pattr.ignore_non_matching_locators;
watt.endpoint.endpointKind = WRITER;
watt.endpoint.durabilityKind = TRANSIENT_LOCAL;
watt.endpoint.reliabilityKind = BEST_EFFORT;
watt.endpoint.topicKind = WITH_KEY;
watt.endpoint.remoteLocatorList = m_discovery.initialPeersList;
watt.matched_readers_allocation = allocation.participants;
if (pattr.throughputController.bytesPerPeriod != UINT32_MAX && pattr.throughputController.periodMillisecs != 0)
{
watt.mode = ASYNCHRONOUS_WRITER;
}
RTPSWriter* wout = nullptr;
if (mp_RTPSParticipant->createWriter(&wout, watt, endpoints->writer.payload_pool_, endpoints->writer.history_.get(),
nullptr,
c_EntityId_SPDPWriter, true))
{
endpoints->writer.writer_ = dynamic_cast<StatelessWriter*>(wout);
#if HAVE_SECURITY
mp_RTPSParticipant->set_endpoint_rtps_protection_supports(wout, false);
#endif // if HAVE_SECURITY
if (endpoints->writer.writer_ != nullptr)
{
const NetworkFactory& network = mp_RTPSParticipant->network_factory();
LocatorList_t fixed_locators;
Locator_t local_locator;
for (const Locator_t& loc : mp_builtin->m_initialPeersList)
{
if (network.transform_remote_locator(loc, local_locator))
{
fixed_locators.push_back(local_locator);
}
}
endpoints->writer.writer_->set_fixed_locators(fixed_locators);
}
}
else
{
EPROSIMA_LOG_ERROR(RTPS_PDP, "SimplePDP Writer creation failed");
endpoints->writer.release();
return false;
}
EPROSIMA_LOG_INFO(RTPS_PDP, "SPDP Endpoints creation finished");
return true;
}
This function mainly does two things:
1. calls RTPSParticipantImpl's createReader to create the PDP reader, a stateless reader
2. calls RTPSParticipantImpl's createWriter to create the PDP writer, a stateless writer
fixed_locators is a rather important parameter here: the locators in m_initialPeersList are transformed and stored into fixed_locators.
initialPeers means that during the PDP discovery phase, besides listening on multicast, unicast listening can also be configured; initialPeersList holds remote IP addresses and port numbers, which can then be used directly. Filling it in is sketched below.
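A minimal sketch of adding an entry to initialPeersList from user code (Fast DDS 2.x API; the address and port are illustrative):
#include <fastdds/dds/domain/qos/DomainParticipantQos.hpp>
#include <fastrtps/utils/IPLocator.h>

using namespace eprosima::fastrtps::rtps;

void add_initial_peer(eprosima::fastdds::dds::DomainParticipantQos& pqos)
{
    Locator_t peer;
    IPLocator::setIPv4(peer, "192.168.1.10");  // remote host (illustrative)
    peer.port = 7412;                          // its metatraffic unicast port (illustrative)
    pqos.wire_protocol().builtin.initialPeersList.push_back(peer);
}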
Next, let's look at how RTPSParticipantImpl::createReader creates the PDP reader.
Step 7: createReader
bool RTPSParticipantImpl::createReader(
RTPSReader** ReaderOut,
ReaderAttributes& param,
const std::shared_ptr<IPayloadPool>& payload_pool,
ReaderHistory* hist,
ReaderListener* listen,
const EntityId_t& entityId,
bool isBuiltin,
bool enable)
{
if (!payload_pool)
{
EPROSIMA_LOG_ERROR(RTPS_PARTICIPANT, "Trying to create reader with null payload pool");
return false;
}
auto callback = [hist, listen, &payload_pool, this]
(const GUID_t& guid, ReaderAttributes& param, IPersistenceService* persistence,
bool is_reliable) -> RTPSReader*
{
if (is_reliable)
{
if (persistence != nullptr)
{
return new StatefulPersistentReader(this, guid, param, payload_pool, hist, listen, persistence);
}
else
{
return new StatefulReader(this, guid, param, payload_pool, hist, listen);
}
}
else
{
if (persistence != nullptr)
{
return new StatelessPersistentReader(this, guid, param, payload_pool, hist, listen,
persistence);
}
else
{
return new StatelessReader(this, guid, param, payload_pool, hist, listen);
}
}
};
return create_reader(ReaderOut, param, entityId, isBuiltin, enable, callback);
}
In the code above we see two kinds of reader:
StatefulReader: a stateful reader, used when the endpoint is configured RELIABLE.
StatelessReader: a stateless reader, used when the endpoint is configured BEST_EFFORT.
The PDP phase always creates StatelessReaders; a StatelessReader does not keep track of received messages and has no message queue.
The EDP phase always creates StatefulReaders; a StatefulReader keeps track of received messages and sends the remote writer acknowledgements for them.
A StatelessReader normally pairs with a StatelessWriter, and a StatefulReader with a StatefulWriter.
Step 8: create_reader
template <typename Functor>
bool RTPSParticipantImpl::create_reader(
RTPSReader** reader_out,
ReaderAttributes& param,
const EntityId_t& entity_id,
bool is_builtin,
bool enable,
const Functor& callback)
{
std::string type = (param.endpoint.reliabilityKind == RELIABLE) ? "RELIABLE" : "BEST_EFFORT";
EPROSIMA_LOG_INFO(RTPS_PARTICIPANT, "Creating reader of type " << type);
EntityId_t entId;
// Obtain the entity id
if (!preprocess_endpoint_attributes<READER, 0x04, 0x07>(entity_id, IdCounter, param.endpoint, entId))
{
return false;
}
if (existsEntityId(entId, READER))
{
EPROSIMA_LOG_ERROR(RTPS_PARTICIPANT,
"A reader with the same entityId already exists in this RTPSParticipant");
return false;
}
// Special case for DiscoveryProtocol::BACKUP, which abuses persistence guid
GUID_t former_persistence_guid = param.endpoint.persistence_guid;
if (param.endpoint.persistence_guid == c_Guid_Unknown)
{
if (m_persistence_guid != c_Guid_Unknown)
{
// Generate persistence guid from participant persistence guid
param.endpoint.persistence_guid = GUID_t(
m_persistence_guid.guidPrefix,
entity_id);
}
}
// Get persistence service
IPersistenceService* persistence = nullptr;
if (!get_persistence_service(is_builtin, param.endpoint, persistence))
{
return false;
}
// Check for unique_network_flows feature
bool request_unique_flows = false;
uint16_t initial_port = 0;
uint16_t final_port = 0;
// Always returns true
if (!get_unique_flows_parameters(m_att, param.endpoint, request_unique_flows, initial_port, final_port))
{
// unreachable
return false;
}
// Assign ports to the endpoint's locators
normalize_endpoint_locators(param.endpoint);
RTPSReader* SReader = nullptr;
GUID_t guid(m_guid.guidPrefix, entId);
// Here a StatelessReader is created
SReader = callback(guid, param, persistence, param.endpoint.reliabilityKind == RELIABLE);
// restore attributes
param.endpoint.persistence_guid = former_persistence_guid;
if (SReader == nullptr)
{
return false;
}
// ... some code omitted ...
if (param.endpoint.reliabilityKind == RELIABLE)
{
// Send resources are only created in the RELIABLE case
createSendResources(SReader);
}
if (is_builtin)
{
// Set the TrustedWriter
SReader->setTrustedWriter(TrustedWriter(SReader->getGuid().entityId));
}
if (enable)
{
if (!createAndAssociateReceiverswithEndpoint(SReader, request_unique_flows, initial_port, final_port))
{
delete(SReader);
return false;
}
}
{
std::lock_guard<shared_mutex> _(endpoints_list_mutex);
m_allReaderList.push_back(SReader);
if (!is_builtin)
{
m_userReaderList.push_back(SReader);
}
}
*reader_out = SReader;
// ... some code omitted ...
return true;
}
Note this fragment of the code above:
if (param.endpoint.reliabilityKind == RELIABLE)
{
// Send resources are only created in the RELIABLE case
createSendResources(SReader);
}
This says that if the endpoint is RELIABLE, sending resources must be created. Why would a reader need sending resources?
This touches on how DDS achieves reliable transmission: DDS relies on returning acknowledgements. When the reader receives a message it sends an acknowledgement back to the writer, and if the writer receives no acknowledgement it retransmits the message. To send these acknowledgements, the reader needs a sending socket of its own.
The RELIABLE mechanism Fast DDS actually implements is more complex than this; later chapters describe it in detail and accurately.
The PDP phase always creates StatelessReaders, which are BEST_EFFORT and have no retransmission mechanism, so no send resources are created here.
This function mainly does the following:
1. assigns the reader its guid and, if present, its persistence_guid
2. creates the RTPSReader object (a StatelessReader or a StatefulReader)
Here enable is false, so createAndAssociateReceiverswithEndpoint is not called. What does enable mean? enable == true means the reader starts receiving messages immediately, i.e. the whole PDP discovery module starts working right away. Normally enable is set to false, and the action of starting the reader is performed later by another function.
3. calls RTPSParticipantImpl::createAndAssociateReceiverswithEndpoint, which creates the ReceiverResources and associates their MessageReceiver with the RTPSReader
Step 9: new StatelessReader is straightforward (it initializes the parent class RTPSReader and the CacheChangePool), so no code is shown.
Step 10: createWriter
bool RTPSParticipantImpl::createWriter(
RTPSWriter** WriterOut,
WriterAttributes& param,
const std::shared_ptr<IPayloadPool>& payload_pool,
WriterHistory* hist,
WriterListener* listen,
const EntityId_t& entityId,
bool isBuiltin)
{
if (!payload_pool)
{
EPROSIMA_LOG_ERROR(RTPS_PARTICIPANT, "Trying to create writer with null payload pool");
return false;
}
auto callback = [hist, listen, &payload_pool, this]
(const GUID_t& guid, WriterAttributes& param, fastdds::rtps::FlowController* flow_controller,
IPersistenceService* persistence, bool is_reliable) -> RTPSWriter*
{
if (is_reliable)
{
if (persistence != nullptr)
{
return new StatefulPersistentWriter(this, guid, param, payload_pool, flow_controller,
hist, listen, persistence);
}
else
{
return new StatefulWriter(this, guid, param, payload_pool, flow_controller,
hist, listen);
}
}
else
{
if (persistence != nullptr)
{
return new StatelessPersistentWriter(this, guid, param, payload_pool, flow_controller,
hist, listen, persistence);
}
else
{
return new StatelessWriter(this, guid, param, payload_pool, flow_controller,
hist, listen);
}
}
};
return create_writer(WriterOut, param, entityId, isBuiltin, callback);
}
In the code above we see two kinds of writer:
StatefulWriter: a stateful writer, used when configured RELIABLE. It further splits into StatefulWriter and StatefulPersistentWriter; a StatefulPersistentWriter additionally stores messages in the file system.
StatelessWriter: a stateless writer, used when configured BEST_EFFORT. It further splits into StatelessWriter and StatelessPersistentWriter; a StatelessPersistentWriter additionally stores messages in the file system.
The PDP phase always creates StatelessWriters, and the EDP phase StatefulWriters. Enabling the persistent variants is sketched below.
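For the persistent variants, persistence is switched on through endpoint properties rather than a dedicated API. A sketch using the documented SQLite persistence plugin property names (the file name and GUID value are illustrative):
#include <fastdds/dds/publisher/qos/DataWriterQos.hpp>

void enable_writer_persistence(eprosima::fastdds::dds::DataWriterQos& wqos)
{
    wqos.durability().kind = eprosima::fastdds::dds::TRANSIENT_DURABILITY_QOS;
    auto& props = wqos.properties().properties();
    props.emplace_back("dds.persistence.plugin", "builtin.SQLITE3");
    props.emplace_back("dds.persistence.sqlite3.filename", "writer_persistence.db");
    // A stable GUID lets the writer recover its history after a restart:
    props.emplace_back("dds.persistence.guid", "77.72.69.74.65.72.5f.70.65.72.73.5f|67.75.69.64");
}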
Step 11: create_writer
bool RTPSParticipantImpl::create_writer(
RTPSWriter** writer_out,
WriterAttributes& param,
const EntityId_t& entity_id,
bool is_builtin,
const Functor& callback)
{
std::string type = (param.endpoint.reliabilityKind == RELIABLE) ? "RELIABLE" : "BEST_EFFORT";
EPROSIMA_LOG_INFO(RTPS_PARTICIPANT, "Creating writer of type " << type);
EntityId_t entId;
if (!preprocess_endpoint_attributes<WRITER, 0x03, 0x02>(entity_id, IdCounter, param.endpoint, entId))
{
return false;
}
if (existsEntityId(entId, WRITER))
{
EPROSIMA_LOG_ERROR(RTPS_PARTICIPANT,
"A writer with the same entityId already exists in this RTPSParticipant");
return false;
}
GUID_t guid(m_guid.guidPrefix, entId);
fastdds::rtps::FlowController* flow_controller = nullptr;
const char* flow_controller_name = param.flow_controller_name;
// Support of old flow controller style.
if (param.throughputController.bytesPerPeriod != UINT32_MAX && param.throughputController.periodMillisecs != 0)
{
flow_controller_name = guid_str_.c_str();
if (ASYNCHRONOUS_WRITER == param.mode)
{
fastdds::rtps::FlowControllerDescriptor old_descriptor;
old_descriptor.name = guid_str_.c_str();
old_descriptor.max_bytes_per_period = param.throughputController.bytesPerPeriod;
old_descriptor.period_ms = param.throughputController.periodMillisecs;
flow_controller_factory_.register_flow_controller(old_descriptor);
flow_controller = flow_controller_factory_.retrieve_flow_controller(guid_str_.c_str(), param);
}
else
{
EPROSIMA_LOG_WARNING(RTPS_PARTICIPANT,
"Throughput flow controller was configured while writer's publish mode is configured as synchronous." \
"Throughput flow controller configuration is not taken into account.");
}
}
if (m_att.throughputController.bytesPerPeriod != UINT32_MAX && m_att.throughputController.periodMillisecs != 0)
{
if (ASYNCHRONOUS_WRITER == param.mode && nullptr == flow_controller)
{
flow_controller_name = guid_str_.c_str();
flow_controller = flow_controller_factory_.retrieve_flow_controller(guid_str_, param);
}
else
{
EPROSIMA_LOG_WARNING(RTPS_PARTICIPANT,
"Throughput flow controller was configured while writer's publish mode is configured as synchronous." \
"Throughput flow controller configuration is not taken into account.");
}
}
// Retrieve flow controller.
// If not default flow controller, publish_mode must be asynchronously.
if (nullptr == flow_controller &&
(fastdds::rtps::FASTDDS_FLOW_CONTROLLER_DEFAULT == flow_controller_name ||
ASYNCHRONOUS_WRITER == param.mode))
{
flow_controller = flow_controller_factory_.retrieve_flow_controller(flow_controller_name, param);
}
if (nullptr == flow_controller)
{
if (fastdds::rtps::FASTDDS_FLOW_CONTROLLER_DEFAULT != flow_controller_name &&
SYNCHRONOUS_WRITER == param.mode)
{
EPROSIMA_LOG_ERROR(RTPS_PARTICIPANT, "Cannot use a flow controller in synchronously publication mode.");
}
else
{
EPROSIMA_LOG_ERROR(RTPS_PARTICIPANT, "Cannot create the writer. Couldn't find flow controller "
<< flow_controller_name << " for writer.");
}
return false;
}
// Check for unique_network_flows feature
if (nullptr != PropertyPolicyHelper::find_property(param.endpoint.properties, "fastdds.unique_network_flows"))
{
EPROSIMA_LOG_ERROR(RTPS_PARTICIPANT, "Unique network flows not supported on writers");
return false;
}
// Special case for DiscoveryProtocol::BACKUP, which abuses persistence guid
// Persistence service
GUID_t former_persistence_guid = param.endpoint.persistence_guid;
if (param.endpoint.persistence_guid == c_Guid_Unknown)
{
if (m_persistence_guid != c_Guid_Unknown)
{
// Generate persistence guid from participant persistence guid
param.endpoint.persistence_guid = GUID_t(
m_persistence_guid.guidPrefix,
entity_id);
}
}
// Get persistence service
IPersistenceService* persistence = nullptr;
if (!get_persistence_service(is_builtin, param.endpoint, persistence))
{
return false;
}
normalize_endpoint_locators(param.endpoint);
RTPSWriter* SWriter = nullptr;
SWriter = callback(guid, param, flow_controller, persistence, param.endpoint.reliabilityKind == RELIABLE);
// restore attributes
param.endpoint.persistence_guid = former_persistence_guid;
if (SWriter == nullptr)
{
return false;
}
if (!SWriter->is_pool_initialized())
{
delete(SWriter);
return false;
}
// ... some code omitted ...
createSendResources(SWriter);
if (param.endpoint.reliabilityKind == RELIABLE)
{
if (!createAndAssociateReceiverswithEndpoint(SWriter))
{
delete(SWriter);
return false;
}
}
{
std::lock_guard<shared_mutex> _(endpoints_list_mutex);
m_allWriterList.push_back(SWriter);
if (!is_builtin)
{
m_userWriterList.push_back(SWriter);
}
}
*writer_out = SWriter;
// ... some code omitted ...
return true;
}
1. FlowControllerFactory::retrieve_flow_controller creates or fetches a flow controller.
We covered flow controllers before; a flow controller mainly controls the sending policy.
First an attempt is made to create or fetch a flow controller from the WriterAttributes passed in; failing that, from RTPSParticipantImpl's attributes; failing that, the default flow controller is created or fetched.
Every RTPSWriter has exactly one flow_controller matched to it, while one flow_controller may have several RTPSWriters associated with it.
2. The callback is invoked with the flow_controller to create the writer, which is put into m_allWriterList, and additionally into m_userWriterList if it is not builtin.
3. The SenderResources are created.
4. For a StatefulWriter, createAndAssociateReceiverswithEndpoint is called, creating the ReceiverResources and associating their MessageReceiver with the endpoint. Here a StatelessWriter is created, so no ReceiverResource is created.
The register/retrieve functions follow.
Step 12: register_flow_controller
This registers a flow_controller built from the given parameters. All flow_controllers are managed by the FlowControllerFactory, hence register first, then retrieve and use.
See the previous chapter for an analysis of this function; the user-level counterpart is sketched below.
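From user code, the modern way to reach register_flow_controller and retrieve_flow_controller is to register a FlowControllerDescriptor on the participant and reference it by name from the writer. A sketch, assuming the Fast DDS 2.x flow-controller API (the controller name and limits are illustrative):
#include <memory>
#include <fastdds/dds/domain/qos/DomainParticipantQos.hpp>
#include <fastdds/dds/publisher/qos/DataWriterQos.hpp>
#include <fastdds/rtps/flowcontrol/FlowControllerDescriptor.hpp>

using namespace eprosima::fastdds::dds;

void configure_flow_controller(DomainParticipantQos& pqos, DataWriterQos& wqos)
{
    // Register a bandwidth-limited controller on the participant.
    auto fc = std::make_shared<eprosima::fastdds::rtps::FlowControllerDescriptor>();
    fc->name = "slow_link_controller";     // illustrative name
    fc->max_bytes_per_period = 16 * 1024;  // at most 16 KiB...
    fc->period_ms = 100;                   // ...every 100 ms
    pqos.flow_controllers().push_back(fc);

    // Throttled sending requires the asynchronous publish mode.
    wqos.publish_mode().kind = ASYNCHRONOUS_PUBLISH_MODE;
    wqos.publish_mode().flow_controller_name = "slow_link_controller";
}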
Step 13: FlowControllerFactory::retrieve_flow_controller
This fetches the writer's flow_controller by name; it is the writer's send-control component.
FlowController* FlowControllerFactory::retrieve_flow_controller(
const std::string& flow_controller_name,
const fastrtps::rtps::WriterAttributes& writer_attributes)
{
FlowController* returned_flow = nullptr;
// Detect it has to be returned a default flow_controller.
// the default flow controller
if (0 == flow_controller_name.compare(FASTDDS_FLOW_CONTROLLER_DEFAULT))
{
// synchronous writer
if (fastrtps::rtps::SYNCHRONOUS_WRITER == writer_attributes.mode)
{
// BEST_EFFORT
if (fastrtps::rtps::BEST_EFFORT == writer_attributes.endpoint.reliabilityKind)
{
// take the pure-sync flow controller from the map
returned_flow = flow_controllers_[pure_sync_flow_controller_name].get();
}
else
{
// otherwise the (RELIABLE) sync flow controller
returned_flow = flow_controllers_[sync_flow_controller_name].get();
}
}
else
{
returned_flow = flow_controllers_[async_flow_controller_name].get();
}
}
else
{
auto it = flow_controllers_.find(flow_controller_name);
if (flow_controllers_.end() != it)
{
returned_flow = it->second.get();
}
}
if (nullptr != returned_flow)
{
returned_flow->init();
}
else
{
EPROSIMA_LOG_ERROR(RTPS_PARTICIPANT,
"Cannot find FlowController " << flow_controller_name << ".");
}
return returned_flow;
}
typedef enum RTPSWriterPublishMode : octet
{
SYNCHRONOUS_WRITER,
ASYNCHRONOUS_WRITER
} RTPSWriterPublishMode;
A writer is either synchronous or asynchronous.
If the writer is synchronous and BEST_EFFORT, the corresponding flow controller is pure-sync, meaning it has no asynchronous send thread; every flow controller that is not pure-sync does have an asynchronous send thread.
The function above mainly does two things:
1. fetches the flow controller
2. initializes it
Flow controller initialization:
void init() override
{
initialize_async_thread();
}
template<typename PubMode = PublishMode>
typename std::enable_if<!std::is_same<FlowControllerPureSyncPublishMode, PubMode>::value, void>::type
initialize_async_thread()
{
bool expected = false;
if (async_mode.running.compare_exchange_strong(expected, true))
{
// Code for initializing the asynchronous thread.
async_mode.thread = std::thread(&FlowControllerImpl::run, this);
}
}
/*! This function is used when PublishMode = FlowControllerPureSyncPublishMode.
* In this case the async thread doesn't need to be initialized.
*/
template<typename PubMode = PublishMode>
typename std::enable_if<std::is_same<FlowControllerPureSyncPublishMode, PubMode>::value, void>::type
initialize_async_thread()
{
// Do nothing.
}
From this initialization function we can see that a flow controller whose mode is not FlowControllerPureSyncPublishMode creates a thread that polls a queue and sends messages asynchronously, while a FlowControllerPureSyncPublishMode flow controller creates no thread.
In short, a pure-sync flow controller creates no thread; an asynchronous flow controller creates one thread to send its asynchronous messages.
Step 14: new StatelessWriter, then createSendResources
new StatelessWriter (step 14) simply initializes the parent class RTPSWriter and the CacheChangePool, so no code is shown. create_writer then calls createSendResources, which leads into step 15:
bool RTPSParticipantImpl::createSendResources(
Endpoint* pend)
{
if (pend->m_att.remoteLocatorList.empty())
{
// Adds the default locators of every registered transport.
// If pend->m_att.remoteLocatorList is empty, the default locators (239.255.0.1 and its IPv6 counterpart) are added;
// if it is not empty, it is left unchanged
m_network_Factory.GetDefaultOutputLocators(pend->m_att.remoteLocatorList);
}
std::lock_guard<std::timed_mutex> guard(m_send_resources_mutex_);
//Output locators have been specified, create them
for (auto it = pend->m_att.remoteLocatorList.begin(); it != pend->m_att.remoteLocatorList.end(); ++it)
{
if (!m_network_Factory.build_send_resources(send_resource_list_, (*it)))
{
EPROSIMA_LOG_WARNING(RTPS_PARTICIPANT, "Cannot create send resource for endpoint remote locator (" <<
pend->getGuid() << ", " << (*it) << ")");
}
}
return true;
}
1. NetworkFactory::GetDefaultOutputLocators obtains the default locators.
2. For each locator (IP address and port number), NetworkFactory::build_send_resources is called to create the sending sockets.
Step 15: Creating Send Resources (build_send_resources)
bool NetworkFactory::build_send_resources(
SendResourceList& sender_resource_list,
const Locator_t& locator)
{
bool returned_value = false;
for (auto& transport : mRegisteredTransports)
{
returned_value |= transport->OpenOutputChannel(sender_resource_list, locator);
}
return returned_value;
}
OpenOutputChannel is called on each registered transport, creating the sending sockets and putting them into RTPSParticipantImpl's sender_resource_list.
Step 16: OpenOutputChannel
bool UDPTransportInterface::OpenOutputChannel(
SendResourceList& sender_resource_list,
const Locator& locator)
{
if (!IsLocatorSupported(locator))
{
return false;
}
std::vector<IPFinder::info_IP> locNames;
// Get the local IP addresses that do not yet have a sender_resource;
// addresses that already have one are filtered out
get_unknown_network_interfaces(sender_resource_list, locNames);
if (locNames.empty() && !first_time_open_output_channel_)
{
statistics_info_.add_entry(locator);
rescan_interfaces_.store(false);
return true;
}
try
{
uint16_t port = configuration()->m_output_udp_socket;
// If there is no whitelist, we can simply open a generic output socket
// and gain efficiency.
// the whitelist is empty
if (is_interface_whitelist_empty())
{
if (first_time_open_output_channel_)
{
first_time_open_output_channel_ = false;
// We add localhost output for multicast, so in case the network cable is unplugged, local
// participants keep receiving DATA(p) announcements
// Also in case that no network interfaces were found
// localhost multicast output, for the case where there is no network
try
{
// Create the socket bound to 0.0.0.0
eProsimaUDPSocket unicastSocket = OpenAndBindUnicastOutputSocket(GenerateAnyAddressEndpoint(
port), port);
getSocketPtr(unicastSocket)->set_option(ip::multicast::enable_loopback(true));
SetSocketOutboundInterface(unicastSocket, localhost_name());
sender_resource_list.emplace_back(
static_cast<SenderResource*>(new UDPSenderResource(*this, unicastSocket, false, true)));
}
catch (asio::system_error const& e)
{
(void)e;
EPROSIMA_LOG_WARNING(RTPS_MSG_OUT, "UDPTransport Error binding interface "
<< localhost_name() << " (skipping) with msg: " << e.what());
}
}
// Create sockets for outbounding multicast for the other found network interfaces.
// Create sender_resources for the other (external) IP addresses
if (!locNames.empty())
{
// Create other socket for outbounding rest of interfaces.
for (auto locIt = locNames.begin(); locIt != locNames.end(); ++locIt)
{
uint16_t new_port = 0;
try
{
eProsimaUDPSocket multicastSocket =
OpenAndBindUnicastOutputSocket(generate_endpoint((*locIt).name, new_port), new_port);
SetSocketOutboundInterface(multicastSocket, (*locIt).name);
sender_resource_list.emplace_back(
static_cast<SenderResource*>(new UDPSenderResource(*this, multicastSocket, true)));
}
catch (asio::system_error const& e)
{
(void)e;
EPROSIMA_LOG_WARNING(RTPS_MSG_OUT, "UDPTransport Error binding interface "
<< (*locIt).name << " (skipping) with msg: " << e.what());
}
}
}
}
else
{
// Get the IP addresses, including loopback, that do not yet have a sender_resource
get_unknown_network_interfaces(sender_resource_list, locNames, true);
for (const auto& infoIP : locNames)
{
if (is_interface_allowed(infoIP.name))
{
// Configure the socket
eProsimaUDPSocket unicastSocket =
OpenAndBindUnicastOutputSocket(generate_endpoint(infoIP.name, port), port);
SetSocketOutboundInterface(unicastSocket, infoIP.name);
if (first_time_open_output_channel_)
{
getSocketPtr(unicastSocket)->set_option(ip::multicast::enable_loopback(true));
first_time_open_output_channel_ = false;
}
sender_resource_list.emplace_back(
static_cast<SenderResource*>(new UDPSenderResource(*this, unicastSocket, false, true)));
}
}
}
}
catch (asio::system_error const& e)
{
(void)e;
/* TODO Que hacer?
EPROSIMA_LOG_ERROR(RTPS_MSG_OUT, "UDPTransport Error binding at port: (" << IPLocator::getPhysicalPort(locator) << ")"
<< " with msg: " << e.what());
for (auto& socket : mOutputSockets)
{
delete socket;
}
mOutputSockets.clear();
*/
return false;
}
statistics_info_.add_entry(locator);
rescan_interfaces_.store(false);
return true;
}
The code above creates sockets from the local addresses:
If there is no whitelist, sockets are created with 0.0.0.0 and the local addresses as source addresses.
If there is a whitelist, sockets are created with the intersection of the local addresses and the whitelist as source addresses; configuring a whitelist is sketched below.
All the sockets are kept in RTPSParticipantImpl's sender_resource_list.
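A minimal sketch of putting an interface on the whitelist, which drives the else branch above (Fast DDS 2.x transport API; the address is illustrative):
#include <memory>
#include <fastdds/dds/domain/qos/DomainParticipantQos.hpp>
#include <fastdds/rtps/transport/UDPv4TransportDescriptor.h>

void whitelist_interface(eprosima::fastdds::dds::DomainParticipantQos& pqos)
{
    auto udp = std::make_shared<eprosima::fastdds::rtps::UDPv4TransportDescriptor>();
    udp->interfaceWhiteList.push_back("192.168.1.5");  // only bind this interface
    pqos.transport().use_builtin_transports = false;   // replace the default transports
    pqos.transport().user_transports.push_back(udp);
}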
Step 18: new UDPSenderResource
(Step 17, OpenAndBindUnicastOutputSocket, is the socket-creating call that already appeared inside the code of step 16.)
UDPSenderResource(
UDPTransportInterface& transport,
eProsimaUDPSocket& socket,
bool only_multicast_purpose = false,
bool whitelisted = false)
: SenderResource(transport.kind())
, socket_(moveSocket(socket))
, only_multicast_purpose_(only_multicast_purpose)
, whitelisted_(whitelisted)
, transport_(transport)
{
// Implementation functions are bound to the right transport parameters
clean_up = [this, &transport]()
{
transport.CloseOutputChannel(socket_);
};
send_lambda_ = [this, &transport](
const fastrtps::rtps::octet* data,
uint32_t dataSize,
fastrtps::rtps::LocatorsIterator* destination_locators_begin,
fastrtps::rtps::LocatorsIterator* destination_locators_end,
const std::chrono::steady_clock::time_point& max_blocking_time_point) -> bool
{
return transport.send(data, dataSize, socket_, destination_locators_begin,
destination_locators_end, only_multicast_purpose_, whitelisted_,
max_blocking_time_point);
};
}
The UDPSenderResource constructor defines two lambda expressions: a send lambda (send_lambda_) responsible for sending messages, and a channel-closing lambda (clean_up) that closes the socket on teardown.
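The pattern here is worth a note: SenderResource type-erases the concrete transport behind std::function members, so RTPSParticipantImpl can keep a single homogeneous sender_resource_list regardless of transport. A self-contained sketch of the same idea, using illustrative stand-in types rather than the real Fast DDS classes:
#include <functional>
#include <iostream>
#include <memory>
#include <vector>

// Illustrative stand-ins for the real transport and socket types.
struct FakeSocket { int fd; };
struct FakeTransport
{
    bool send(const char* data, FakeSocket& s)
    {
        std::cout << "send via fd " << s.fd << ": " << data << std::endl;
        return true;
    }
    void close(FakeSocket& s) { std::cout << "close fd " << s.fd << std::endl; }
};

class SenderResourceSketch
{
public:
    SenderResourceSketch(FakeTransport& transport, FakeSocket socket)
        : socket_(socket)
    {
        // Bind transport-specific behavior into type-erased callables,
        // just as UDPSenderResource binds send_lambda_ and clean_up.
        send_ = [this, &transport](const char* data) { return transport.send(data, socket_); };
        clean_up_ = [this, &transport]() { transport.close(socket_); };
    }
    ~SenderResourceSketch() { clean_up_(); }
    bool send(const char* data) { return send_(data); }
private:
    FakeSocket socket_;
    std::function<bool(const char*)> send_;
    std::function<void()> clean_up_;
};

int main()
{
    FakeTransport udp;
    // Held by pointer, like the real sender_resource_list, so the
    // lambdas' captured `this` stays valid.
    std::vector<std::unique_ptr<SenderResourceSketch>> sender_resource_list;
    sender_resource_list.push_back(std::make_unique<SenderResourceSketch>(udp, FakeSocket{3}));
    sender_resource_list.front()->send("DATA(p)");
    return 0;
}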
2.4 Class Diagram
classDiagram
RTPSParticipantImpl *-- BuiltinProtocols
BuiltinProtocols *-- PDP
PDP *-- RTPSWriter
PDP *-- RTPSReader
BuiltinProtocols *-- WLP
PDP *-- EDP
EDP <|-- EDPSimple
class RTPSParticipantImpl {
+BuiltinProtocols mp_builtinProtocols
}
class BuiltinProtocols {
+PDP mp_PDP
+WLP mp_WLP
}
class PDP {
+EDP mp_EDP
}
class RTPSWriter {
}
class RTPSReader {
}
class EDPSimple {
}
In this class diagram, RTPSParticipant contains a BuiltinProtocols object. BuiltinProtocols manages the built-in protocols PDP, EDP, and WLP, while EDP itself is managed by PDP.
A PDP is normally configured with an RTPSWriter and an RTPSReader, used to send and read PDP messages.
2.5 Summary
This article gave an overview of the protocols that make up BuiltinProtocols and a detailed look at the components of PDP. The next article covers EDP.