In-Vehicle Messaging Middleware FastDDS Source Code Analysis (1): FastDDS Introduction and Usage
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (2): Creating the RtpsParticipant (Part 1)
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (3): Creating the RtpsParticipant (Part 2)
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (4): Creating the RtpsParticipant (Part 3)
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (5): BuiltinProtocols (Part 1)
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (6): BuiltinProtocols (Part 2): EDP
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (7): BuiltinProtocols (Part 3): WLP & TypeLookupManager
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (8): TimedEvent
In-Vehicle Messaging Middleware FastDDS Source Code Analysis (10): Sending the First PDP Message (Part 1)
FastDDS Source Code Analysis (12): Sending the First PDP Message (Part 3): Asynchronous Sending
FastDDS Source Code Analysis (13): Sending the First PDP Message: Cross-Process Sending
In the previous post we covered the first part of the processing that happens after a PDP message is received.
In this post we look at the participant information carried by that PDP message and at the PDP matching process.
1. The overall framework of PDP message processing
The rough flow of message processing:
1.1 Sequence diagram
sequenceDiagram
participant PDPListener
participant PDPSimple
participant EDPSimple
participant WLP
PDPListener ->> PDPSimple:1.assignRemoteEndpoints()
PDPSimple ->> PDPSimple:2.notifyAboveRemoteEndpoints()
PDPSimple ->> EDPSimple:3.assignRemoteEndpoints()
PDPSimple ->> WLP:4.assignRemoteEndpoints()
1. PDPSimple's assignRemoteEndpoints mainly does two things:
a. PDP matching for this message, plus the PDP-side processing
b. PDPSimple::notifyAboveRemoteEndpoints
2. PDPSimple::notifyAboveRemoteEndpoints handles the matching for EDP, WLP and the TypeLookupManager.
It calls, in turn:
a. EDP's assignRemoteEndpoints
b. WLP's assignRemoteEndpoints
c. TypeLookupManager's assign_remote_endpoints
3. Calling EDP's assignRemoteEndpoints matches the EDP endpoints of the remote participant.
4. Calling WLP's assignRemoteEndpoints matches the local WLP with the remote participant's WLP endpoints.
5. Calling TypeLookupManager's assign_remote_endpoints does the same for the type lookup service endpoints.
1.2 Source code
void PDPSimple::assignRemoteEndpoints(
ParticipantProxyData* pdata)
{
------
This part performs the PDP matching
------
//Inform EDP of new RTPSParticipant data:
notifyAboveRemoteEndpoints(*pdata);
------
}
The function above mainly does two things:
1. The PDP matching work
2. Calling notifyAboveRemoteEndpoints to perform the EDP, WLP and TypeLookupManager matching
void PDPSimple::notifyAboveRemoteEndpoints(
const ParticipantProxyData& pdata)
{
//Inform EDP of new RTPSParticipant data:
if (mp_EDP != nullptr)
{
mp_EDP->assignRemoteEndpoints(pdata);
}
if (mp_builtin->mp_WLP != nullptr)
{
mp_builtin->mp_WLP->assignRemoteEndpoints(pdata);
}
if (mp_builtin->tlm_ != nullptr)
{
mp_builtin->tlm_->assign_remote_endpoints(pdata);
}
}
Calling EDP's assignRemoteEndpoints:
By the time a PDP message has been received, the remote participant has already been discovered and the PDP endpoints have already been paired with each other; calling EDP's assignRemoteEndpoints then matches the EDP endpoints of that remote participant.
Calling WLP's assignRemoteEndpoints:
The local WLP is matched with the remote participant's WLP endpoints.
Calling tlm_'s assign_remote_endpoints, where tlm_ is the TypeLookupManager.
1.3 Flow chart:
graph TD
A(PDP assignRemoteEndpoints)-->B(EDP assignRemoteEndpoints)
B-->C(WLP assignRemoteEndpoints)
C-->D(TypeLookupManager assign_remote_endpoints)
The flow chart makes this even clearer:
1. PDP's assignRemoteEndpoints matches the writer and reader information of the PDP endpoints
2. EDP's assignRemoteEndpoints matches the writer and reader information of the EDP endpoints
3. WLP's assignRemoteEndpoints matches the writer and reader information of the WLP endpoints
4. TypeLookupManager's assign_remote_endpoints matches the writer and reader information of the type lookup endpoints
In the rest of this post we analyze PDP's assignRemoteEndpoints in detail; the EDP, WLP and TypeLookupManager counterparts will be analyzed in the next few posts.
2. PDP's assignRemoteEndpoints
2.1 Sequence diagram
sequenceDiagram
participant PDPListener
participant PDPSimple
participant StatelessReader
participant StatelessWriter
PDPListener ->> PDPSimple:1.assignRemoteEndpoints()
PDPSimple ->> StatelessReader:2.matched_writer_add()
PDPSimple ->> StatelessWriter:3.matched_reader_add()
StatelessWriter ->> StatelessWriter:4.update_reader_info()
PDPSimple ->> StatelessWriter:5.unsent_changes_reset
1. PDPSimple's assignRemoteEndpoints mainly does the following:
a. StatelessReader's matched_writer_add, see 2
b. StatelessWriter's matched_reader_add, see 3
c. unsent_changes_reset, see 5
d. notifyAboveRemoteEndpoints, covered in section 1
2. StatelessReader's matched_writer_add first checks whether this writer's information has been stored before; if so, it updates the writer's information, otherwise it creates a new entry and adds it to the list.
3. StatelessWriter's matched_reader_add stores the reader's information in a list. matched_reader_add does more than matched_writer_add, because adding a reader means allocating resources such as SenderResources for it. It mainly does two things:
a. First check whether this reader's information has been stored before; if so, update it, otherwise create a new entry and add it to the list.
b. StatelessWriter::update_reader_info
4. StatelessWriter::update_reader_info updates the reader information.
5. StatelessWriter::unsent_changes_reset sends the message; this is described in detail in In-Vehicle Messaging Middleware FastDDS Source Code Analysis (10): Sending the First PDP Message (Part 1).
Step 1
void PDPSimple::assignRemoteEndpoints(
ParticipantProxyData* pdata)
{
EPROSIMA_LOG_INFO(RTPS_PDP, "For RTPSParticipant: " << pdata->m_guid.guidPrefix);
auto endpoints = static_cast<fastdds::rtps::SimplePDPEndpoints*>(builtin_endpoints_.get());
const NetworkFactory& network = mp_RTPSParticipant->network_factory();
uint32_t endp = pdata->m_availableBuiltinEndpoints;
uint32_t auxendp = endp;
bool use_multicast_locators = !mp_RTPSParticipant->getAttributes().builtin.avoid_builtin_multicast ||
pdata->metatraffic_locators.unicast.empty();
//Flag bit: does the remote participant have a StatelessWriter (SPDP announcer)?
auxendp &= DISC_BUILTIN_ENDPOINT_PARTICIPANT_ANNOUNCER;
if (auxendp != 0)
{
auto temp_writer_data = get_temporary_writer_proxies_pool().get();
temp_writer_data->clear();
temp_writer_data->guid().guidPrefix = pdata->m_guid.guidPrefix;
temp_writer_data->guid().entityId = c_EntityId_SPDPWriter;
temp_writer_data->persistence_guid(pdata->get_persistence_guid());
temp_writer_data->set_persistence_entity_id(c_EntityId_SPDPWriter);
temp_writer_data->set_remote_locators(pdata->metatraffic_locators, network, use_multicast_locators);
temp_writer_data->m_qos.m_reliability.kind = RELIABLE_RELIABILITY_QOS;
temp_writer_data->m_qos.m_durability.kind = TRANSIENT_LOCAL_DURABILITY_QOS;
endpoints->reader.reader_->matched_writer_add(*temp_writer_data);
}
auxendp = endp;
//Flag bit: does the remote participant have a StatelessReader (SPDP detector)?
auxendp &= DISC_BUILTIN_ENDPOINT_PARTICIPANT_DETECTOR;
if (auxendp != 0)
{
auto temp_reader_data = get_temporary_reader_proxies_pool().get();
temp_reader_data->clear();
temp_reader_data->m_expectsInlineQos = false;
temp_reader_data->guid().guidPrefix = pdata->m_guid.guidPrefix;
temp_reader_data->guid().entityId = c_EntityId_SPDPReader;
temp_reader_data->set_remote_locators(pdata->metatraffic_locators, network, use_multicast_locators);
temp_reader_data->m_qos.m_reliability.kind = BEST_EFFORT_RELIABILITY_QOS;
temp_reader_data->m_qos.m_durability.kind = TRANSIENT_LOCAL_DURABILITY_QOS;
endpoints->writer.writer_->matched_reader_add(*temp_reader_data);
StatelessWriter* pW = endpoints->writer.writer_;
if (pW != nullptr)
{
//send the PDP message
pW->unsent_changes_reset();
}
else
{
EPROSIMA_LOG_ERROR(RTPS_PDP, "Using PDPSimple protocol with a reliable writer");
}
}
#if HAVE_SECURITY
// Validate remote participant
mp_RTPSParticipant->security_manager().discovered_participant(*pdata);
#else
//Inform EDP of new RTPSParticipant data:
notifyAboveRemoteEndpoints(*pdata);
#endif // if HAVE_SECURITY
}
The function above does four things:
1. Calls StatelessReader::matched_writer_add to add the writer information of the remote participant to the matched_writers_ list.
It first checks whether the remote participant has the corresponding writer, i.e. the PDP StatelessWriter; if so, that writer is added to the list:
auxendp &= DISC_BUILTIN_ENDPOINT_PARTICIPANT_ANNOUNCER;
Keep an eye on m_availableBuiltinEndpoints: it corresponds to the content of the PDP message (see FastDDS Source Code Analysis (14): Receiving PDP Messages (Part 1), PID_BUILTIN_ENDPOINT_SET).
See step 2.
2. Calls StatelessWriter::matched_reader_add to add the reader information of the remote participant to the matched_remote_readers_ list.
It first checks whether the remote participant has the corresponding reader, i.e. the PDP StatelessReader; if so, that reader is added to the list.
See step 3.
3. Calls StatelessWriter's unsent_changes_reset to send the PDP message.
4. Calls notifyAboveRemoteEndpoints to perform the EDP, WLP and related matching.
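As an aside, here is a minimal sketch of how that bitmask drives the two matching branches above. The two flag values follow the RTPS specification's BuiltinEndpointSet_t; they are quoted from the spec, not copied from the Fast DDS headers, so treat the exact constants as an assumption:
#include <cstdint>

// Bits of the builtin-endpoint mask carried in PID_BUILTIN_ENDPOINT_SET
// (values per the RTPS spec; assumed, not taken from Fast DDS headers).
constexpr uint32_t DISC_BUILTIN_ENDPOINT_PARTICIPANT_ANNOUNCER = 0x00000001; // remote SPDP writer exists
constexpr uint32_t DISC_BUILTIN_ENDPOINT_PARTICIPANT_DETECTOR  = 0x00000002; // remote SPDP reader exists

void match_pdp_endpoints(uint32_t available_builtin_endpoints)
{
    if (available_builtin_endpoints & DISC_BUILTIN_ENDPOINT_PARTICIPANT_ANNOUNCER)
    {
        // remote participant has an SPDP writer -> match it against our StatelessReader
    }
    if (available_builtin_endpoints & DISC_BUILTIN_ENDPOINT_PARTICIPANT_DETECTOR)
    {
        // remote participant has an SPDP reader -> match it against our StatelessWriter
    }
}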
Step 2:
bool StatelessReader::matched_writer_add(
const WriterProxyData& wdata)
{
ReaderListener* listener = nullptr;
{
std::unique_lock<RecursiveTimedMutex> guard(mp_mutex);
listener = mp_listener;
// If this writer already exists, update its information
for (RemoteWriterInfo_t& writer : matched_writers_)
{
if (writer.guid == wdata.guid())
{
EPROSIMA_LOG_INFO(RTPS_READER, "Attempting to add existing writer, updating information");
if (EXCLUSIVE_OWNERSHIP_QOS == m_att.ownershipKind &&
writer.ownership_strength != wdata.m_qos.m_ownershipStrength.value)
{
mp_history->writer_update_its_ownership_strength_nts(
writer.guid, wdata.m_qos.m_ownershipStrength.value);
}
writer.ownership_strength = wdata.m_qos.m_ownershipStrength.value;
if (nullptr != listener)
{
// call the listener without the lock taken
guard.unlock();
listener->on_writer_discovery(this, WriterDiscoveryInfo::CHANGED_QOS_WRITER, wdata.guid(),
&wdata);
}
return false;
}
}
//Is the writer in the same process?
bool is_same_process = RTPSDomainImpl::should_intraprocess_between(m_guid, wdata.guid());
//Is this a data-sharing (inter-process, shared-memory) writer?
bool is_datasharing = !is_same_process && is_datasharing_compatible_with(wdata);
RemoteWriterInfo_t info;
info.guid = wdata.guid();
info.persistence_guid = wdata.persistence_guid();
info.has_manual_topic_liveliness = (MANUAL_BY_TOPIC_LIVELINESS_QOS == wdata.m_qos.m_liveliness.kind);
info.is_datasharing = is_datasharing;
info.ownership_strength = wdata.m_qos.m_ownershipStrength.value;
//If communicating through data sharing
if (is_datasharing)
{
//register with the data-sharing listener
if (datasharing_listener_->add_datasharing_writer(wdata.guid(),
m_att.durabilityKind == VOLATILE,
mp_history->m_att.maximumReservedCaches))
{
EPROSIMA_LOG_INFO(RTPS_READER, "Writer Proxy " << wdata.guid() << " added to " << this->m_guid.entityId
<< " with data sharing");
}
else
{
EPROSIMA_LOG_ERROR(RTPS_READER, "Failed to add Writer Proxy " << wdata.guid()
<< " to " << this->m_guid.entityId
<< " with data sharing.");
return false;
}
}
//store the writer in matched_writers_
if (matched_writers_.emplace_back(info) == nullptr)
{
------
return false;
}
------
add_persistence_guid(info.guid, info.persistence_guid);
m_acceptMessagesFromUnkownWriters = false;
// Intraprocess manages durability itself
if (is_datasharing && !is_same_process && m_att.durabilityKind != VOLATILE)
{
// simulate a notification to force reading of transient changes
// this has to be done after the writer is added to the matched_writers or the processing may fail
datasharing_listener_->notify(false);
}
}
//WLP-related: liveliness tracking for this writer
if (liveliness_lease_duration_ < c_TimeInfinite)
{
auto wlp = mp_RTPSParticipant->wlp();
if ( wlp != nullptr)
{
wlp->sub_liveliness_manager_->add_writer(
wdata.guid(),
liveliness_kind_,
liveliness_lease_duration_);
}
else
{
EPROSIMA_LOG_ERROR(RTPS_LIVELINESS, "Finite liveliness lease duration but WLP not enabled");
}
}
------
return true;
}
This function mainly does three things:
1. Check whether matched_writers_ already contains this writer; if not, store it, and if it does, update the writer's information. matched_writers_ is a list of RemoteWriterInfo_t (sketched below).
2. If the writer is a data-sharing writer, register it with the data-sharing listener and simulate a notification, so that transient changes the writer has already published get read and processed.
3. WLP-related work, so that the WLP can manage the liveliness of this writer.
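For reference, this is roughly what a RemoteWriterInfo_t entry looks like, reconstructed from the fields the code above assigns; the real definition is nested inside StatelessReader and may hold more members:
#include <cstdint>
#include <fastdds/rtps/common/Guid.h>

using eprosima::fastrtps::rtps::GUID_t;

// Reconstructed sketch, not the actual Fast DDS definition.
struct RemoteWriterInfo_t
{
    GUID_t guid;                      // GUID of the remote writer
    GUID_t persistence_guid;          // GUID used for persistence matching
    bool has_manual_topic_liveliness; // liveliness kind is MANUAL_BY_TOPIC_LIVELINESS_QOS
    bool is_datasharing;              // matched through shared-memory data sharing
    uint32_t ownership_strength;      // used for EXCLUSIVE_OWNERSHIP_QOS arbitration
};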
Step 3:
bool StatelessWriter::matched_reader_add(
const ReaderProxyData& data)
{
using fastdds::rtps::ExternalLocatorsProcessor::filter_remote_locators;
std::unique_lock<RecursiveTimedMutex> guard(mp_mutex);
std::unique_lock<LocatorSelectorSender> locator_selector_guard(locator_selector_);
assert(data.guid() != c_Guid_Unknown);
// Iterate matched_local_readers_, then matched_datasharing_readers_, then matched_remote_readers_
// If the reader is already known, update its information
// ReaderLocator
if (for_matched_readers(matched_local_readers_, matched_datasharing_readers_, matched_remote_readers_,
[this, &data](ReaderLocator& reader)
{
if (reader.remote_guid() == data.guid())
{
EPROSIMA_LOG_WARNING(RTPS_WRITER, "Attempting to add existing reader, updating information.");
// update the information
if (reader.update(data.remote_locators().unicast,
data.remote_locators().multicast,
data.m_expectsInlineQos))
{
filter_remote_locators(*reader.general_locator_selector_entry(),
m_att.external_unicast_locators, m_att.ignore_non_matching_locators);
// recompute the cached locator selection
update_reader_info(true);
}
return true;
}
return false;
}
))
{
if (nullptr != mp_listener)
{
// call the listener without locks taken
locator_selector_guard.unlock();
guard.unlock();
mp_listener->on_reader_discovery(this, ReaderDiscoveryInfo::CHANGED_QOS_READER, data.guid(), &data);
}
return false;
}
// Get a locator from the inactive pool (or create a new one if necessary and allowed)
std::unique_ptr<ReaderLocator> new_reader;
// allocate new_reader from the pool
if (matched_readers_pool_.empty())
{
size_t max_readers = matched_readers_pool_.max_size();
if (getMatchedReadersSize() + matched_readers_pool_.size() < max_readers)
{
const RemoteLocatorsAllocationAttributes& loc_alloc =
mp_RTPSParticipant->getRTPSParticipantAttributes().allocation.locators;
//set the locator count limits for new_reader
new_reader.reset(new ReaderLocator(
this,
loc_alloc.max_unicast_locators,
loc_alloc.max_multicast_locators));
}
------
}
else
{
new_reader = std::move(matched_readers_pool_.back());
matched_readers_pool_.pop_back();
}
// Add info of new datareader.
// initialize the ReaderLocator
new_reader->start(data.guid(),
data.remote_locators().unicast,
data.remote_locators().multicast,
data.m_expectsInlineQos,
is_datasharing_compatible_with(data));
//filter the locators
filter_remote_locators(*new_reader->general_locator_selector_entry(),
m_att.external_unicast_locators, m_att.ignore_non_matching_locators);
//add the entry to the writer's locator_selector_
locator_selector_.locator_selector.add_entry(new_reader->general_locator_selector_entry());
if (new_reader->is_local_reader())
{
matched_local_readers_.push_back(std::move(new_reader));
EPROSIMA_LOG_INFO(RTPS_WRITER, "Adding reader " << data.guid() << " to " << this->m_guid.entityId
<< " as local reader");
}
else if (new_reader->is_datasharing_reader())
{
matched_datasharing_readers_.push_back(std::move(new_reader));
EPROSIMA_LOG_INFO(RTPS_WRITER, "Adding reader " << data.guid() << " to " << this->m_guid.entityId
<< " as data sharing");
}
else
{
matched_remote_readers_.push_back(std::move(new_reader));
EPROSIMA_LOG_INFO(RTPS_WRITER, "Adding reader " << data.guid() << " to " << this->m_guid.entityId
<< " as remote reader");
}
update_reader_info(true);
if (nullptr != mp_listener)
{
// call the listener without locks taken
locator_selector_guard.unlock();
guard.unlock();
mp_listener->on_reader_discovery(this, ReaderDiscoveryInfo::DISCOVERED_READER, data.guid(), &data);
}
return true;
}
The matched readers fall into three categories:
matched_local_readers_
matched_datasharing_readers_
matched_remote_readers_
All three lists store ReaderLocator objects.
Note that matched_writers_ is not categorized like this. The readers are categorized because sending a message to a reader requires different operations: intra-process, inter-process (data sharing) and cross-host communication have to be distinguished, as the sketch after this list shows.
1. Check whether the already matched readers (matched_local_readers_, matched_datasharing_readers_, matched_remote_readers_) contain this reader; if so, update the reader's information, otherwise create a new one.
2. Based on the ReaderProxyData, create a new ReaderLocator and store it in one of the three lists (matched_local_readers_, matched_datasharing_readers_, matched_remote_readers_).
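A conceptual summary of the three delivery paths (comments only, not literal Fast DDS code):
// Why three lists? Each category takes a different path when a change is sent:
//
//   matched_local_readers_        same process -> direct call into the reader,
//                                 no serialization at all
//   matched_datasharing_readers_  same host    -> the reader is notified and reads
//                                 the change from the writer's shared-memory history
//   matched_remote_readers_       other hosts  -> the change is serialized into an
//                                 RTPS message and sent through the network transports
The ReaderLocator stored in these lists looks like this: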
ReaderLocator::ReaderLocator(
RTPSWriter* owner,
size_t max_unicast_locators,
size_t max_multicast_locators)
{
if (owner->is_datasharing_compatible())
{
datasharing_notifier_ = new DataSharingNotifier(
owner->getAttributes().data_sharing_configuration().shm_directory());
}
}
Initialization:
bool ReaderLocator::start(
const GUID_t& remote_guid,
const ResourceLimitedVector<Locator_t>& unicast_locators,
const ResourceLimitedVector<Locator_t>& multicast_locators,
bool expects_inline_qos,
bool is_datasharing)
{
if (general_locator_info_.remote_guid == c_Guid_Unknown)
{
assert(c_Guid_Unknown == async_locator_info_.remote_guid);
expects_inline_qos_ = expects_inline_qos;
guid_as_vector_.at(0) = remote_guid;
guid_prefix_as_vector_.at(0) = remote_guid.guidPrefix;
general_locator_info_.remote_guid = remote_guid;
async_locator_info_.remote_guid = remote_guid;
is_local_reader_ = RTPSDomainImpl::should_intraprocess_between(owner_->getGuid(), remote_guid);
is_datasharing &= !is_local_reader_;
local_reader_ = nullptr;
if (!is_local_reader_ && !is_datasharing)
{
general_locator_info_.unicast = unicast_locators;
general_locator_info_.multicast = multicast_locators;
async_locator_info_.unicast = unicast_locators;
async_locator_info_.multicast = multicast_locators;
}
general_locator_info_.reset();
general_locator_info_.enable(true);
async_locator_info_.reset();
async_locator_info_.enable(true);
if (is_datasharing)
{
datasharing_notifier_->enable(remote_guid);
}
return true;
}
return false;
}
1. Initialization of the unicast and multicast locators in the ReaderLocator
2. State reset of the ReaderLocator
classDiagram
ReaderLocator *-- LocatorSelectorEntry
LocatorSelectorEntry *-- Locator_t
class ReaderLocator{
+LocatorSelectorEntry general_locator_info_
+LocatorSelectorEntry async_locator_info_
}
class LocatorSelectorEntry{
+ResourceLimitedVector<Locator_t> unicast
+ResourceLimitedVector<Locator_t> multicast
}
general_locator_info_ is used for synchronous sending
async_locator_info_ is used for asynchronous sending
The two hold the same content.
The StatelessWriter's locator_selector_ gathers the locators from the general_locator_info_ of every matched ReaderLocator and is used for synchronous sending.
The StatefulWriter's locator_selector_async_ gathers the async_locator_info_ of every matched ReaderLocator and is used for asynchronous sending.
The StatefulWriter also has a locator_selector_general_, used for synchronous sending.
bool ReaderLocator::update(
const ResourceLimitedVector<Locator_t>& unicast_locators,
const ResourceLimitedVector<Locator_t>& multicast_locators,
bool expects_inline_qos)
{
bool ret_val = false;
if (expects_inline_qos_ != expects_inline_qos)
{
expects_inline_qos_ = expects_inline_qos;
ret_val = true;
}
// locators changed: update them
if (!(general_locator_info_.unicast == unicast_locators) ||
!(general_locator_info_.multicast == multicast_locators))
{
if (!is_local_reader_ && !is_datasharing_reader())
{
general_locator_info_.unicast = unicast_locators;
general_locator_info_.multicast = multicast_locators;
async_locator_info_.unicast = unicast_locators;
async_locator_info_.multicast = multicast_locators;
}
// clear the selector state
general_locator_info_.reset();
// set enable to true
general_locator_info_.enable(true);
async_locator_info_.reset();
async_locator_info_.enable(true);
ret_val = true;
}
return ret_val;
}
1. Update of the multicast and unicast addresses
2. State reset
Step 4:
void StatelessWriter::update_reader_info(
bool create_sender_resources)
{
bool addGuid = !has_builtin_guid();
is_inline_qos_expected_ = false;
for_matched_readers(matched_local_readers_, matched_datasharing_readers_, matched_remote_readers_,
[this](const ReaderLocator& reader)
{
is_inline_qos_expected_ |= reader.expects_inline_qos();
return false;
}
);
update_cached_info_nts(locator_selector_);
if (addGuid)
{
compute_selected_guids(locator_selector_);
}
if (create_sender_resources)
{
RTPSParticipantImpl* part = mp_RTPSParticipant;
locator_selector_.locator_selector.for_each([part](const Locator_t& loc)
{
part->createSenderResources(loc);
});
}
}
This function mainly does two things:
1. update_cached_info_nts
2. createSenderResources: if a locator does not yet have a SenderResource for sending, create one
void RTPSWriter::update_cached_info_nts(
LocatorSelectorSender& locator_selector)
{
// enable all entries, saving the previous state into last_state_
locator_selector.locator_selector.reset(true);
mp_RTPSParticipant->network_factory().select_locators(locator_selector.locator_selector);
}
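update_cached_info_nts delegates to NetworkFactory::select_locators, which walks every registered transport and lets each one mark the locators it will handle: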
void NetworkFactory::select_locators(
LocatorSelector& selector) const
{
// initialize the selection state
selector.selection_start();
/* - for each transport:
* - transport_starts is called
* - transport handles the selection state of each entry
* - select may be called
*/
for (auto& transport : mRegisteredTransports)
{
transport->select_locators(selector);
}
}
void UDPTransportInterface::select_locators(
LocatorSelector& selector) const
{
fastrtps::ResourceLimitedVector<LocatorSelectorEntry*>& entries = selector.transport_starts();
for (size_t i = 0; i < entries.size(); ++i)
{
LocatorSelectorEntry* entry = entries[i];
if (entry->transport_should_process)
{
bool selected = false;
// First try to find a multicast locator which is at least on another list.
for (size_t j = 0; j < entry->multicast.size() && !selected; ++j)
{
// check whether this transport supports the locator
if (IsLocatorSupported(entry->multicast[j]))
{
// Scan entries from i+1 on, looking for entry->multicast[j]; matching entries get transport_should_process set to false.
// If this returns true, entry->multicast[j] will be selected.
if (check_and_invalidate(entries, i + 1, entry->multicast[j]))
{
entry->state.multicast.push_back(j);
selected = true;
}
else if (entry->unicast.size() == 0)
{
entry->state.multicast.push_back(j);
selected = true;
}
}
}
// If we couldn't find a multicast locator, select all unicast locators
if (!selected)
{
for (size_t j = 0; j < entry->unicast.size(); ++j)
{
if (IsLocatorSupported(entry->unicast[j]) && !selector.is_selected(entry->unicast[j]))
{
entry->state.unicast.push_back(j);
selected = true;
}
}
}
// Select this entry if necessary
if (selected)
{
selector.select(i);
}
}
}
}
This configures the LocatorSelector, i.e. it chooses which IP addresses and ports the message will be sent to.
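To make the selection policy concrete, here is a worked example with hypothetical entry contents (comments only; the addresses are borrowed from the capture in section 3):
// Worked example: two matched readers listen on the same multicast locator.
//
// entries[0]: multicast = {239.255.0.1:7400}, unicast = {192.168.49.1:7411}
// entries[1]: multicast = {239.255.0.1:7400}, unicast = {192.168.49.77:7411}
//
// i = 0: check_and_invalidate(entries, 1, 239.255.0.1:7400) finds the same
//        multicast locator in entries[1], clears its transport_should_process
//        flag and returns true -> entries[0] selects the multicast locator.
// i = 1: entries[1]->transport_should_process is now false -> skipped.
//
// Result: a single datagram to 239.255.0.1:7400 reaches both readers,
// instead of two separate unicast datagrams.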
Step 5:
void StatelessWriter::unsent_changes_reset()
{
std::lock_guard<RecursiveTimedMutex> guard(mp_mutex);
std::for_each(mp_history->changesBegin(), mp_history->changesEnd(), [&](CacheChange_t* change)
{
flow_controller_->add_new_sample(this, change,
std::chrono::steady_clock::now() + std::chrono::hours(24));
});
}
This function hands every change in the StatelessWriter's history over to flow_controller_, which sends them to the remote StatelessReader; see FastDDS Source Code Analysis (11): Sending the First PDP Message (Part 2) for the details of sending the first PDP message.
3. The overall flow of the PDP discovery protocol
The PDP discovery protocol as seen in an actual packet capture
This is a capture I took with tcpdump.
Let's see how the PDP discovery protocol actually works.
sequenceDiagram
participant ParticipantA
participant ParticipantB
ParticipantA ->> ParticipantB: 1. ParticipantA sends a PDP multicast message; ParticipantB is not online yet and does not receive it
ParticipantB ->> ParticipantA: 2. ParticipantB comes online and sends a PDP multicast message (message 71)
ParticipantA ->> ParticipantB: 3. On receiving ParticipantB's PDP multicast message, ParticipantA immediately sends PDP messages (multicast and unicast: message 78)
ParticipantB ->> ParticipantA: 4. On receiving ParticipantA's PDP message, ParticipantB immediately sends PDP messages (multicast and unicast: messages 85 and 87)
ParticipantA ->> ParticipantB: 5. Sends PDP messages periodically (multicast and unicast)
ParticipantB ->> ParticipantA: 6. Sends PDP messages periodically (multicast and unicast)
The PDP messages that the same participant keeps sending above are duplicates: their sequence number is always 1, and when DDS receives a duplicate message it simply drops it without processing.
So ParticipantA processes the first message it receives from ParticipantB and sends PDP messages out in response; subsequent duplicates are not processed.
1. Message 71 is the PDP multicast message sent from IP 192.168.49.77 to 239.255.0.1.
2. The participant at 192.168.49.1 received this multicast message and immediately sent PDP messages (we covered this logic above: after receiving a PDP message, a PDP message is sent out in response).
Look at message 78: it is the PDP message sent from the participant at 192.168.49.1 to 192.168.49.77.
The participant at 192.168.49.1 actually also sent a PDP multicast message to 239.255.0.1.
Our tcpdump only captured messages whose source or destination address is 192.168.49.77, so that multicast message does not appear in the capture.
3. After receiving the PDP message, the participant at 192.168.49.77 immediately sends PDP messages:
message 85 is the PDP message sent from 192.168.49.77 to 192.168.49.1;
message 87 is the PDP multicast message sent from 192.168.49.77 to 239.255.0.1.
4. During its initialization phase, the participant at 192.168.49.77 sends PDP messages periodically:
once every 100 ms, two messages per round (one to 239.255.0.1 and one to 192.168.49.1).
It sends 5 rounds here, 10 PDP messages in total.
Here,
messages 120, 128, 136, 1 are the PDP messages sent from 192.168.49.77 to 192.168.49.1;
messages 122, 130, 138, 147, 159 are the PDP multicast messages sent from 192.168.49.77 to 239.255.0.1.
These messages, together with message 71, are the messages sent during participant initialization.
For this part see In-Vehicle Messaging Middleware FastDDS Source Code Analysis (10): Sending the First PDP Message (Part 1).
Message 151 is the PDP message sent from the participant at 192.168.49.1 to 192.168.49.77.
That message is no longer a periodic message of the initialization phase: after a participant has sent its 5 initial announcements, it enters the steady-state period and sends a PDP message every 3 s (the default value). Message 151 is one of these steady-state periodic messages. The sketch below shows which QoS settings produce this cadence.
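The 100 ms / 5 rounds / 3 s cadence seen in the capture maps to participant QoS settings. A minimal sketch, using the Fast DDS 2.x QoS member names as I know them (treat the exact paths as an assumption on other versions; the values shown are the defaults observed above):
#include <fastdds/dds/domain/DomainParticipant.hpp>
#include <fastdds/dds/domain/DomainParticipantFactory.hpp>

int main()
{
    using namespace eprosima::fastdds::dds;

    DomainParticipantQos pqos = PARTICIPANT_QOS_DEFAULT;

    // Initialization phase: 5 initial announcements, one every 100 ms
    // (messages 71, 120/122, 128/130, ... in the capture).
    pqos.wire_protocol().builtin.discovery_config.initial_announcements.count = 5;
    pqos.wire_protocol().builtin.discovery_config.initial_announcements.period =
            eprosima::fastrtps::Duration_t(0, 100000000); // 100 ms

    // Steady state: one announcement every 3 s (message 151).
    pqos.wire_protocol().builtin.discovery_config.leaseDuration_announcementperiod =
            eprosima::fastrtps::Duration_t(3, 0);

    DomainParticipant* participant =
            DomainParticipantFactory::get_instance()->create_participant(0, pqos);
    (void)participant;
    return 0;
}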