FastDDS in-vehicle messaging middleware, source analysis (1): FastDDS introduction and usage
FastDDS in-vehicle messaging middleware, source analysis (2): Creating the RtpsParticipant (part 1)
FastDDS in-vehicle messaging middleware, source analysis (3): Creating the RtpsParticipant (part 2)
FastDDS in-vehicle messaging middleware, source analysis (4): Creating the RtpsParticipant (part 3)
FastDDS in-vehicle messaging middleware, source analysis (5): BuiltinProtocols (part 1)
FastDDS in-vehicle messaging middleware, source analysis (6): BuiltinProtocols (part 2): EDP
FastDDS in-vehicle messaging middleware, source analysis (7): BuiltinProtocols (part 3): WLP & TypeLookupManager
FastDDS in-vehicle messaging middleware, source analysis (8): TimedEvent
FastDDS in-vehicle messaging middleware, source analysis (10): Sending the first PDP message (part 1)
This article describes how messages to remote endpoints are sent synchronously.
1. Introduction
As its name suggests, FlowController is the controller of the outgoing message flow.
FlowControllerImpl is a class template with two type parameters, PublishMode and SampleScheduling:
PublishMode determines FlowControllerImpl's synchronous/asynchronous sending logic.
SampleScheduling determines the scheduling policy FlowControllerImpl uses during asynchronous sending.
template<typename PublishMode, typename SampleScheduling>
class FlowControllerImpl : public FlowController
{
};
classDiagram
FlowControllerPureSyncPublishMode <|-- FlowControllerSyncPublishMode
FlowControllerAsyncPublishMode <|-- FlowControllerSyncPublishMode
FlowControllerAsyncPublishMode <|-- FlowControllerLimitedAsyncPublishMode
There are four PublishMode types:
FlowControllerPureSyncPublishMode: add_new_sample_impl first tries a synchronous send via deliver_sample_nts; if that fails it calls enqueue_new_sample_impl to fall back to asynchronous sending. Since FlowControllerPureSyncPublishMode has no asynchronous path, its enqueue_new_sample_impl simply returns false.
FlowControllerAsyncPublishMode: add_new_sample_impl calls enqueue_new_sample_impl directly to send asynchronously.
FlowControllerSyncPublishMode inherits from both FlowControllerAsyncPublishMode and FlowControllerPureSyncPublishMode.
Its add_new_sample_impl first tries a synchronous send via deliver_sample_nts; if that fails it calls enqueue_new_sample_impl to send asynchronously.
FlowControllerLimitedAsyncPublishMode inherits from FlowControllerAsyncPublishMode.
Its add_new_sample_impl calls enqueue_new_sample_impl directly to send asynchronously.
The StatelessWriter used in the PDP phase is a FlowControllerPureSyncPublishMode writer; since there is no asynchronous sending, it has no scheduler.
There are four SampleScheduling types:
FlowControllerFifoSchedule: first-in, first-out scheduling.
FlowControllerRoundRobinSchedule: round-robin; each DataWriter sends one sample per turn.
FlowControllerHighPrioritySchedule: samples are sent according to priority.
FlowControllerPriorityWithReservationSchedule: each DataWriter is guaranteed a minimum send quota, and the remaining bandwidth is allocated by priority.
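The difference between the first two policies can be sketched as follows. This is a simplified illustration, not the FastDDS implementation: the Sample struct, the writer names, and the drain functions are made up for the example.

```cpp
#include <cassert>
#include <deque>
#include <map>
#include <string>
#include <vector>

struct Sample {
    std::string writer;  // which DataWriter queued this sample
    int seq;             // sequence number within that writer
};

// FIFO: samples leave the queue in exactly the order they were queued,
// regardless of which writer queued them.
std::vector<Sample> drain_fifo(std::deque<Sample> q) {
    std::vector<Sample> out;
    while (!q.empty()) {
        out.push_back(q.front());
        q.pop_front();
    }
    return out;
}

// Round-robin: each writer gets to send one sample per pass over the writers.
std::vector<Sample> drain_round_robin(std::map<std::string, std::deque<Sample>> queues) {
    std::vector<Sample> out;
    bool sent = true;
    while (sent) {
        sent = false;
        for (auto& kv : queues) {  // one turn per writer per pass
            if (!kv.second.empty()) {
                out.push_back(kv.second.front());
                kv.second.pop_front();
                sent = true;
            }
        }
    }
    return out;
}
```

With writer A holding two samples and writer B one, FIFO yields A1, A2, B1 while round-robin yields A1, B1, A2.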
As covered in the earlier BuiltinProtocols chapters, the flow controllers are initialized during BuiltinProtocols initialization.
2. Synchronous sending in FlowControllerImpl
2.1 Sequence diagram
sequenceDiagram
participant FlowControllerImpl
participant StatelessWriter
participant RTPSMessageGroup
FlowControllerImpl->>FlowControllerImpl:1.add_new_sample_impl
FlowControllerImpl->>StatelessWriter:2.deliver_sample_nts
StatelessWriter->> RTPSMessageGroup:3.add_data
RTPSMessageGroup ->> RTPSMessageGroup:4.check_and_maybe_flush
RTPSMessageGroup ->> RTPSMessageGroup:5.flush_and_reset
RTPSMessageGroup ->> RTPSMessageGroup:6.send
FlowControllerImpl ->> RTPSMessageGroup:7.~RTPSMessageGroup
RTPSMessageGroup ->> RTPSMessageGroup:8.send
FlowControllerImpl->>FlowControllerImpl:9.enqueue_new_sample_impl
1.FlowControllerImpl calls add_new_sample_impl; the StatelessWriter's mode here is FlowControllerPureSyncPublishMode. It:
a. obtains the writer's LocatorSelectorSender, the selector used for sending;
b. creates an RTPSMessageGroup object;
c. calls the StatelessWriter's deliver_sample_nts function.
2.deliver_sample_nts calls RTPSMessageGroup's add_data or add_data_frag function: if the payload fits in one datagram, add_data is used; otherwise the sample must be split into several fragments and sent with add_data_frag.
A single UDP datagram can be at most 64 KB.
Some settings are also applied to the LocatorSelectorSender here.
The LocatorSelectorSender is a selector that decides which locators (IP address and port) a message is sent to.
3.RTPSMessageGroup::add_data or add_data_frag mainly builds a message.
Both add_data and add_data_frag end up calling check_and_maybe_flush.
4.check_and_maybe_flush checks whether the destination has changed.
If the destination has changed, flush_and_reset is called to send the messages addressed to the previous destination.
5.flush_and_reset calls flush, and flush calls the send function to transmit the message.
6.The send function transmits the message.
Steps 4, 5 and 6 only happen when the sender's destination has changed: any previously buffered messages are sent first (if there are none, nothing is sent).
7.When add_new_sample_impl returns, the RTPSMessageGroup it created is destroyed and its destructor runs; this is where send is really called.
8.The send function transmits the message.
2.2 Source code walkthrough:
Calling flow_controller_->add_new_sample leads to add_new_sample_impl.
Step 1:
template<typename PubMode = PublishMode>
typename std::enable_if<std::is_base_of<FlowControllerPureSyncPublishMode, PubMode>::value, bool>::type
add_new_sample_impl(
fastrtps::rtps::RTPSWriter* writer,
fastrtps::rtps::CacheChange_t* change,
const std::chrono::time_point<std::chrono::steady_clock>& max_blocking_time)
{
// This call should be made with writer's mutex locked.
// locator_selector is initialized here; get_general_locator_selector returns the StatelessWriter's locator_selector_ member
fastrtps::rtps::LocatorSelectorSender& locator_selector = writer->get_general_locator_selector();
std::lock_guard<fastrtps::rtps::LocatorSelectorSender> lock(locator_selector);
// This construction is important: the group's destructor performs the final send
fastrtps::rtps::RTPSMessageGroup group(participant_, writer, &locator_selector);
// Synchronous send
if (fastrtps::rtps::DeliveryRetCode::DELIVERED !=
writer->deliver_sample_nts(change, group, locator_selector, max_blocking_time))
{
// On failure, fall back to asynchronous sending
return enqueue_new_sample_impl(writer, change, max_blocking_time);
}
return true;
}
/*!
* This function stores internally the sample to send it asynchronously.
*/
template<typename PubMode = PublishMode>
typename std::enable_if<!std::is_base_of<FlowControllerPureSyncPublishMode, PubMode>::value, bool>::type
add_new_sample_impl(
fastrtps::rtps::RTPSWriter* writer,
fastrtps::rtps::CacheChange_t* change,
const std::chrono::time_point<std::chrono::steady_clock>& max_blocking_time)
{
return enqueue_new_sample_impl(writer, change, max_blocking_time);
}
If PublishMode is FlowControllerPureSyncPublishMode or one of its subclasses,
the first overload is selected: it first sends synchronously, and on failure calls enqueue_new_sample_impl to queue the sample for asynchronous delivery.
If PublishMode is not FlowControllerPureSyncPublishMode or one of its subclasses,
the second overload is selected, which calls enqueue_new_sample_impl directly to queue the sample for asynchronous delivery.
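The overload selection works through std::enable_if on the return type. A minimal, self-contained sketch of the same dispatch pattern (the mode class names mirror FastDDS, but FlowControllerSketch, Path and the stub bodies are made up for illustration):

```cpp
#include <cassert>
#include <type_traits>

enum class Path { SyncFirst, AsyncOnly };

// Empty tag types standing in for the FastDDS publish modes.
struct FlowControllerPureSyncPublishMode {};
struct FlowControllerSyncPublishMode : FlowControllerPureSyncPublishMode {};
struct FlowControllerAsyncPublishMode {};

template <typename PublishMode>
struct FlowControllerSketch {
    // Selected when PublishMode is (or derives from) the pure-sync mode:
    // try synchronous delivery first.
    template <typename PubMode = PublishMode>
    typename std::enable_if<
        std::is_base_of<FlowControllerPureSyncPublishMode, PubMode>::value, Path>::type
    add_new_sample_impl() { return Path::SyncFirst; }

    // Selected otherwise: go straight to the asynchronous queue.
    template <typename PubMode = PublishMode>
    typename std::enable_if<
        !std::is_base_of<FlowControllerPureSyncPublishMode, PubMode>::value, Path>::type
    add_new_sample_impl() { return Path::AsyncOnly; }
};
```

For each instantiation exactly one overload survives substitution, so the call site needs no if/else on the mode.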
template<typename PubMode = PublishMode>
typename std::enable_if<!std::is_same<FlowControllerPureSyncPublishMode, PubMode>::value, bool>::type
enqueue_new_sample_impl(
fastrtps::rtps::RTPSWriter* writer,
fastrtps::rtps::CacheChange_t* change,
const std::chrono::time_point<std::chrono::steady_clock>& /* TODO max_blocking_time*/)
{
assert(!change->writer_info.is_linked.load());
// Sync delivery failed. Try to store for asynchronous delivery.
std::unique_lock<std::mutex> lock(async_mode.changes_interested_mutex);
sched.add_new_sample(writer, change);
async_mode.cv.notify_one();
return true;
}
/*! This function is used when PublishMode = FlowControllerPureSyncPublishMode.
* In this case there is no async mechanism.
*/
template<typename PubMode = PublishMode>
//Pure synchronous mode: nothing is queued, no asynchronous thread sends it
typename std::enable_if<std::is_same<FlowControllerPureSyncPublishMode, PubMode>::value, bool>::type
constexpr enqueue_new_sample_impl(
fastrtps::rtps::RTPSWriter*,
fastrtps::rtps::CacheChange_t*,
const std::chrono::time_point<std::chrono::steady_clock>&) const
{
// Do nothing. Return false.
return false;
}
Looking at the two functions above:
If PublishMode is FlowControllerPureSyncPublishMode, enqueue_new_sample_impl simply returns false.
Otherwise, the sample is handed to the scheduler.
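The queue-and-notify behavior of the asynchronous enqueue_new_sample_impl can be sketched like this (AsyncQueueSketch and its members are illustrative stand-ins, not the FastDDS classes):

```cpp
#include <cassert>
#include <condition_variable>
#include <deque>
#include <mutex>

struct AsyncQueueSketch {
    std::mutex mtx;               // plays the role of changes_interested_mutex
    std::condition_variable cv;   // plays the role of async_mode.cv
    std::deque<int> pending;      // stands in for the scheduler's sample list

    // Producer side: mirrors enqueue_new_sample_impl.
    void enqueue(int sample) {
        std::unique_lock<std::mutex> lock(mtx);
        pending.push_back(sample);
        cv.notify_one();  // wake the asynchronous delivery thread
    }

    // Consumer side: mirrors the delivery thread waiting for work.
    int wait_and_take() {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [this] { return !pending.empty(); });
        int s = pending.front();
        pending.pop_front();
        return s;
    }
};
```

The predicate passed to cv.wait guards against spurious wakeups, which is why notify_one alone is not enough.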
Step 2:
DeliveryRetCode StatelessWriter::deliver_sample_nts(
CacheChange_t* cache_change,
RTPSMessageGroup& group,
LocatorSelectorSender& locator_selector, // Object locked by FlowControllerImpl
const std::chrono::time_point<std::chrono::steady_clock>& /*TODO max_blocking_time*/)
{
uint64_t change_sequence_number = cache_change->sequenceNumber.to64long();
NetworkFactory& network = mp_RTPSParticipant->network_factory();
DeliveryRetCode ret_code = DeliveryRetCode::DELIVERED;
if (current_sequence_number_sent_ != change_sequence_number)
{
current_sequence_number_sent_ = change_sequence_number;
current_fragment_sent_ = 0;
}
// Send the new sample to intra-process readers.
if (0 == current_fragment_sent_)
{
for_matched_readers(matched_local_readers_, [&, cache_change](ReaderLocator& reader)
{
intraprocess_delivery(cache_change, reader);
return false;
});
}
try
{
uint32_t n_fragments = cache_change->getFragmentCount();
//Separate sending: the message is sent to each remote reader individually
//A PDP server sets this to true; normally it is false
//Why does a PDP server need separate sending? Its policy differs per remote reader (servers and clients are treated differently)
if (m_separateSendingEnabled)
{
//Fragmented sending
if (0 < n_fragments)
{
for (FragmentNumber_t frag = current_fragment_sent_ + 1;
DeliveryRetCode::DELIVERED == ret_code && frag <= n_fragments; ++frag)
{
for (std::unique_ptr<ReaderLocator>& it : matched_remote_readers_)
{
if ((nullptr == reader_data_filter_) ||
reader_data_filter_->is_relevant(*cache_change, it->remote_guid()))
{
group.sender(this, &*it);
size_t num_locators = it->locators_size();
if (group.add_data_frag(*cache_change, frag, is_inline_qos_expected_))
{
add_statistics_sent_submessage(cache_change, num_locators);
}
else
{
EPROSIMA_LOG_ERROR(RTPS_WRITER,
"Error sending fragment (" << cache_change->sequenceNumber << ", " << frag <<
")");
ret_code = DeliveryRetCode::NOT_DELIVERED;
}
}
}
if (DeliveryRetCode::DELIVERED == ret_code)
{
current_fragment_sent_ = frag;
}
}
}
//No fragmentation: the payload is small enough
else
{
for (std::unique_ptr<ReaderLocator>& it : matched_remote_readers_)
{
if ((nullptr == reader_data_filter_) ||
reader_data_filter_->is_relevant(*cache_change, it->remote_guid()))
{
group.sender(this, &*it);
size_t num_locators = it->locators_size();
if (group.add_data(*cache_change, is_inline_qos_expected_))
{
add_statistics_sent_submessage(cache_change, num_locators);
}
else
{
EPROSIMA_LOG_ERROR(RTPS_WRITER, "Error sending change " << cache_change->sequenceNumber);
ret_code = DeliveryRetCode::NOT_DELIVERED;
}
}
}
}
}
//Not separate sending
else
{
if (nullptr != reader_data_filter_)
{
//Reset the selector to its initial state
locator_selector.locator_selector.reset(false);
for (std::unique_ptr<ReaderLocator>& it : matched_remote_readers_)
{
//Is this cache_change relevant for this remote reader?
if (reader_data_filter_->is_relevant(*cache_change, it->remote_guid()))
{
//Enable the corresponding LocatorSelectorEntry in locator_selector
locator_selector.locator_selector.enable(it->remote_guid());
}
}
}
else
{
locator_selector.locator_selector.reset(true);
}
//state_has_changed checks whether any entries were newly enabled
if (locator_selector.locator_selector.state_has_changed())
{
//Configure the LocatorSelectorEntry objects of locator_selector
network.select_locators(locator_selector.locator_selector);
if (!has_builtin_guid())
{
compute_selected_guids(locator_selector);
}
}
size_t num_locators = locator_selector.locator_selector.selected_size() + fixed_locators_.size();
if (0 < num_locators)
{
//Large message: fragmented case
if (0 < n_fragments)
{
for (FragmentNumber_t frag = current_fragment_sent_ + 1;
DeliveryRetCode::DELIVERED == ret_code && frag <= n_fragments; ++frag)
{
// Send the large message as DATA_FRAG submessages
if (group.add_data_frag(*cache_change, frag, is_inline_qos_expected_))
{
current_fragment_sent_ = frag;
add_statistics_sent_submessage(cache_change, num_locators);
}
else
{
EPROSIMA_LOG_ERROR(RTPS_WRITER,
"Error sending fragment (" << cache_change->sequenceNumber << ", " << frag << ")");
ret_code = DeliveryRetCode::NOT_DELIVERED;
}
}
}
else
{
//Small message: fits in a single datagram, no fragmentation
if (group.add_data(*cache_change, is_inline_qos_expected_))
{
add_statistics_sent_submessage(cache_change, num_locators);
}
else
{
EPROSIMA_LOG_ERROR(RTPS_WRITER, "Error sending change " << cache_change->sequenceNumber);
ret_code = DeliveryRetCode::NOT_DELIVERED;
}
}
}
}
on_sample_datas(cache_change->write_params.sample_identity(),
cache_change->writer_info.num_sent_submessages);
on_data_sent();
}
------
//Initialize the sender here
group.sender(this, &locator_selector);
if (DeliveryRetCode::DELIVERED == ret_code &&
change_sequence_number > last_sequence_number_sent_)
{
// This update must be done before calling the callback.
last_sequence_number_sent_ = change_sequence_number;
unsent_changes_cond_.notify_all();
if (nullptr != mp_listener)
{
mp_listener->onWriterChangeReceivedByAll(this, cache_change);
}
}
return ret_code;
}
This function mainly does the following:
- sets current_sequence_number_sent_
- delivers the sample to local readers via intra-process communication
- performs a different send path depending on the configuration:
If m_separateSendingEnabled is true, the message is sent to each remote reader individually; if false, it is sent to all of them at once.
If n_fragments > 0, the sample is sent in fragments via RTPSMessageGroup::add_data_frag;
otherwise it is sent in a single datagram via RTPSMessageGroup::add_data.
The function also calls RTPSMessageGroup's sender function, which sets the sender of the next RTPS submessage.
Each StatelessWriter has a member LocatorSelectorSender locator_selector_;
locator_selector_ has a member fastrtps::rtps::LocatorSelector locator_selector;
locator_selector has a member ResourceLimitedVector<LocatorSelectorEntry*> entries_;
a LocatorSelectorEntry stores all the unicast and multicast addresses of one remote endpoint.
LocatorSelectorSender is a selector that decides which IP addresses and ports a message is sent to;
those parameters are also configured here.
If the payload is small it is sent whole; if it is large it is sent in fragments. Let's walk through the unfragmented path; the fragmented path is similar.
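The whole-vs-fragmented decision described above can be sketched as a small planning function. This is illustrative only: plan_fragments and the size constants are made up, but the 1-based fragment numbering mirrors the loop in deliver_sample_nts.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Returns the list of (fragment number, fragment size) pairs that would be
// sent for a payload. Fragment number 0 means "sent whole with add_data".
std::vector<std::pair<uint32_t, uint32_t>> plan_fragments(
        uint32_t payload_size, uint32_t max_fragment_size) {
    std::vector<std::pair<uint32_t, uint32_t>> frags;
    if (payload_size <= max_fragment_size) {
        frags.emplace_back(0, payload_size);  // fits in one DATA submessage
        return frags;
    }
    // Otherwise cut into ceil(payload / max_fragment_size) DATA_FRAG pieces.
    uint32_t n = (payload_size + max_fragment_size - 1) / max_fragment_size;
    for (uint32_t frag = 1; frag <= n; ++frag) {  // fragments are 1-based
        uint32_t already_sent = (frag - 1) * max_fragment_size;
        frags.emplace_back(frag, std::min(max_fragment_size, payload_size - already_sent));
    }
    return frags;
}
```
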
Step 3:
bool RTPSMessageGroup::add_data(
const CacheChange_t& change,
bool expectsInlineQos)
{
------
// Check preconditions. If fail flush and reset.
check_and_maybe_flush();
// Add an INFO_TS (timestamp) submessage
add_info_ts_in_buffer(change.sourceTimestamp);
------
// TODO (Ricardo). Check to create special wrapper.
bool is_big_submessage;
if (!RTPSMessageCreator::addSubmessageData(submessage_msg_, &change_to_add, endpoint_->getAttributes().topicKind,
readerId, expectsInlineQos, inline_qos, &is_big_submessage))
{
EPROSIMA_LOG_ERROR(RTPS_WRITER, "Cannot add DATA submsg to the CDRMessage. Buffer too small");
change_to_add.serializedPayload.data = nullptr;
return false;
}
change_to_add.serializedPayload.data = nullptr;
------
return insert_submessage(is_big_submessage);
}
This function mainly does the following:
1.calls check_and_maybe_flush
2.configures the change's parameters
3.converts the change into a message
RTPSMessageGroup::add_data
calls check_and_maybe_flush: if the destination of the group's sender has changed, the previously buffered messages must be sent first (if there are none, nothing is sent).
Step 4:
void check_and_maybe_flush()
{
check_and_maybe_flush(sender_->destination_guid_prefix());
}
Step 5:
void RTPSMessageGroup::check_and_maybe_flush(
const GuidPrefix_t& destination_guid_prefix)
{
assert(nullptr != sender_);
CDRMessage::initCDRMsg(submessage_msg_);
//If the sender's destination differs from the previous one, the buffered message must be flushed (i.e. sent)
//If the message buffer is empty, nothing is sent
if (sender_->destinations_have_changed())
{
flush_and_reset();
}
add_info_dst_in_buffer(submessage_msg_, destination_guid_prefix);
}
If the sender's destination differs from the previous one, the previously buffered message is flushed (i.e. sent); if the buffer is empty, nothing is sent.
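This flush-on-destination-change batching can be sketched as follows (Batcher and the string destinations are made-up stand-ins for RTPSMessageGroup's buffer and GUID prefixes):

```cpp
#include <cassert>
#include <string>
#include <vector>

struct Batcher {
    std::string current_dst;                     // plays the role of current_dst_
    std::vector<std::string> buffer;             // pending submessages
    std::vector<std::vector<std::string>> sent;  // one entry per flushed datagram

    // Mirrors add_data + check_and_maybe_flush: a destination change
    // forces the pending buffer out before the new submessage is added.
    void add_data(const std::string& dst, const std::string& submsg) {
        if (!current_dst.empty() && dst != current_dst) {
            flush();  // destination changed: send what was accumulated
        }
        current_dst = dst;
        buffer.push_back(submsg);
    }

    void flush() {
        if (!buffer.empty()) {  // an empty message is never sent
            sent.push_back(buffer);
            buffer.clear();
        }
    }
};
```

Batching consecutive submessages for the same destination into one datagram is the point of the whole RTPSMessageGroup mechanism.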
Step 6:
void RTPSMessageGroup::flush_and_reset()
{
// Flush
flush();
current_dst_ = c_GuidPrefix_Unknown;
}
void RTPSMessageGroup::flush()
{
send();
reset_to_header();
}
This function does two things:
1.calls send to transmit the message
2.calls reset_to_header to reinitialize the header
Steps 4, 5 and 6 are not the normal send path.
Let's return to add_new_sample_impl:
fastrtps::rtps::RTPSMessageGroup group(participant_, writer, &locator_selector);
// Synchronous send
if (fastrtps::rtps::DeliveryRetCode::DELIVERED !=
writer->deliver_sample_nts(change, group, locator_selector, max_blocking_time))
{
// On failure, fall back to asynchronous sending
return enqueue_new_sample_impl(writer, change, max_blocking_time);
}
The real work actually happens in RTPSMessageGroup's destructor.
This is a rather surprising piece of the code: add_new_sample_impl creates a local RTPSMessageGroup object named group; when add_new_sample_impl returns, group goes out of scope and RTPSMessageGroup::~RTPSMessageGroup is invoked.
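This is the classic RAII pattern: buffer submessages during the object's lifetime, send in the destructor. A minimal sketch (Wire, Group and deliver_one_sample are illustrative stand-ins, not the FastDDS types):

```cpp
#include <cassert>

struct Wire {
    int datagrams_sent = 0;
};

struct Group {
    Wire& wire;
    int pending = 0;

    explicit Group(Wire& w) : wire(w) {}

    void add_data() { ++pending; }  // buffer a submessage, no I/O yet

    ~Group() {  // leaving scope is what actually sends
        if (pending > 0) {          // an empty group sends nothing
            ++wire.datagrams_sent;
            pending = 0;
        }
    }
};

// Mirrors add_new_sample_impl: the group is a local variable, so the send
// happens when the function returns.
void deliver_one_sample(Wire& wire) {
    Group group(wire);
    group.add_data();
}   // <- ~Group() runs here and performs the send
```
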
Step 7:
RTPSMessageGroup::~RTPSMessageGroup() noexcept(false)
{
try
{
send();
}
catch (...)
{
if (!internal_buffer_)
{
participant_->return_send_buffer(std::move(send_buffer_));
}
throw;
}
if (!internal_buffer_)
{
participant_->return_send_buffer(std::move(send_buffer_));
}
}
Most of the StatelessWriter's send operations are performed here; in other words, sending from the destructor is the main path.
3. RTPSMessageGroup::send in detail
3.1 Sequence diagram
The detailed send flow
sequenceDiagram
participant RTPSMessageGroup
participant LocatorSelectorSender
participant StatelessWriter
participant RTPSParticipantImpl
participant SenderResource
participant UDPTransportInterface
RTPSMessageGroup ->> RTPSMessageGroup:1.send
RTPSMessageGroup ->> LocatorSelectorSender:2.send
LocatorSelectorSender ->> StatelessWriter:3.send_nts
StatelessWriter ->> RTPSParticipantImpl:4.sendSync
RTPSParticipantImpl ->> SenderResource:5.send
SenderResource ->> UDPTransportInterface:6.send
1.RTPSMessageGroup's send function mainly calls LocatorSelectorSender::send.
2.LocatorSelectorSender's send function mainly calls StatelessWriter's send_nts function.
3.StatelessWriter::send_nts first calls RTPSWriter::send_nts, which sends to the addresses chosen by the LocatorSelectorSender selector.
If RTPSWriter::send_nts fails it returns false; on success, the participant's synchronous send function sendSync is called,
with fixed_locators_ as the destination addresses.
4.RTPSParticipant's sendSync sends the message through the SenderResources to the locators chosen by the selector.
5.UDPSenderResource actually sends through the transport's send function, i.e. UDPTransportInterface::send.
6.UDPTransportInterface::send finally sends the message through the socket API.
Step 1:
void RTPSMessageGroup::send()
{
if (endpoint_ && sender_)
{
CDRMessage_t* msgToSend = full_msg_;
//Note: if full_msg_ contains no data, nothing is sent.
if (full_msg_->length > RTPSMESSAGE_HEADER_SIZE)
{
std::lock_guard<RTPSMessageSenderInterface> lock(*sender_);
------
eprosima::fastdds::statistics::rtps::add_statistics_submessage(msgToSend);
if (!sender_->send(msgToSend,
max_blocking_time_is_set_ ? max_blocking_time_point_ : (std::chrono::steady_clock::now() +
std::chrono::hours(24))))
{
throw timeout();
}
current_sent_bytes_ += msgToSend->length;
}
}
}
Note that if full_msg_ contains no data, nothing is sent.
If there is data, sender_->send is called; sender_ is the LocatorSelectorSender object.
Step 2:
bool LocatorSelectorSender::send(
CDRMessage_t* message,
std::chrono::steady_clock::time_point max_blocking_time_point) const
{
return writer_.send_nts(message, *this, max_blocking_time_point);
}
writer_ can be a StatelessWriter or a StatefulWriter; here we look at the StatelessWriter case, and the StatefulWriter case is similar.
Step 3:
bool StatelessWriter::send_nts(
CDRMessage_t* message,
const LocatorSelectorSender& locator_selector,
std::chrono::steady_clock::time_point& max_blocking_time_point) const
{
if (!RTPSWriter::send_nts(message, locator_selector, max_blocking_time_point))
{
return false;
}
return fixed_locators_.empty() ||
mp_RTPSParticipant->sendSync(message, m_guid,
Locators(fixed_locators_.begin()), Locators(fixed_locators_.end()),
max_blocking_time_point);
}
StatelessWriter::send_nts first calls RTPSWriter::send_nts, which sends to the addresses chosen by the LocatorSelectorSender selector.
If RTPSWriter::send_nts fails it returns false; on success, the participant's synchronous send function sendSync is called,
with fixed_locators_ as the destination addresses.
fixed_locators_ holds the locators from initialPeersList.
initialPeersList is a configurable parameter: if it is set, the PDP phase sends only to the initialPeersList addresses instead of the multicast address.
If initialPeersList is not configured, the multicast address is placed into it, so messages are sent to the multicast address here.
The LocatorSelectorSender was already configured in StatelessWriter::deliver_sample_nts.
bool RTPSWriter::send_nts(
CDRMessage_t* message,
const LocatorSelectorSender& locator_selector,
std::chrono::steady_clock::time_point& max_blocking_time_point) const
{
RTPSParticipantImpl* participant = getRTPSParticipant();
return locator_selector.locator_selector.selected_size() == 0 ||
participant->sendSync(message, m_guid, locator_selector.locator_selector.begin(),
locator_selector.locator_selector.end(), max_blocking_time_point);
}
RTPSWriter::send_nts calls the participant's synchronous send function with the addresses chosen by the LocatorSelectorSender;
as described earlier, these are configured in deliver_sample_nts.
Step 4:
bool sendSync(
CDRMessage_t* msg,
const GUID_t& sender_guid,
const LocatorIteratorT& destination_locators_begin,
const LocatorIteratorT& destination_locators_end,
std::chrono::steady_clock::time_point& max_blocking_time_point)
{
bool ret_code = false;
------
std::unique_lock<std::timed_mutex> lock(m_send_resources_mutex_);
{
ret_code = true;
for (auto& send_resource : send_resource_list_)
{
LocatorIteratorT locators_begin = destination_locators_begin;
LocatorIteratorT locators_end = destination_locators_end;
send_resource->send(msg->buffer, msg->length, &locators_begin, &locators_end,
max_blocking_time_point);
}
lock.unlock();
// notify statistics module
on_rtps_send(
sender_guid,
destination_locators_begin,
destination_locators_end,
msg->length);
// checkout if sender is a discovery endpoint
on_discovery_packet(
sender_guid,
destination_locators_begin,
destination_locators_end);
}
return ret_code;
}
RTPSParticipant's sendSync sends the message through the SenderResources to the locators chosen by the selector.
The data sent is the CDRMessage_t buffer, which was converted from the change earlier.
Step 5:
bool send(
const octet* data,
uint32_t dataLength,
LocatorsIterator* destination_locators_begin,
LocatorsIterator* destination_locators_end,
const std::chrono::steady_clock::time_point& max_blocking_time_point)
{
bool returned_value = false;
if (send_lambda_)
{
returned_value = send_lambda_(data, dataLength, destination_locators_begin, destination_locators_end,
max_blocking_time_point);
}
return returned_value;
}
A SenderResource sends through a send_lambda_. This lambda is initialized when the UDPSenderResource is constructed; let's look at that constructor.
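The way the lambda binds the concrete transport into a generic std::function can be sketched as follows (FakeTransport and SenderResourceSketch are made-up stand-ins for UDPTransportInterface and SenderResource):

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <vector>

struct FakeTransport {
    std::vector<uint32_t> sizes_sent;
    bool send(const uint8_t* /*data*/, uint32_t size) {
        sizes_sent.push_back(size);  // pretend the datagram went out
        return true;
    }
};

struct SenderResourceSketch {
    // The generic resource only knows this signature, not the transport type.
    std::function<bool(const uint8_t*, uint32_t)> send_lambda_;

    // Mirrors UDPSenderResource's constructor: the concrete transport is
    // captured by reference when the resource is built.
    explicit SenderResourceSketch(FakeTransport& transport) {
        send_lambda_ = [&transport](const uint8_t* data, uint32_t size) -> bool {
            return transport.send(data, size);
        };
    }

    bool send(const uint8_t* data, uint32_t size) {
        return send_lambda_ ? send_lambda_(data, size) : false;
    }
};
```

This keeps the SenderResource base class transport-agnostic: UDP, TCP or shared-memory resources each bind their own lambda.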
UDPSenderResource(
UDPTransportInterface& transport,
eProsimaUDPSocket& socket,
bool only_multicast_purpose = false,
bool whitelisted = false)
: SenderResource(transport.kind())
, socket_(moveSocket(socket))
, only_multicast_purpose_(only_multicast_purpose)
, whitelisted_(whitelisted)
, transport_(transport)
{
// Implementation functions are bound to the right transport parameters
clean_up = [this, &transport]()
{
transport.CloseOutputChannel(socket_);
};
send_lambda_ = [this, &transport](
const fastrtps::rtps::octet* data,
uint32_t dataSize,
fastrtps::rtps::LocatorsIterator* destination_locators_begin,
fastrtps::rtps::LocatorsIterator* destination_locators_end,
const std::chrono::steady_clock::time_point& max_blocking_time_point) -> bool
{
return transport.send(data, dataSize, socket_, destination_locators_begin,
destination_locators_end, only_multicast_purpose_, whitelisted_,
max_blocking_time_point);
};
}
As we can see, UDPSenderResource actually performs the send through the transport's send function.
Step 6:
bool UDPTransportInterface::send(
const octet* send_buffer,
uint32_t send_buffer_size,
eProsimaUDPSocket& socket,
fastrtps::rtps::LocatorsIterator* destination_locators_begin,
fastrtps::rtps::LocatorsIterator* destination_locators_end,
bool only_multicast_purpose,
bool whitelisted,
const std::chrono::steady_clock::time_point& max_blocking_time_point)
{
fastrtps::rtps::LocatorsIterator& it = *destination_locators_begin;
bool ret = true;
auto time_out = std::chrono::duration_cast<std::chrono::microseconds>(
max_blocking_time_point - std::chrono::steady_clock::now());
while (it != *destination_locators_end)
{
if (IsLocatorSupported(*it))
{
ret &= send(send_buffer,
send_buffer_size,
socket,
*it,
only_multicast_purpose,
whitelisted,
time_out);
}
++it;
}
return ret;
}
This in turn calls another UDPTransportInterface::send overload with a different parameter list:
bool UDPTransportInterface::send(
const octet* send_buffer,
uint32_t send_buffer_size,
eProsimaUDPSocket& socket,
const Locator& remote_locator,
bool only_multicast_purpose,
bool whitelisted,
const std::chrono::microseconds& timeout)
{
-------
if (is_multicast_remote_address == only_multicast_purpose || whitelisted)
{
auto destinationEndpoint = generate_endpoint(remote_locator, IPLocator::getPhysicalPort(remote_locator));
size_t bytesSent = 0;
try
{
-------
asio::error_code ec;
statistics_info_.set_statistics_message_data(remote_locator, send_buffer, send_buffer_size);
//Send the message through the socket
bytesSent = getSocketPtr(socket)->send_to(asio::buffer(send_buffer,
send_buffer_size), destinationEndpoint, 0, ec);
-------
}
catch (const std::exception& error)
{
EPROSIMA_LOG_WARNING(RTPS_MSG_OUT, error.what());
return false;
}
-------
}
return success;
}
This is the actual send: the socket API is called to transmit the message.
getSocketPtr(socket)->send_to(asio::buffer(send_buffer, send_buffer_size), destinationEndpoint, 0, ec);