FastDDS Source Code Analysis (18): Sending Heartbeats in the EDP Phase


In-vehicle messaging middleware FastDDS Source Code Analysis (1): Introduction to FastDDS and How to Use It

In-vehicle messaging middleware FastDDS Source Code Analysis (2): Creating the RtpsParticipant (Part 1)

In-vehicle messaging middleware FastDDS Source Code Analysis (3): Creating the RtpsParticipant (Part 2)

In-vehicle messaging middleware FastDDS Source Code Analysis (4): Creating the RtpsParticipant (Part 3)

In-vehicle messaging middleware FastDDS Source Code Analysis (5): BuiltinProtocols (Part 1)

In-vehicle messaging middleware FastDDS Source Code Analysis (6): BuiltinProtocols (Part 2) EDP

In-vehicle messaging middleware FastDDS Source Code Analysis (7): BuiltinProtocols (Part 3) WLP & TypeLookupManager

In-vehicle messaging middleware FastDDS Source Code Analysis (8): TimedEvent

In-vehicle messaging middleware FastDDS Source Code Analysis (9): Message

In-vehicle messaging middleware FastDDS Source Code Analysis (10): Sending the First PDP Message (Part 1)

FastDDS Source Code Analysis (11): Sending the First PDP Message (Part 2)

FastDDS Source Code Analysis (12): Sending the First PDP Message (Part 3) --- Asynchronous Sending

FastDDS Source Code Analysis (13): Sending the First PDP Message --- Cross-Process Sending

FastDDS Source Code Analysis (14): Receiving PDP Messages (Part 1)

FastDDS Source Code Analysis (15): Receiving PDP Messages (Part 2)

FastDDS Source Code Analysis (16): Processing PDP Messages - PDP Matching

FastDDS Source Code Analysis (17): Processing PDP Messages - EDP Matching

In the previous article we covered the EDP matching that happens after a PDP message is received, and finished the part where a remote EDP writer is matched. In this article we look at how a remote EDP reader is matched to the local StatefulWriter. Once the match is made, the writer sends a heartbeat.

1.1 Sequence Diagram

sequenceDiagram
		participant EDPSimple
		participant StatefulWriter
		participant ReaderProxy
		participant RTPSMessageGroup
		
		EDPSimple ->> StatefulWriter:1.matched_reader_add
		StatefulWriter ->> ReaderProxy:2.new
		StatefulWriter ->> ReaderProxy:3.start
		StatefulWriter ->> StatefulWriter:4.send_heartbeat_nts_ 
		StatefulWriter ->> RTPSMessageGroup:5.flush_and_reset
		RTPSMessageGroup ->> RTPSMessageGroup:6.flush
		RTPSMessageGroup ->> RTPSMessageGroup:7.send

1. StatefulWriter::matched_reader_add does four main things:

a. Creates (news) a ReaderProxy, which stores the information of the remote reader (see 2).

b. Configures the related parameters and settings.

c. Sends a heartbeat (see 4 and 5); sending the heartbeat involves two function calls.

d. Starts a TimedEvent that sends heartbeats periodically; the default period is 3 s.

The difference between c and d is that c sends a heartbeat immediately after the PDP message is received, while d sends heartbeats periodically.

2. The ReaderProxy constructor mainly initializes two TimedEvents:

nack_supression_event_: after a message has been sent, if no acknowledgment arrives, this TimedEvent marks the message as unacknowledged.

initial_heartbeat_event_: the initial heartbeat; it is only sent between endpoints within the same process.

3. ReaderProxy::start mainly sets some parameters and starts initial_heartbeat_event_.

4. send_heartbeat_nts_ calls RTPSMessageGroup::add_heartbeat, which appends a heartbeat submessage to the RTPSMessageGroup's full message.

5. flush_and_reset sends the RTPSMessageGroup's full message out.

6. The flush function is called.

7. The send function is called.

Steps 5 and 6 (flush_and_reset and flush) are actually the sending path used in non-typical situations.

In the main flow, the send function is invoked when the RTPSMessageGroup destructor executes, and that is what actually puts the message on the wire (see the sketch after this list).

The detailed sending flow was covered earlier when we discussed sending PDP messages; see FastDDS Source Code Analysis (11): Sending the First PDP Message (Part 2).
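To make the two sending paths concrete, here is a minimal conceptual sketch of how matched_reader_add drives RTPSMessageGroup. It reuses the names from the source shown below and is not verbatim Fast DDS code:

// Conceptual sketch: RTPSMessageGroup batches submessages into one full RTPS
// message and sends whatever is still pending when it is destroyed (RAII).
{
    RTPSMessageGroup group(mp_RTPSParticipant, this, rp->message_sender());

    // Step 4: append a HEARTBEAT submessage to the group's full message.
    send_heartbeat_nts_(1u, group, disable_positive_acks_);

    // Steps 5-6: explicitly flush the buffered submessages right away.
    group.flush_and_reset();
}   // Main flow: had flush_and_reset() not been called, the RTPSMessageGroup
    // destructor would call send() here and put the message on the wire.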

1.2 Source Code

Step 1

bool StatefulWriter::matched_reader_add(
        const ReaderProxyData& rdata)
{
    using fastdds::rtps::ExternalLocatorsProcessor::filter_remote_locators;

    if (rdata.guid() == c_Guid_Unknown)
    {
        EPROSIMA_LOG_ERROR(RTPS_WRITER, "Reliable Writer need GUID_t of matched readers");
        return false;
    }

    std::unique_lock<RecursiveTimedMutex> guard(mp_mutex);
    std::unique_lock<LocatorSelectorSender> guard_locator_selector_general(locator_selector_general_);
    std::unique_lock<LocatorSelectorSender> guard_locator_selector_async(locator_selector_async_);

    // Check if it is already matched.
    // First look for this reader among the readers already matched to this StatefulWriter; if found, just update its info.
    // matched_local_readers_: local readers, i.e. readers in the same process
    // matched_datasharing_readers_: data-sharing readers (shared memory), i.e. readers in other processes
    // matched_remote_readers_: remote readers
    if (for_matched_readers(matched_local_readers_, matched_datasharing_readers_, matched_remote_readers_,
            [this, &rdata](ReaderProxy* reader)
            {
                // Update the information of the existing ReaderProxy
                if (reader->guid() == rdata.guid())
                {
                    EPROSIMA_LOG_INFO(RTPS_WRITER, "Attempting to add existing reader, updating information.");
                    if (reader->update(rdata))
                    {
                        filter_remote_locators(*reader->general_locator_selector_entry(),
                        m_att.external_unicast_locators, m_att.ignore_non_matching_locators);
                        filter_remote_locators(*reader->async_locator_selector_entry(),
                        m_att.external_unicast_locators, m_att.ignore_non_matching_locators);
                        update_reader_info(locator_selector_general_, true);
                        update_reader_info(locator_selector_async_, true);
                    }
                    return true;
                }
                return false;
            }))
    {
        if (nullptr != mp_listener)
        {
            // call the listener without locks taken
            guard_locator_selector_async.unlock();
            guard_locator_selector_general.unlock();
            guard.unlock();

            mp_listener->on_reader_discovery(this, ReaderDiscoveryInfo::CHANGED_QOS_READER, rdata.guid(), &rdata);
        }
        return false;
    }

    // Get a reader proxy from the inactive pool (or create a new one if necessary and allowed)
    // Create a ReaderProxy: take one from the pool if available, otherwise allocate a new one
    ReaderProxy* rp = nullptr;
    if (matched_readers_pool_.empty())
    {
        size_t max_readers = matched_readers_pool_.max_size();
        if (getMatchedReadersSize() + matched_readers_pool_.size() < max_readers)
        {
            const RTPSParticipantAttributes& part_att = mp_RTPSParticipant->getRTPSParticipantAttributes();
            rp = new ReaderProxy(m_times, part_att.allocation.locators, this);
        }
        else
        {
            EPROSIMA_LOG_WARNING(RTPS_WRITER, "Maximum number of reader proxies (" << max_readers <<
                    ") reached for writer " << m_guid);
            return false;
        }
    }
    else
    {
        rp = matched_readers_pool_.back();
        matched_readers_pool_.pop_back();
    }

    // Add info of new datareader.
    // Fill the ReaderProxy object with the reader's information
    rp->start(rdata, is_datasharing_compatible_with(rdata));
    filter_remote_locators(*rp->general_locator_selector_entry(),
            m_att.external_unicast_locators, m_att.ignore_non_matching_locators);
    filter_remote_locators(*rp->async_locator_selector_entry(),
            m_att.external_unicast_locators, m_att.ignore_non_matching_locators);
    // Add the remote reader's locators for synchronous sending
    locator_selector_general_.locator_selector.add_entry(rp->general_locator_selector_entry());
    // Add the remote reader's locators for asynchronous sending
    locator_selector_async_.locator_selector.add_entry(rp->async_locator_selector_entry());

    // Add rp (the ReaderProxy) to the corresponding list
    if (rp->is_local_reader())
    {
        matched_local_readers_.push_back(rp);
        EPROSIMA_LOG_INFO(RTPS_WRITER, "Adding reader " << rdata.guid() << " to " << this->m_guid.entityId
                                                        << " as local reader");
    }
    else
    {
        if (rp->is_datasharing_reader())
        {
            matched_datasharing_readers_.push_back(rp);
            EPROSIMA_LOG_INFO(RTPS_WRITER, "Adding reader " << rdata.guid() << " to " << this->m_guid.entityId
                                                            << " as data sharing");
        }
        else
        {
            matched_remote_readers_.push_back(rp);
            EPROSIMA_LOG_INFO(RTPS_WRITER, "Adding reader " << rdata.guid() << " to " << this->m_guid.entityId
                                                            << " as remote reader");
        }
    }
		
    // Update the reader information
    update_reader_info(locator_selector_general_, true);
    update_reader_info(locator_selector_async_, true);

    // If it is a data-sharing (cross-process) reader, return here
    if (rp->is_datasharing_reader())
    {
        if (nullptr != mp_listener)
        {
            // call the listener without locks taken
            guard_locator_selector_async.unlock();
            guard_locator_selector_general.unlock();
            guard.unlock();

            mp_listener->on_reader_discovery(this, ReaderDiscoveryInfo::DISCOVERED_READER, rdata.guid(), &rdata);
        }
        return true;
    }

    bool is_reliable = rp->is_reliable();
    if (is_reliable)
    {
        // Range of sequence numbers of the changes in the local history
        SequenceNumber_t min_seq = get_seq_num_min();
        SequenceNumber_t last_seq = get_seq_num_max();
        RTPSMessageGroup group(mp_RTPSParticipant, this, rp->message_sender());

        // History not empty
        if (min_seq != SequenceNumber_t::unknown())
        {
            (void)last_seq;
            assert(last_seq != SequenceNumber_t::unknown());
            assert(min_seq <= last_seq);

            try
            {
                // Late-joiner
                // Handle the changes already in the history: if the policy says old messages must also be delivered to the new reader, queue them for sending
                if (TRANSIENT_LOCAL <= rp->durability_kind() &&
                        TRANSIENT_LOCAL <= m_att.durabilityKind)
                {
                    for (History::iterator cit = mp_history->changesBegin(); cit != mp_history->changesEnd(); ++cit)
                    {
                        // Holes are managed when deliver_sample(), sending GAP messages.
                        if (rp->rtps_is_relevant(*cit))
                        {
                            ChangeForReader_t changeForReader(*cit);

                            // If it is local, maintain in UNSENT status and add to flow controller.
                            if (rp->is_local_reader())
                            {
                                flow_controller_->add_old_sample(this, *cit);
                            }
                            // In other case, set as UNACKNOWLEDGED and expects the reader request them.
                            else
                            {
                                changeForReader.setStatus(UNACKNOWLEDGED);
                            }

                            rp->add_change(changeForReader, true, false);
                        }
                    }
                }
                else
                {
                    if (rp->is_local_reader())
                    {
                        intraprocess_gap(rp, min_seq, mp_history->next_sequence_number());
                    }
                    else
                    {
                        // Send a GAP of the whole history.
                        // Send a GAP message telling the reader: I have these changes, but they will not be sent to you
                        group.add_gap(min_seq, SequenceNumberSet_t(mp_history->next_sequence_number()), rp->guid());
                    }
                }

                // Always activate heartbeat period. We need a confirmation of the reader.
                // The state has to be updated.
                // Send heartbeats periodically
                periodic_hb_event_->restart_timer(std::chrono::steady_clock::now() + std::chrono::hours(24));
            }
            catch (const RTPSMessageGroup::timeout&)
            {
                EPROSIMA_LOG_ERROR(RTPS_WRITER, "Max blocking time reached");
            }
        }

        if (rp->is_local_reader())
        {
            // Send the intra-process heartbeat
            intraprocess_heartbeat(rp);
        }
        else
        {
            // Send the heartbeat to the remote reader
            send_heartbeat_nts_(1u, group, disable_positive_acks_);
            // The actual send happens here
            group.flush_and_reset();
        }
    }
    else
    {
        // Acknowledged all for best-effort reader.
        // For a best-effort reader, mark all history changes as acknowledged by the peer
        rp->acked_changes_set(mp_history->next_sequence_number());
    }
------
   
    if (nullptr != mp_listener)
    {
        // call the listener without locks taken
        guard_locator_selector_async.unlock();
        guard_locator_selector_general.unlock();
        guard.unlock();

        mp_listener->on_reader_discovery(this, ReaderDiscoveryInfo::DISCOVERED_READER, rdata.guid(), &rdata);
    }
    return true;
}

matched_reader_add mainly does the following:

1. Check whether the writer already has information about this reader; if so, update it.

Code lines 21-52.

2. If the reader is not known yet, create a ReaderProxy and save it into the corresponding vector.

Code lines 56-109.

3. Handle the history changes according to the kind of reader, and start the periodic heartbeat here.

If the reader is reliable and both the writer's and the reader's durability policy is TRANSIENT_LOCAL or TRANSIENT, all changes in the history have to be re-sent (added to flow_controller_) to the newly matched reader.

If the policy is not TRANSIENT_LOCAL or TRANSIENT, a GAP message has to be sent to tell the reader: these changes exist, but the earlier ones will not be sent to this newly matched reader.

If the reader is reliable, the periodic heartbeat is started (code line 193) and a heartbeat is sent right away (code lines 201-211).

There is a key parameter here:

typedef enum DurabilityKind_t
{
    VOLATILE,        // After the writer has sent n messages, a reader that matches later only receives the messages written after those n.
    TRANSIENT_LOCAL, // A newly matched reader receives all the data still in the writer's history; data no longer in the history is not delivered.
    TRANSIENT,       // Data is not lost when the application exits; matched readers receive all the data.
    PERSISTENT       // Not implemented yet.
} DurabilityKind_t;

Four QoS values correspond to them:

typedef enum DurabilityQosPolicyKind : fastrtps::rtps::octet
{
    VOLATILE_DURABILITY_QOS,        // The writer only delivers the messages written after the reader matched.
    TRANSIENT_LOCAL_DURABILITY_QOS, // The writer delivers all the data still in its history to a newly matched reader; data no longer in the history is not delivered.
    TRANSIENT_DURABILITY_QOS,       // The writer does not lose data when the application exits; matched readers receive all the data.
    PERSISTENT_DURABILITY_QOS       // Not implemented yet.
} DurabilityQosPolicyKind_t;

These policies must be compatible between the writer and the reader:

|  | writer QoS == TRANSIENT_DURABILITY_QOS | writer QoS == TRANSIENT_LOCAL_DURABILITY_QOS | writer QoS == VOLATILE_DURABILITY_QOS |
| --- | --- | --- | --- |
| reader QoS == TRANSIENT_DURABILITY_QOS | supported | not supported | not supported |
| reader QoS == TRANSIENT_LOCAL_DURABILITY_QOS | supported | supported | not supported |
| reader QoS == VOLATILE_DURABILITY_QOS | supported | supported | supported |

If the writer is configured with TRANSIENT_DURABILITY_QOS, a reader configured with any of the three policies can be matched.

If the writer is configured with TRANSIENT_LOCAL_DURABILITY_QOS, the reader can only use TRANSIENT_LOCAL_DURABILITY_QOS or VOLATILE_DURABILITY_QOS.

If the writer is configured with VOLATILE_DURABILITY_QOS, the reader can only use VOLATILE_DURABILITY_QOS.
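As an illustration at the DDS layer, the durability policy is set through DataWriterQos/DataReaderQos. A minimal sketch, assuming the Fast DDS 2.x API and headers:

#include <fastdds/dds/publisher/qos/DataWriterQos.hpp>
#include <fastdds/dds/subscriber/qos/DataReaderQos.hpp>

using namespace eprosima::fastdds::dds;

void configure_durability(DataWriterQos& wqos, DataReaderQos& rqos)
{
    // A late-joining reader will receive whatever is still in the writer's history.
    wqos.durability().kind = TRANSIENT_LOCAL_DURABILITY_QOS;

    // Compatible per the table above: a TRANSIENT_LOCAL writer matches
    // TRANSIENT_LOCAL or VOLATILE readers, but not a TRANSIENT reader.
    rqos.durability().kind = TRANSIENT_LOCAL_DURABILITY_QOS;

    // As matched_reader_add shows, history is only replayed to reliable readers.
    wqos.reliability().kind = RELIABLE_RELIABILITY_QOS;
    rqos.reliability().kind = RELIABLE_RELIABILITY_QOS;
}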

Step 2:

ReaderProxy::ReaderProxy(
        const WriterTimes& times,
        const RemoteLocatorsAllocationAttributes& loc_alloc,
        StatefulWriter* writer)
{
    nack_supression_event_ = new TimedEvent(writer_->getRTPSParticipant()->getEventResource(),
                    [&]() -> bool
                    {
                        writer_->perform_nack_supression(guid());
                        return false;
                    },
                    TimeConv::Time_t2MilliSecondsDouble(times.nackSupressionDuration));

    initial_heartbeat_event_ = new TimedEvent(writer_->getRTPSParticipant()->getEventResource(),
                    [&]() -> bool
                    {
                        writer_->intraprocess_heartbeat(this);
                        return false;
                    }, 0);

    stop();
}

Two TimedEvents are initialized here:

initial_heartbeat_event_ sends the initial, intra-process heartbeat; this TimedEvent is only used once.

nack_supression_event_ marks all in-flight messages as not yet acknowledged and at the same time starts the periodic heartbeat.

void StatefulWriter::perform_nack_supression(
        const GUID_t& reader_guid)
{
    std::unique_lock<RecursiveTimedMutex> lock(mp_mutex);

    for_matched_readers(matched_local_readers_, matched_datasharing_readers_, matched_remote_readers_,
            [this, &reader_guid](ReaderProxy* reader)
            {
                if (reader->guid() == reader_guid)
                {
                    reader->perform_nack_supression();
                    periodic_hb_event_->restart_timer();
                    return true;
                }
                return false;
            }
            );
}

The callback of nack_supression_event_ does two things:

1. Calls ReaderProxy::perform_nack_supression (sketched after this list).

2. Calls periodic_hb_event_->restart_timer() to start the periodic heartbeat.
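The core idea can be sketched as follows. This is a simplified illustration, not the verbatim ReaderProxy code; it assumes the ChangeForReader_t type and the status values used in the sources above:

#include <vector>
// ChangeForReader_t and its status values come from the Fast DDS rtps writer internals.

// When the nack-suppression timer fires, every change that was sent but is
// still "in flight" (UNDERWAY) is downgraded to UNACKNOWLEDGED, so the
// periodic heartbeat will prompt the reader to acknowledge or NACK it.
void on_nack_supression_timeout(std::vector<ChangeForReader_t>& changes_for_reader)
{
    for (ChangeForReader_t& change : changes_for_reader)
    {
        if (change.getStatus() == UNDERWAY)
        {
            change.setStatus(UNACKNOWLEDGED);
        }
    }
}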

Step 3

void ReaderProxy::start(
        const ReaderProxyData& reader_attributes,
        bool is_datasharing)
{
    locator_info_.start(
        reader_attributes.guid(),
        reader_attributes.remote_locators().unicast,
        reader_attributes.remote_locators().multicast,
        reader_attributes.m_expectsInlineQos,
        is_datasharing);

    is_active_ = true;
    durability_kind_ = reader_attributes.m_qos.m_durability.durabilityKind();
    expects_inline_qos_ = reader_attributes.m_expectsInlineQos;
    is_reliable_ = reader_attributes.m_qos.m_reliability.kind != BEST_EFFORT_RELIABILITY_QOS;
    disable_positive_acks_ = reader_attributes.disable_positive_acks();
    if (durability_kind_ == DurabilityKind_t::VOLATILE)
    {
        SequenceNumber_t min_sequence = writer_->get_seq_num_min();
        changes_low_mark_ = (min_sequence == SequenceNumber_t::unknown()) ?
                writer_->next_sequence_number() - 1 : min_sequence - 1;
    }
    else
    {
        acked_changes_set(SequenceNumber_t());  // Simulate initial acknack to set low mark
    }

    timers_enabled_.store(is_remote_and_reliable());
    if (is_local_reader())
    {
        // Start the initial heartbeat
        initial_heartbeat_event_->restart_timer();
    }

    EPROSIMA_LOG_INFO(RTPS_READER_PROXY, "Reader Proxy started");
}

This sets the parameters and initializes changes_low_mark_, which is used when sending.

Step 4:

void StatefulWriter::send_heartbeat_nts_(
        size_t number_of_readers,
        RTPSMessageGroup& message_group,
        bool final,
        bool liveliness)
{
    if (!number_of_readers)
    {
        return;
    }

    SequenceNumber_t firstSeq = get_seq_num_min();
    SequenceNumber_t lastSeq = get_seq_num_max();

    if (firstSeq == c_SequenceNumber_Unknown || lastSeq == c_SequenceNumber_Unknown)
    {
        assert(firstSeq == c_SequenceNumber_Unknown && lastSeq == c_SequenceNumber_Unknown);

        if (number_of_readers == 1 || liveliness)
        {
            firstSeq = next_sequence_number();
            lastSeq = firstSeq - 1;
        }
        else
        {
            return;
        }
    }
    else
    {
        assert(firstSeq <= lastSeq);
    }

    incrementHBCount();
    message_group.add_heartbeat(firstSeq, lastSeq, m_heartbeatCount, final, liveliness);
    // Update calculate of heartbeat piggyback.
    currentUsageSendBufferSize_ = static_cast<int32_t>(sendBufferSize_);

    EPROSIMA_LOG_INFO(RTPS_WRITER,
            getGuid().entityId << " Sending Heartbeat (" << firstSeq << " - " << lastSeq << ")" );
}

send_heartbeat_nts_ calls RTPSMessageGroup::add_heartbeat to add the heartbeat information.

This function is invoked from matched_reader_add, i.e. Step 1.

The rest of the sending flow can be found in FastDDS Source Code Analysis (11): Sending the First PDP Message (Part 2).

Eventually the send function is called, which sends the RTPSMessageGroup's full message out.

What the send function does is also covered in the PDP sending part, FastDDS Source Code Analysis (11): Sending the First PDP Message (Part 2).

1.3 Periodic Heartbeat

The above is the immediate heartbeat; in addition, heartbeats are also sent periodically.

periodic_hb_event_ = new TimedEvent(
        pimpl->getEventResource(),
        [&]() -> bool
        {
            return send_periodic_heartbeat();
        },
        TimeConv::Time_t2MilliSecondsDouble(m_times.heartbeatPeriod));

The default period is 3 s, but it can also be configured.
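The value comes from WriterTimes::heartbeatPeriod. At the DDS layer it can be adjusted through the reliable writer QoS, for example (a sketch assuming the Fast DDS 2.x API):

#include <fastdds/dds/publisher/qos/DataWriterQos.hpp>

void set_heartbeat_period(eprosima::fastdds::dds::DataWriterQos& wqos)
{
    // WriterTimes::heartbeatPeriod defaults to 3 s; shorten it to 1 s here.
    wqos.reliable_writer_qos().times.heartbeatPeriod = eprosima::fastrtps::Duration_t(1, 0);
}

Back to the callback of periodic_hb_event_, send_periodic_heartbeat: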

bool StatefulWriter::send_periodic_heartbeat(
        bool final,
        bool liveliness)
{
    std::lock_guard<RecursiveTimedMutex> guardW(mp_mutex);
    std::lock_guard<LocatorSelectorSender> guard_locator_selector_general(locator_selector_general_);

    bool unacked_changes = false;
    // Not a liveliness-related heartbeat
    if (!liveliness)
    {
        SequenceNumber_t first_seq_to_check_acknowledge = get_seq_num_min();
        if (SequenceNumber_t::unknown() == first_seq_to_check_acknowledge)
        {
            first_seq_to_check_acknowledge = mp_history->next_sequence_number() - 1;
        }

        unacked_changes = for_matched_readers(matched_local_readers_, matched_datasharing_readers_,
                        matched_remote_readers_,
                        [first_seq_to_check_acknowledge](ReaderProxy* reader)
                        {
                            return reader->has_unacknowledged(first_seq_to_check_acknowledge);
                        }
                        );

        if (unacked_changes)
        {
            try
            {
                //TODO if separating, here sends periodic for all readers, instead of ones needed it.
                send_heartbeat_to_all_readers();
            }
            catch (const RTPSMessageGroup::timeout&)
            {
                EPROSIMA_LOG_ERROR(RTPS_WRITER, "Max blocking time reached");
            }
        }
    }
    ------
    return unacked_changes;
}

liveliness defaults to false.

When a StatefulWriter sends messages, a reader that receives them has to reply with an ACK message indicating which messages it has received.

If the history still contains changes that have not been acknowledged by some reader, send_heartbeat_to_all_readers has to be called.

void StatefulWriter::send_heartbeat_to_all_readers()
{
    // This method is only called from send_periodic_heartbeat

    if (m_separateSendingEnabled)
    {
        for (ReaderProxy* reader : matched_remote_readers_)
        {
            send_heartbeat_to_nts(*reader);
        }
    }
    else
    {
        for (ReaderProxy* reader : matched_local_readers_)
        {
            intraprocess_heartbeat(reader);
        }

        for (ReaderProxy* reader : matched_datasharing_readers_)
        {
            reader->datasharing_notify();
        }

        if (there_are_remote_readers_)
        {
            RTPSMessageGroup group(mp_RTPSParticipant, this, &locator_selector_general_);
            select_all_readers_nts(group, locator_selector_general_);

            assert(
                (SequenceNumber_t::unknown() == get_seq_num_min() && SequenceNumber_t::unknown() == get_seq_num_max()) ||
                (SequenceNumber_t::unknown() != get_seq_num_min() &&
                SequenceNumber_t::unknown() != get_seq_num_max()));

            add_gaps_for_holes_in_history_(group);
		
            send_heartbeat_nts_(locator_selector_general_.all_remote_readers.size(), group, disable_positive_acks_);
        }
    }
}

m_separateSendingEnabled is a key parameter here; separate sending is generally only used in server-client scenarios and in security scenarios.

If m_separateSendingEnabled is true, send_heartbeat_to_nts is called for each remote reader.

If m_separateSendingEnabled is false, intra-process, inter-process (data-sharing) and network readers are handled separately:

Intra-process: intraprocess_heartbeat is called to deliver the message inside the process.

Inter-process: datasharing_notify is called to deliver the message across processes.

Over the network:

1. If not all changes in the history can be sent to the remote reader, a GAP message is sent first to let the remote side know, by calling add_gaps_for_holes_in_history_.

2. send_heartbeat_nts_ is called to append the heartbeat submessage to the full message; this function was described above.

void StatefulWriter::send_heartbeat_to_nts(
        ReaderProxy& remoteReaderProxy,
        bool liveliness,
        bool force /* = false */)
{
    SequenceNumber_t first_seq_to_check_acknowledge = get_seq_num_min();
    if (SequenceNumber_t::unknown() == first_seq_to_check_acknowledge)
    {
        first_seq_to_check_acknowledge = mp_history->next_sequence_number() - 1;
    }
    if (remoteReaderProxy.is_reliable() &&
            (force || liveliness || remoteReaderProxy.has_unacknowledged(first_seq_to_check_acknowledge)))
    {
        if (remoteReaderProxy.is_local_reader())
        {
            intraprocess_heartbeat(&remoteReaderProxy, liveliness);
        }
        else if (remoteReaderProxy.is_datasharing_reader())
        {
            remoteReaderProxy.datasharing_notify();
        }
        else
        {
            try
            {
                RTPSMessageGroup group(mp_RTPSParticipant, this, remoteReaderProxy.message_sender());
                SequenceNumber_t firstSeq = get_seq_num_min();
                SequenceNumber_t lastSeq = get_seq_num_max();

                if (firstSeq != c_SequenceNumber_Unknown && lastSeq != c_SequenceNumber_Unknown)
                {
                    assert(firstSeq <= lastSeq);
                    if (!liveliness)
                    {
                        add_gaps_for_holes_in_history_(group);
                    }
                }

                send_heartbeat_nts_(1u, group, disable_positive_acks_, liveliness);
            }
            catch (const RTPSMessageGroup::timeout&)
            {
                EPROSIMA_LOG_ERROR(RTPS_WRITER, "Max blocking time reached");
            }
        }
    }
}

This function handles intra-process, inter-process and network readers separately:

Intra-process: intraprocess_heartbeat is called to deliver the message inside the process.

Inter-process: datasharing_notify is called to deliver the message across processes.

Over the network:

1. If not all changes in the history can be sent to the remote reader, a GAP message is sent first to let the remote side know.

2. send_heartbeat_nts_ is called to append the heartbeat submessage to the full message; this function was described above.

As we can see, send_heartbeat_to_nts and send_heartbeat_to_all_readers send messages in a similar way, and their final handling is essentially the same.

1.4 Class Diagram

classDiagram
      EDPSimple *-- StatefulReader
      EDPSimple *-- StatefulWriter
      StatefulReader *-- WriterProxy
      WriterProxy *-- heartbeat_response_
      WriterProxy *-- initial_acknack_
      StatefulWriter *-- ReaderProxy
      StatefulWriter *-- periodic_hb_event_
      StatefulWriter *-- nack_response_event_
      ReaderProxy *-- initial_heartbeat_event_
      ReaderProxy *-- nack_supression_event_
      
      class StatefulReader{
      		+ResourceLimitedVector<WriterProxy*> matched_writers
      }
      class WriterProxy{
      		+TimedEvent* heartbeat_response_
      		+TimedEvent* initial_acknack_
      }
      class StatefulWriter{
      		+ResourceLimitedVector<ReaderProxy*> matched_local_readers_
      		+ResourceLimitedVector<ReaderProxy*> matched_datasharing_readers_
      		+ResourceLimitedVector<ReaderProxy*> matched_remote_readers_
      		TimedEvent* periodic_hb_event_
      		TimedEvent* nack_response_event_
      }
      class ReaderProxy{
      		+TimedEvent* initial_heartbeat_event_
      		+TimedEvent* nack_supression_event_
      }

From the class diagram we can see that:

1. EDPSimple owns a StatefulReader and a StatefulWriter.

2. The StatefulWriter keeps three vectors holding the information of the readers matched to it, i.e. the reader proxies (ReaderProxy).

3. Each ReaderProxy owns two TimedEvents, initial_heartbeat_event_ and nack_supression_event_.

initial_heartbeat_event_ sends the initial heartbeat message.

nack_supression_event_ marks changes that were sent but never confirmed by an ACKNACK as unacknowledged, and restarts the periodic heartbeat (see Step 2 above).

1.5 A Concrete Heartbeat

[Figure: a HEARTBEAT submessage captured with tcpdump]

This is a concrete heartbeat message we captured with tcpdump.

readerEntityId shows that this message is addressed to ENTITYID_BUILTIN_PUBLICATIONS_READER, and the sender's writerEntityId is ENTITYID_BUILTIN_PUBLICATIONS_WRITER.

firstAvailableSeqNumber: 1

lastSeqNumber: 1

This means the sequence numbers of the changes that can be fetched from this writer's history lie in the range [1, 1].

count: 1

This is the ordinal number of the heartbeat this writer has sent to the reader.
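The fields in the capture map directly onto the HEARTBEAT submessage defined by the RTPS specification. An illustrative layout is shown below; the field names follow the RTPS spec, and the helper types are simplified stand-ins rather than the Fast DDS ones:

#include <cstdint>

// Simplified field types, for illustration only.
struct EntityId       { uint8_t value[4]; };
struct SequenceNumber { int32_t high; uint32_t low; };

// Body of an RTPS HEARTBEAT submessage. The submessage header additionally
// carries a Final flag (when set, the reader is not required to reply with an
// ACKNACK) and a Liveliness flag (the heartbeat asserts the writer's liveliness).
struct HeartbeatSubmessage
{
    EntityId       readerEntityId; // e.g. ENTITYID_BUILTIN_PUBLICATIONS_READER
    EntityId       writerEntityId; // e.g. ENTITYID_BUILTIN_PUBLICATIONS_WRITER
    SequenceNumber firstSN;        // first sequence number available in the writer's history
    SequenceNumber lastSN;         // last sequence number available in the writer's history
    uint32_t       count;          // incremented for every heartbeat this writer sends
};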
