Audio HAL: Separating Uplink and Downlink in Call Recording


Background

Most apps that do speech recognition or call answering need the call's uplink (VOICE_UPLINK) and downlink (VOICE_DOWNLINK) audio captured separately for real-time transcription.

Two possible approaches:

1. AudioRecord's stereo recording mixes both sources into one PCM stream; from the raw binary alone you cannot tell which data belongs to the left channel and which to the right. The fix is to exploit the fixed interleaving of the stereo PCM recording: samples alternate in frames, left-channel sample first, right-channel sample second, so the stream can be split apart.

2. Run two MediaRecorder instances at the same time, one recording VOICE_UPLINK and the other VOICE_DOWNLINK, so the two directions never share data.

Either approach requires digging into the Audio HAL code.

Final Conclusion

We started with direction two. Android's recording mechanism only allows one app to record at a time, so that restriction had to be lifted first. Following "Android Audio - 支持多应用同时录音_Android8.1修改方法", two MediaRecorder instances could indeed be started at the same time. But no matter how AudioSource was set (VOICE_DOWNLINK or VOICE_UPLINK), the saved recordings had identical content, with uplink and downlink still mixed together. The underlying buffer turned out to be one and the same, which is why the saved data matches. Several days of changes there went nowhere; the detailed analysis follows below.

We then tried direction one: fetch the raw PCM with AudioRecord and split the alternating left/right channel data. That finally solved it!
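The split in direction one is simple byte shuffling. Below is a minimal sketch, assuming 16-bit little-endian stereo PCM (the common AudioRecord configuration); the function name is ours. Each stereo frame carries one left-channel sample followed by one right-channel sample:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Split interleaved 16-bit stereo PCM (L0 R0 L1 R1 ...) into two mono buffers.
static void deinterleaveStereoPcm16(const int16_t *interleaved, size_t frameCount,
                                    std::vector<int16_t> &left,
                                    std::vector<int16_t> &right) {
    left.resize(frameCount);
    right.resize(frameCount);
    for (size_t i = 0; i < frameCount; ++i) {
        left[i]  = interleaved[2 * i];      // left sample comes first in each frame
        right[i] = interleaved[2 * i + 1];  // right sample follows
    }
}
```

With the HAL changes described in this article and a stereo VOICE_CALL recording, one channel carries the uplink and the other the downlink, so the two output buffers are the separated streams we wanted.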

Analysis

The main code lives under:

vendor/mediatek/proprietary/hardware/audio/common/V3/

The framework-side analysis is skipped here; see "Android 5.1 Audio HAL分析" for that part.

We start from adev_open_input_stream():

vendor\mediatek\proprietary\hardware\audio\common\aud_drv\audio_hw_hal.cpp

static int adev_open_input_stream(struct audio_hw_device *dev,
                                      audio_io_handle_t handle,
                                      audio_devices_t devices,
                                      struct audio_config *config,
                                      struct audio_stream_in **stream_in,
                                      audio_input_flags_t flags /*__unused*/,
                                      const char *address __unused,
                                      audio_source_t source /*__unused*/) {
#ifdef AUDIO_HAL_PROFILE_ENTRY_FUNCTION
        AudioAutoTimeProfile _p(__func__);
#endif
        struct legacy_audio_device *ladev = to_ladev(dev);
        status_t status = (status_t) handle;
        struct legacy_stream_in *in;
        int ret;
        AudioParameter param = AudioParameter();
        param.addInt(String8(AudioParameter::keyInputSource), source);
        in = (struct legacy_stream_in *)calloc(1, sizeof(*in));
        if (!in) {
            return -ENOMEM;
        }
// Log added here shows that openInputStreamWithFlags() is the path taken
#ifdef UPLINK_LOW_LATENCY
        ALOGD("-%s() %d", __FUNCTION__, __LINE__);
        in->legacy_in = ladev->hwif->openInputStreamWithFlags(devices, (int *) &config->format,
                                                              &config->channel_mask, &config->sample_rate,
                                                              &status, (audio_in_acoustics_t)0, flags);
#else
        ALOGD("-%s() %d", __FUNCTION__, __LINE__);
        in->legacy_in = ladev->hwif->openInputStream(devices, (int *) &config->format,
                                                     &config->channel_mask, &config->sample_rate,
                                                     &status, (audio_in_acoustics_t)0);
#endif
        if (!in->legacy_in) {
            ret = status;
            goto err_open;
        }
        in->legacy_in->setParameters(param.toString());
        in->stream.common.get_sample_rate = in_get_sample_rate;
        in->stream.common.set_sample_rate = in_set_sample_rate;
.....

This takes us into AudioALSAHardware::openInputStreamWithFlags:

vendor\mediatek\proprietary\hardware\audio\common\V3\aud_drv\AudioALSAHardware.cpp

AudioMTKStreamInInterface *AudioALSAHardware::openInputStreamWithFlags(
    uint32_t devices,
    int *format,
    uint32_t *channels,
    uint32_t *sampleRate,
    status_t *status,
    audio_in_acoustics_t acoustics,
    audio_input_flags_t flags) {
	ALOGD("-%s() %d", __FUNCTION__, __LINE__);
    return mStreamManager->openInputStream(devices, format, channels, sampleRate, status, acoustics, flags);
}

Follow it into AudioALSAStreamManager::openInputStream:

vendor\mediatek\proprietary\hardware\audio\common\V3\aud_drv\AudioALSAStreamManager.cpp

AudioMTKStreamInInterface *AudioALSAStreamManager::openInputStream(
    uint32_t devices,
    int *format,
    uint32_t *channels,
    uint32_t *sampleRate,
    status_t *status,
    audio_in_acoustics_t acoustics,
    uint32_t input_flag) {

.....

	 // create stream in
    AudioALSAStreamIn *pAudioALSAStreamIn = new AudioALSAStreamIn();
    audio_devices_t input_device = static_cast<audio_devices_t>(devices);
    bool sharedDevice = (input_device & ~AUDIO_DEVICE_BIT_IN) & (AUDIO_DEVICE_IN_BUILTIN_MIC | AUDIO_DEVICE_IN_BACK_MIC | AUDIO_DEVICE_IN_WIRED_HEADSET);
.....	
	pAudioALSAStreamIn->set(devices, format, channels, sampleRate, status, acoustics, input_flag);

    ALOGD("-%s(), in = %p, status = 0x%x, mStreamInVector.size() = %zu",
          __FUNCTION__, pAudioALSAStreamIn, *status, mStreamInVector.size());
    return pAudioALSAStreamIn;

AudioALSAStreamIn is the key object; it ends up doing the real work. pAudioALSAStreamIn is returned to audio_hw_hal.cpp, where the subsequent call in->legacy_in->setParameters(param.toString()) actually invokes AudioALSAStreamIn::setParameters():

vendor\mediatek\proprietary\hardware\audio\common\V3\aud_drv\AudioALSAStreamIn.cpp

status_t AudioALSAStreamIn::setParameters(const String8 &keyValuePairs) {
    ALOGD("+%s(): %s", __FUNCTION__, keyValuePairs.string());
    AudioParameter param = AudioParameter(keyValuePairs);

    /// keys
    const String8 keyInputSource = String8(AudioParameter::keyInputSource);
    const String8 keyRouting     = String8(AudioParameter::keyRouting);

    /// parse key value pairs
    status_t status = NO_ERROR;
    int value = 0;
    int oldCount;
    String8 value_str;
    /// input source
    if (param.getInt(keyInputSource, value) == NO_ERROR) {
        param.remove(keyInputSource);
        oldCount = android_atomic_inc(&mLockCount);
        // TODO(Harvey): input source
        AL_AUTOLOCK(mLock);
        ALOGV("%s() InputSource = %d", __FUNCTION__, value);
        mStreamAttributeTarget.input_source = static_cast<audio_source_t>(value);

......
        

The key value is mStreamAttributeTarget.input_source: it receives the audio_source_t value that setAudioSource() passed down from the upper layer.

system\media\audio\include\system\audio-base.h

typedef enum {
    AUDIO_SOURCE_DEFAULT = 0,
    AUDIO_SOURCE_MIC = 1,
    AUDIO_SOURCE_VOICE_UPLINK = 2,
    AUDIO_SOURCE_VOICE_DOWNLINK = 3,
    AUDIO_SOURCE_VOICE_CALL = 4,
    AUDIO_SOURCE_CAMCORDER = 5,
    AUDIO_SOURCE_VOICE_RECOGNITION = 6,
    AUDIO_SOURCE_VOICE_COMMUNICATION = 7,
    AUDIO_SOURCE_REMOTE_SUBMIX = 8,
    AUDIO_SOURCE_UNPROCESSED = 9,
    AUDIO_SOURCE_CNT = 10,
    AUDIO_SOURCE_MAX = 9, // (CNT - 1)
    AUDIO_SOURCE_FM_TUNER = 1998,
    AUDIO_SOURCE_HOTWORD = 1999,
} audio_source_t;
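For illustration, the key-value hand-off can be mimicked in a few lines; this is a hypothetical stand-in for AudioParameter::getInt(), not the real class. Per the enum above, setAudioSource(VOICE_UPLINK) ends up as the pair "input_source=2":

```cpp
#include <cstdlib>
#include <string>

// Hypothetical stand-in for AudioParameter::getInt(): extract the integer value
// for `key` from a "k1=v1;k2=v2" style parameter string; return -1 if missing.
static int getIntParam(const std::string &kvPairs, const std::string &key) {
    size_t pos = kvPairs.find(key + "=");
    if (pos == std::string::npos) return -1;
    // value runs from after '=' up to the next ';' (or the end of the string)
    size_t start = pos + key.size() + 1;
    size_t end = kvPairs.find(';', start);
    std::string value = kvPairs.substr(
        start, end == std::string::npos ? std::string::npos : end - start);
    return std::atoi(value.c_str());
}
```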


With all parameters set, reading begins. Back in audio_hw_hal.cpp:

    static ssize_t in_read(struct audio_stream_in *stream, void *buffer,
                           size_t bytes) {
#ifdef AUDIO_HAL_PROFILE_ENTRY_FUNCTION
        AudioAutoTimeProfile _p(__func__, AUDIO_HAL_FUNCTION_READ_NS);
#endif
        struct legacy_stream_in *in =
                reinterpret_cast<struct legacy_stream_in *>(stream);
        return in->legacy_in->read(buffer, bytes);
    }

Again AudioALSAStreamIn does the work, in ssize_t AudioALSAStreamIn::read(void *buffer, ssize_t bytes):

vendor\mediatek\proprietary\hardware\audio\common\V3\aud_drv\AudioALSAStreamIn.cpp

ssize_t AudioALSAStreamIn::read(void *buffer, ssize_t bytes) {
    ALOGV("%s(), bytes= %zu", __FUNCTION__, bytes);
    if (tempDebugflag) {
        ALOGD("%s()+, bytes= %zu", __FUNCTION__, bytes);
    }
    ssize_t ret_size = bytes;
    /* fast record is RT thread and keep streamin lock.
    so other thread can't get streamin lock. if necessary, read will active release CPU. */
    int tryCount = 10;
    while ((mLockCount != 0 || mSuspendLockCount != 0) && tryCount--) {
        ALOGV("%s, free CPU, mLockCount = %d, mSuspendLockCount = %d, tryCount %d", __FUNCTION__, mLockCount, mSuspendLockCount, tryCount);
        usleep(300);
        if (tryCount == 0) {
            ALOGD("%s, free CPU, mLockCount = %d, mSuspendLockCount = %d, tryCount %d", __FUNCTION__, mLockCount, mSuspendLockCount, tryCount);
        }
    }

....
	    /// check open
        if (mStandby == true) {
            status = open();
        }

Note the open() call here; this method deserves a closer look:

status_t AudioALSAStreamIn::open() {
    // call open() only when mLock is locked.
    ASSERT(AL_TRYLOCK(mLock) != 0);

    ALOGD("%s()", __FUNCTION__);

    status_t status = NO_ERROR;

    if (mStandby == true) {
        // create capture handler
        ASSERT(mCaptureHandler == NULL);
        mCaptureHandler = mStreamManager->createCaptureHandler(&mStreamAttributeTarget);
        if (mCaptureHandler == NULL) {
            status = BAD_VALUE;
            return status;
        }
        mStandby = false;

        // open audio hardware
        status = mCaptureHandler->open();
        ASSERT(status == NO_ERROR);

        OpenPCMDump();
    }

    return status;
}

mStreamManager->createCaptureHandler() receives the stream attributes set earlier:

vendor\mediatek\proprietary\hardware\audio\common\V3\aud_drv\AudioALSAStreamManager.cpp

AudioALSACaptureHandlerBase *AudioALSAStreamManager::createCaptureHandler(
    stream_attribute_t *stream_attribute_target) {
    ALOGD("+%s(), mAudioMode = %d, input_source = %d, input_device = 0x%x, mBypassDualMICProcessUL=%d, sample_rate=%d",
          __FUNCTION__, mAudioMode, stream_attribute_target->input_source, stream_attribute_target->input_device, mBypassDualMICProcessUL, stream_attribute_target->sample_rate);
    //AL_AUTOLOCK(mLock);

.....


        else if ((stream_attribute_target->input_source == AUDIO_SOURCE_VOICE_UPLINK) || (stream_attribute_target->input_source == AUDIO_SOURCE_VOICE_DOWNLINK) ||
                 (stream_attribute_target->input_source == AUDIO_SOURCE_VOICE_CALL) ||
                 ((isModeInPhoneCall() == true) && (WCNChipController::GetInstance()->IsBTMergeInterfaceSupported() == false) && (stream_attribute_target->input_device == AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET))) {
            if ((isModeInPhoneCall() == false) ||
                (SpeechDriverFactory::GetInstance()->GetSpeechDriver()->GetApSideModemStatus(SPEECH_STATUS_MASK) == false)) {
                ALOGD("Can not open PhoneCall Record now !! Because it is not PhoneCallMode or speech driver is not ready, return NULL");
                AL_UNLOCK(mLock);
                return NULL;
            }
            pCaptureHandler = new AudioALSACaptureHandlerVoice(stream_attribute_target);

So when a call is in progress and the audio source is VOICE_UPLINK, VOICE_DOWNLINK or VOICE_CALL, an AudioALSACaptureHandlerVoice is created and returned.

Back in AudioALSAStreamIn::open(), the next call is mCaptureHandler->open():

vendor\mediatek\proprietary\hardware\audio\common\V3\aud_drv\AudioALSACaptureHandlerVoice.cpp

status_t AudioALSACaptureHandlerVoice::open() {
    ALOGD("+%s(), input_device = 0x%x, input_source = 0x%x",
          __FUNCTION__, mStreamAttributeTarget->input_device, mStreamAttributeTarget->input_source);
    //+open(), input_device = 0x80000004, input_source = 0x3
    //+open(), input_device = 0x80000080, input_source = 0x3
    ASSERT(mCaptureDataClient == NULL);

    AudioALSACaptureDataProviderBase *provider = NULL;//

#ifndef LEGACY_VOICE_RECORD
    switch (mStreamAttributeTarget->input_source) {
#ifdef INCALL_DL_RECORD_DISABLED
    default: {
        provider = AudioALSACaptureDataProviderVoiceUL::getInstance();
        break;
    }
#else
    case AUDIO_SOURCE_VOICE_DOWNLINK: {
        provider = AudioALSACaptureDataProviderVoiceDL::getInstance();
        break;
    }
    case AUDIO_SOURCE_VOICE_UPLINK: {
        provider = AudioALSACaptureDataProviderVoiceUL::getInstance();
        break;
    }
    case AUDIO_SOURCE_VOICE_CALL: {
        provider = AudioALSACaptureDataProviderVoiceMix::getInstance();
        break;
    }
    default: {
        provider = AudioALSACaptureDataProviderVoiceUL::getInstance();
        break;
    }
#endif
    }
#else
    // legacy voice record
    provider = AudioALSACaptureDataProviderVoice::getInstance();
#endif


#if defined(MTK_AURISYS_FRAMEWORK_SUPPORT)
    mCaptureDataClient = new AudioALSACaptureDataClientAurisysNormal(provider, mStreamAttributeTarget, NULL);
#else
    mCaptureDataClient = new AudioALSACaptureDataClient(provider, mStreamAttributeTarget);
#endif

    ALOGD("-%s()", __FUNCTION__);
    return NO_ERROR;
}

Here it gets interesting: a different AudioALSACaptureDataProvider is instantiated depending on the audio source. With extra logging, I started two recording threads with the sources set to VOICE_UPLINK and VOICE_DOWNLINK respectively, yet both logs showed the same provider: the last one wins, and the earlier one is overwritten. The provider is then passed into new AudioALSACaptureDataClient(provider, mStreamAttributeTarget):
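The overwrite is consistent with the getInstance() calls in the open() listing above: each provider is a process-wide singleton, so every client attached to it reads from the same underlying buffer. A toy model (names are ours) of why two "recorders" then save identical data:

```cpp
#include <cstdint>
#include <vector>

// Toy model: a singleton provider owns one buffer; all attached clients read it.
class ToyProvider {
public:
    static ToyProvider *getInstance() {
        static ToyProvider sInstance;   // one instance per process, like getInstance()
        return &sInstance;
    }
    void write(const std::vector<int16_t> &data) { mBuf = data; }
    const std::vector<int16_t> &read() const { return mBuf; }
private:
    ToyProvider() = default;
    std::vector<int16_t> mBuf;
};
```

Any two clients that call ToyProvider::getInstance()->read() get the same bytes, which is exactly what the two MediaRecorder output files showed.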

vendor\mediatek\proprietary\hardware\audio\common\V3\aud_drv\AudioALSACaptureDataClient.cpp

AudioALSACaptureDataClient::AudioALSACaptureDataClient(AudioALSACaptureDataProviderBase *pCaptureDataProvider, stream_attribute_t *stream_attribute_target) :
    mCaptureDataProvider(pCaptureDataProvider),
    mAudioSpeechEnhanceInfoInstance(AudioSpeechEnhanceInfo::getInstance()),
    mStreamAttributeSource(mCaptureDataProvider->getStreamAttributeSource()),
    mStreamAttributeTarget(stream_attribute_target),
    mBliSrc(NULL),
    mMicMute(false),
    mMuteTransition(false),
    mSPELayer(NULL),
    mAudioALSAVolumeController(AudioVolumeFactory::CreateAudioVolumeController()),
    mChannelRemixOp(CHANNEL_REMIX_NOP),
    mBesRecTuningEnable(false),
    //echoref+++
    mCaptureDataProviderEchoRef(NULL),
    mStreamAttributeSourceEchoRef(NULL),
    mStreamAttributeTargetEchoRef(NULL),
    mBliSrcEchoRef(NULL),
    mBliSrcEchoRefBesRecord(NULL)
    //echoref---
{
    ALOGD("%s()", __FUNCTION__);

    // init member struct
    memset((void *)&mEchoRefRawDataBuf, 0, sizeof(mEchoRefRawDataBuf));
    memset((void *)&mEchoRefSrcDataBuf, 0, sizeof(mEchoRefSrcDataBuf));

    // raw data
    memset((void *)&mRawDataBuf, 0, sizeof(mRawDataBuf));
    mRawDataBuf.pBufBase = new char[kClientBufferSize];
    mRawDataBuf.bufLen   = kClientBufferSize;
    mRawDataBuf.pRead    = mRawDataBuf.pBufBase;
    mRawDataBuf.pWrite   = mRawDataBuf.pBufBase;
    ASSERT(mRawDataBuf.pBufBase != NULL);
....

 // init SRC
    if (mStreamAttributeSource->sample_rate != mStreamAttributeTarget->sample_rate) {
        ALOGD("sample_rate: %d => %d, num_channels: %d => %d, audio_format: 0x%x => 0x%x",
              mStreamAttributeSource->sample_rate, mStreamAttributeTarget->sample_rate,
              mStreamAttributeSource->num_channels, mStreamAttributeSource->num_channels,
              mStreamAttributeSource->audio_format, mStreamAttributeTarget->audio_format);

        SRC_PCM_FORMAT  SrcFormat = mStreamAttributeTarget->audio_format == AUDIO_FORMAT_PCM_16_BIT ? SRC_IN_Q1P15_OUT_Q1P15 : SRC_IN_Q1P31_OUT_Q1P31;
        mBliSrc = newMtkAudioSrc(
                      mStreamAttributeSource->sample_rate, mStreamAttributeSource->num_channels,
                      mStreamAttributeTarget->sample_rate, mStreamAttributeSource->num_channels,
                      SrcFormat);
        mBliSrc->open();
    }
    if (mStreamAttributeTarget->BesRecord_Info.besrecord_enable) {
        //move CheckNeedBesRecordSRC to here for mStreamAttributeSource info
        CheckNeedBesRecordSRC();
    }

    CheckChannelRemixOp();
    //debug++
    bTempDebug = mAudioSpeechEnhanceInfoInstance->GetDebugStatus();
    //debug--
}

After all that initialization (note the mBliSrc->open() for the sample-rate converter), recording is finally about to start. Back to the matching provider; take AudioALSACaptureDataProviderVoiceUL as the example:

vendor\mediatek\proprietary\hardware\audio\common\V3\aud_drv\AudioALSACaptureDataProviderVoiceUL.cpp

status_t AudioALSACaptureDataProviderVoiceUL::open() {
    ALOGD("%s()", __FUNCTION__);
    ASSERT(mEnable == false);

    SpeechDataProcessingHandler::getInstance()->getStreamAttributeSource(SpeechDataProcessingHandler::VOICE_UL_CALLER, &mStreamAttributeSource);
    mPeriodBufferSize = getPeriodBufSize(&mStreamAttributeSource, UPLINK_NORMAL_LATENCY_MS);
    mRawDataRingBuf.bufLen = mPeriodBufferSize * 4;

    /* Setup ringbuf */
    mRawDataRingBuf.pBufBase = new char[mRawDataRingBuf.bufLen];
    mRawDataRingBuf.pRead    = mRawDataRingBuf.pBufBase;
    mRawDataRingBuf.pWrite   = mRawDataRingBuf.pBufBase;
    mRawDataRingBuf.pBufEnd  = mRawDataRingBuf.pBufBase + mRawDataRingBuf.bufLen;

    ALOGD("%s(), mStreamAttributeSource: audio_format = %d, num_channels = %d, audio_channel_mask = %x, sample_rate = %d, periodBufferSize = %d\n",
          __FUNCTION__, mStreamAttributeSource.audio_format, mStreamAttributeSource.num_channels, mStreamAttributeSource.audio_channel_mask, mStreamAttributeSource.sample_rate, mPeriodBufferSize);

    mEnable = true;

    OpenPCMDump(LOG_TAG);

    return SpeechDataProcessingHandler::getInstance()->recordOn(RECORD_TYPE_UL);
}
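mRawDataRingBuf set up above is an ordinary ring buffer described by base/read/write/end pointers. An index-based sketch of the write side with wraparound (our simplification; the HAL works on raw char pointers):

```cpp
#include <cstddef>
#include <vector>

// Minimal ring buffer: writes wrap around; the oldest data is overwritten when full.
struct ToyRingBuf {
    std::vector<char> buf;
    size_t writePos = 0;

    explicit ToyRingBuf(size_t len) : buf(len) {}

    void write(const char *data, size_t bytes) {
        for (size_t i = 0; i < bytes; ++i) {
            buf[writePos] = data[i];
            writePos = (writePos + 1) % buf.size();  // wrap at the end of the buffer
        }
    }
};
```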

SpeechDataProcessingHandler is crucial here; as the name says, it is the handler that processes the speech data. Let's see what SpeechDataProcessingHandler::getInstance() sets up:

vendor\mediatek\proprietary\hardware\audio\common\V3\aud_drv\SpeechDataProcessingHandler.cpp

SpeechDataProcessingHandler::SpeechDataProcessingHandler() {
    ALOGD("+%s()", __FUNCTION__);

    // Init thread resources
    mStopThreadFlag = false;
    mBliSrcUL = NULL;
    mBliSrcDL = NULL;
    mSrcSampleRateUL = 0;
    mSrcSampleRateDL = 0;
    mSpeechRecordOn = false;

    int ret;

    ret = pthread_cond_init(&mSpeechDataNotifyEvent, NULL);
    if (ret != 0) {
        ALOGE("mSpeechDataNotifyEvent create fail!!!");
    }

    ret = pthread_mutex_init(&mSpeechDataNotifyMutex, NULL);
    if (ret != 0) {
        ALOGE("nSpeechDataNotifyMutex create fail!!!");
    }

    ret = pthread_create(&mSpeechDataProcessingThread, NULL, SpeechDataProcessingHandler::threadLoop, (void *)this);
    if (ret != 0) {
        ALOGE("mSpeechDataProcessingThread create fail!!!");
    } else {
        ALOGD("mSpeechDataProcessingThread = %lu created", mSpeechDataProcessingThread);
    }

    ALOGD("-%s()", __FUNCTION__);
}

The constructor spawns a thread that keeps pulling recorded data. threadLoop() is especially important; adding logs there shows the exact execution flow. Received data is processed in processSpeechPacket().

The last line of AudioALSACaptureDataProviderVoiceUL::open() above calls recordOn(RECORD_TYPE_UL); the record_type_t values are defined in SpeechType.h:

vendor\mediatek\proprietary\hardware\audio\common\include\SpeechType.h

enum record_type_t {
    RECORD_TYPE_UL = 0,
    RECORD_TYPE_DL = 1,
    RECORD_TYPE_MIX = 2
};

static int mUserCounter = 0;
status_t SpeechDataProcessingHandler::recordOn(record_type_t type __unused) {
    ALOGD("+%s(),type_record = %d\n", __FUNCTION__, type);
    AL_AUTOLOCK(speechDataProcessingHandlerLock);

    mUserCounter++;
    if (mUserCounter == 1) {
            SpeechDriverFactory::GetInstance()->GetSpeechDriver()->RecordOn(RECORD_TYPE_MIX);
            //SpeechDriverFactory::GetInstance()->GetSpeechDriver()->RecordOn(type);
        ALOGD("%s(), First user, record on.\n", __FUNCTION__);
    } else {
        ALOGD("%s(), Record already on. user = %d\n", __FUNCTION__, mUserCounter);
    }

    ALOGD("-%s()\n", __FUNCTION__);
    return NO_ERROR;
}

Note that the RECORD_TYPE_UL just passed in is not used: the driver is always switched on with RECORD_TYPE_MIX. I was naive enough to assume that changing this to `type` would make UL and DL record correctly, but it does not. Continue into SpeechDriver()->RecordOn(RECORD_TYPE_MIX):

vendor\mediatek\proprietary\hardware\audio\common\V3\speech_driver\SpeechDriverLAD.cpp

/*==============================================================================
 *                     Recording Control
 *============================================================================*/
status_t SpeechDriverLAD::RecordOn(record_type_t type_record) {
    ALOGD("%s(), sample_rate = %d, channel = %d, type_record = %d, MSG_A2M_RECORD_RAW_PCM_ON", __FUNCTION__, mRecordSampleRateType, mRecordChannelType, type_record);
    uint16_t param_16bit;

    SetApSideModemStatus(RAW_RECORD_STATUS_MASK);
    mRecordType = type_record;
    pCCCI->SetPcmRecordType(type_record);
    param_16bit = mRecordSampleRateType  | (mRecordChannelType << 4);
    return pCCCI->SendMessageInQueue(pCCCI->InitCcciMailbox(MSG_A2M_RECORD_RAW_PCM_ON, param_16bit, 0));
}
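The last line packs two fields into one 16-bit mailbox parameter: judging from the shift, the low nibble holds the sample-rate type and the next nibble the channel type. A small pack/unpack sketch (the helper names are ours):

```cpp
#include <cstdint>

// Pack as in RecordOn(): low nibble = sample-rate type, next nibble = channel type.
static uint16_t packRecordParam(uint16_t sampleRateType, uint16_t channelType) {
    return sampleRateType | (channelType << 4);
}

// Unpack the two fields again (assumed 4-bit widths, based on the shift above).
static uint16_t sampleRateTypeOf(uint16_t param) { return param & 0x0F; }
static uint16_t channelTypeOf(uint16_t param)   { return (param >> 4) & 0x0F; }
```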

Once the modem-side recording is on, threadLoop(void *arg) keeps receiving data in a loop. Based on rawPcmDir (RECORD_TYPE_UL, RECORD_TYPE_DL or RECORD_TYPE_MIX) it checks whether the corresponding provider exists, and if so calls provideModemRecordDataToProvider(ringBuf) to dispatch the data.
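That dispatch step can be modeled as routing each packet to the provider registered for its direction. The sketch below uses our own names plus the record_type_t values from SpeechType.h; it is not the actual threadLoop():

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Values from SpeechType.h above.
enum record_type_t { RECORD_TYPE_UL = 0, RECORD_TYPE_DL = 1, RECORD_TYPE_MIX = 2 };

// Toy dispatcher: one provider "inbox" per direction; a packet is delivered only
// when a provider for that direction is attached (mirrors the mEnable checks).
struct ToyDispatcher {
    std::array<bool, 3> attached{};                 // UL / DL / MIX providers
    std::array<std::vector<int16_t>, 3> inbox;

    bool provide(record_type_t dir, const std::vector<int16_t> &pcm) {
        if (!attached[dir]) return false;           // no provider: drop the packet
        inbox[dir].insert(inbox[dir].end(), pcm.begin(), pcm.end());
        return true;
    }
};
```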

When the app calls mMediaRecorder.stop(), mMediaRecorder.reset() and mMediaRecorder.release(), the lower layers stop recording.

vendor\mediatek\proprietary\hardware\audio\common\V3\aud_drv\AudioALSAStreamIn.cpp

Back in AudioALSAStreamIn::close():

status_t AudioALSAStreamIn::close() {
    // call close() only when mLock is locked.
    ASSERT(AL_TRYLOCK(mLock) != 0);

    ALOGD("%s()", __FUNCTION__);

    status_t status = NO_ERROR;

    if (mStandby == false) {
        mStandby = true;

        ASSERT(mCaptureHandler != NULL);

        // close audio hardware
        status = mCaptureHandler->close();
        if (status != NO_ERROR) {
            ALOGE("%s(), close() fail!!", __FUNCTION__);
        }

        ClosePCMDump();
        // destroy capture handler
        mStreamManager->destroyCaptureHandler(mCaptureHandler);
        mCaptureHandler = NULL;
    }

    ASSERT(mCaptureHandler == NULL);
    return status;
}

AudioALSACaptureHandlerVoice::close()

vendor\mediatek\proprietary\hardware\audio\common\V3\aud_drv\AudioALSACaptureHandlerVoice.cpp

It deletes the mCaptureDataClient pointer:

status_t AudioALSACaptureHandlerVoice::close() {
    ALOGD("+%s()", __FUNCTION__);

    ASSERT(mCaptureDataClient != NULL);
    delete mCaptureDataClient;

    ALOGD("-%s()", __FUNCTION__);
    return NO_ERROR;
}

vendor\mediatek\proprietary\hardware\audio\common\V3\aud_drv\AudioALSACaptureDataClient.cpp

AudioALSACaptureDataClient::~AudioALSACaptureDataClient() {
    ALOGD("%s()", __FUNCTION__);

.....

    if (mBliSrc != NULL) {
        mBliSrc->close();
        deleteMtkAudioSrc(mBliSrc);
        mBliSrc = NULL;
    }

    //TODO: Sam, add here for temp
    //BesRecord+++
    StopBesRecord();
....

Destroying the client eventually calls into AudioALSACaptureDataProviderVoiceUL::close():

vendor\mediatek\proprietary\hardware\audio\common\V3\aud_drv\AudioALSACaptureDataProviderVoiceUL.cpp

status_t AudioALSACaptureDataProviderVoiceUL::close() {
    ALOGD("%s()", __FUNCTION__);

    mEnable = false;

    delete[] mRawDataRingBuf.pBufBase;
    memset(&mRawDataRingBuf, 0, sizeof(mRawDataRingBuf));

    ClosePCMDump();

    return SpeechDataProcessingHandler::getInstance()->recordOff(RECORD_TYPE_UL);
}

vendor\mediatek\proprietary\hardware\audio\common\V3\aud_drv\SpeechDataProcessingHandler.cpp

status_t SpeechDataProcessingHandler::recordOff(record_type_t type __unused) {
    ALOGD("+%s()\n", __FUNCTION__);
    AL_AUTOLOCK(speechDataProcessingHandlerLock);

    mUserCounter--;
    if (mUserCounter == 0) {
             SpeechDriverFactory::GetInstance()->GetSpeechDriver()->RecordOff(RECORD_TYPE_MIX);
            //SpeechDriverFactory::GetInstance()->GetSpeechDriver()->RecordOff(type);
        ALOGD("%s(), No user, record off!\n", __FUNCTION__);
    } else {
        ALOGD("%s(), Record is still using. user = %d\n", __FUNCTION__, mUserCounter);
    }
    ALOGD("-%s()\n", __FUNCTION__);
    return NO_ERROR;
}

vendor\mediatek\proprietary\hardware\audio\common\V3\speech_driver\SpeechDriverLAD.cpp

Symmetrically, when the last user calls recordOff(), SpeechDriverLAD::RecordOff() (in the same file as RecordOn() above) notifies the modem side to stop the raw PCM record.

That completes the flow analysis. In the end, the problem was solved by modifying the audio HAL layer and recording the VOICE_CALL channel with AudioRecord, then splitting the channels from the PCM.

Miscellaneous

Useful log tags to filter:

AudioALSAStream AudioALSAStreamIn AudioALSAHardware AudioALSACaptureDataProvider

2019-01-01 10:09:57.325 280-954/? D/AudioALSAHardware: +setParameters(): A2dpSuspended=true
2019-01-01 10:09:57.325 280-954/? W/AudioALSAHardware: setParameters(), still have param.size() = 1, remain param = "A2dpSuspended=true"
2019-01-01 10:09:57.325 280-954/? D/AudioALSAHardware: -setParameters(): A2dpSuspended=true 
2019-01-01 10:09:57.953 280-954/? D/AudioALSAHardware: +routing createAudioPatch Mixer->1
2019-01-01 10:09:57.955 280-954/? D/AudioALSAHardwareResourceManager: +startInputDevice(), new_device = 0x80000004, mInputDevice = 0x0, mStartInputDeviceCount = 0, mMicInverse = 0, mNumPhoneMicSupport = 2
2019-01-01 10:09:57.957 280-954/? D/AudioALSAHardwareResourceManager: +startOutputDevice(), new_devices = 0x1, mOutputDevices = 0x0, mStartOutputDevicesCount = 0 SampleRate = 16000
2019-01-01 10:09:58.042 280-954/? D/AudioALSAHardware: +routing createAudioPatch 80000004->Mixer Src 3
2019-01-01 10:09:58.768 280-954/? D/AudioALSAHardware: +routing createAudioPatch Mixer->2
2019-01-01 10:09:58.790 280-954/? D/AudioALSAHardwareResourceManager: +stopOutputDevice(), mOutputDevices = 0x1, mStartOutputDevicesCount = 1
2019-01-01 10:09:58.791 280-954/? D/AudioALSAHardwareResourceManager: +stopInputDevice(), mInputDevice = 0x80000004, stop_device = 0x80000004, mStartInputDeviceCount = 1, mMicInverse = 0, mNumPhoneMicSupport = 2
2019-01-01 10:09:58.794 280-954/? D/AudioALSAHardwareResourceManager: +startInputDevice(), new_device = 0x80000080, mInputDevice = 0x0, mStartInputDeviceCount = 0, mMicInverse = 0, mNumPhoneMicSupport = 2
2019-01-01 10:09:58.796 280-954/? D/AudioALSAHardwareResourceManager: +startOutputDevice(), new_devices = 0x2, mOutputDevices = 0x0, mStartOutputDevicesCount = 0 SampleRate = 16000
2019-01-01 10:09:58.991 280-954/? D/AudioALSAHardware: setVoiceVolume(), volume = 0.258523, mUseTuningVolume = 1
2019-01-01 10:09:59.000 280-954/? D/AudioALSAHardware: +routing createAudioPatch Mixer->2
2019-01-01 10:09:59.027 280-954/? D/AudioALSAHardware: +routing createAudioPatch 80000080->Mixer Src 3
2019-01-01 10:09:59.448 280-954/? D/AudioALSAHardware: +routing createAudioPatch Mixer->1
2019-01-01 10:09:59.470 280-954/? D/AudioALSAHardwareResourceManager: +stopOutputDevice(), mOutputDevices = 0x2, mStartOutputDevicesCount = 1
2019-01-01 10:09:59.479 280-954/? D/AudioALSAHardwareResourceManager: +stopInputDevice(), mInputDevice = 0x80000080, stop_device = 0x80000080, mStartInputDeviceCount = 1, mMicInverse = 0, mNumPhoneMicSupport = 2
2019-01-01 10:09:59.482 280-954/? D/AudioALSAHardwareResourceManager: +startInputDevice(), new_device = 0x80000004, mInputDevice = 0x0, mStartInputDeviceCount = 0, mMicInverse = 0, mNumPhoneMicSupport = 2
2019-01-01 10:09:59.484 280-954/? D/AudioALSAHardwareResourceManager: +startOutputDevice(), new_devices = 0x1, mOutputDevices = 0x0, mStartOutputDevicesCount = 0 SampleRate = 16000
2019-01-01 10:09:59.607 280-954/? D/AudioALSAHardware: setVoiceVolume(), volume = 0.230409, mUseTuningVolume = 1
2019-01-01 10:09:59.610 280-954/? D/AudioALSAHardware: +routing createAudioPatch Mixer->1
2019-01-01 10:09:59.627 280-954/? D/AudioALSAHardware: +routing createAudioPatch 80000004->Mixer Src 3
2019-01-01 10:10:07.965 280-412/? D/AudioALSAHardware: +setParameters(): A2dpSuspended=false

References

Android 5.1 Audio HAL分析

Android 录音实现(AudioRecord)

音频framework与中间层分析