Based on Android 9.0
1. Overview
The earlier chapters "Android Audio Subsystem -- 11: AudioTrack Data Transfer" and "Android Audio Subsystem -- 12: PlaybackThread Mix Data Flow" gave a first look at how audio data is processed during playback. This chapter analyzes the data path from AudioTrack down to the HAL layer in more detail.
2. Data transfer between AudioTrack and AudioFlinger
When an application plays audio through the Java-level AudioTrack, MediaPlayer, or SoundPool, it must write data for playback, and all of these paths ultimately call the native AudioTrack write() interface. Once the AudioTrack instance is constructed, the application can start writing audio data. The written data is processed by AudioFlinger, so AudioTrack and AudioFlinger form a producer-consumer pair:
- AudioTrack: finds a free region in the FIFO, copies the caller's audio data into it, then advances the write position (from AudioFlinger's point of view, more data is now readable). If the caller supplies more data than the free space can hold, the data is split across several writes into the FIFO (AudioTrack and AudioFlinger are different processes, and AudioFlinger keeps reading concurrently, so the free space changes constantly).
- AudioFlinger: finds a readable region in the FIFO, copies it to the destination buffer, then advances the read position (from AudioTrack's point of view, more space is now free). If less data is readable than expected, several reads are needed to accumulate the expected amount (AudioTrack keeps writing concurrently, so the readable amount changes constantly).
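The producer-consumer handshake described above can be sketched as a minimal single-producer/single-consumer ring buffer. This is an illustrative model, not AOSP code: the real AudioTrack/AudioFlinger path uses a power-of-two frame count and the audio_track_cblk_t counters, but the rear/front bookkeeping is the same idea.

```cpp
#include <algorithm>
#include <atomic>
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical single-producer/single-consumer FIFO modeled on the
// rear/front counters described above. Names are illustrative, not AOSP's.
class SpscFifo {
public:
    explicit SpscFifo(size_t capacity) : mBuf(capacity), mCap(capacity) {}

    // Producer (AudioTrack side): write up to `size` bytes, return bytes written.
    size_t write(const uint8_t* data, size_t size) {
        size_t rear = mRear.load(std::memory_order_relaxed);
        size_t front = mFront.load(std::memory_order_acquire); // observe reader progress
        size_t avail = mCap - (rear - front);   // free space
        size_t n = std::min(size, avail);
        for (size_t i = 0; i < n; ++i)
            mBuf[(rear + i) % mCap] = data[i];
        mRear.store(rear + n, std::memory_order_release); // publish the new data
        return n;
    }

    // Consumer (AudioFlinger side): read up to `size` bytes, return bytes read.
    size_t read(uint8_t* out, size_t size) {
        size_t front = mFront.load(std::memory_order_relaxed);
        size_t rear = mRear.load(std::memory_order_acquire); // observe writer progress
        size_t filled = rear - front;           // readable data
        size_t n = std::min(size, filled);
        for (size_t i = 0; i < n; ++i)
            out[i] = mBuf[(front + i) % mCap];
        mFront.store(front + n, std::memory_order_release); // free the space
        return n;
    }

private:
    std::vector<uint8_t> mBuf;
    const size_t mCap;
    std::atomic<size_t> mRear{0};  // total bytes ever written
    std::atomic<size_t> mFront{0}; // total bytes ever read
};
```

A short write that exceeds the free space returns the partial count, which is exactly why AudioTrack::write loops until everything is delivered.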
Before playing, AudioTrack first establishes an output path and obtains its unique handle, output. This output is, in essence, the key under which AudioFlinger saves the (output, PlaybackThread) pair after creating a PlaybackThread, so the handle identifies which PlaybackThread the audio data will flow through. That thread acts as a middleman: the application process hands audio data to the PlaybackThread via anonymous shared memory, and the PlaybackThread in turn passes data to the HAL layer via anonymous shared memory as well. This makes sense: for moving large amounts of data across processes, shared memory is the most efficient option.
PlaybackThread is used here as a generic name for all playback threads; the actual thread may be a MixerThread, DirectOutputThread, OffloadThread, and so on (usually determined by the audio flags passed in).
2.1 Establishing the shared memory between AudioTrack and AudioFlinger
Depending on the AudioTrack mode, the shared memory is created on different sides: in MODE_STATIC it is created by the AudioTrack client, while in MODE_STREAM it is created by the AudioFlinger server. We first analyze how the AudioFlinger server creates the anonymous shared memory.
The shared memory is created inside AudioFlinger, after the output stream has been obtained, at the point marked by the red arrow in the figure below:
(1)AudioFlinger::createTrack()
sp<IAudioTrack> AudioFlinger::createTrack(const CreateTrackInput& input,
CreateTrackOutput& output,
status_t *status)
{
...
// Obtain the output handle (and related info) from AudioPolicyService
lStatus = AudioSystem::getOutputForAttr(&input.attr, &output.outputId, sessionId, &streamType,
clientPid, clientUid, &input.config, input.flags,
&output.selectedDeviceId, &portId);
...
{
Mutex::Autolock _l(mLock);
/* outputId is returned by AudioPolicyService; it is the output handle.
 * This id originally comes from the PlaybackThread that AudioFlinger
 * created when opening the HAL device, stored as a key-value pair in
 * AudioFlinger's mPlaybackThreads member. Here we look up the
 * corresponding PlaybackThread by that key.
 */
PlaybackThread *thread = checkPlaybackThread_l(output.outputId);
if (thread == NULL) {
ALOGE("no playback thread found for output handle %d", output.outputId);
lStatus = BAD_VALUE;
goto Exit;
}
/* clientPid comes from the application side; registering it here means
 * wrapping the client info in a Client object and storing it in
 * AudioFlinger's mClients collection.
 */
client = registerPid(clientPid);
...
ALOGV("createTrack() sessionId: %d", sessionId);
//save the audio attributes
output.sampleRate = input.config.sample_rate;
output.frameCount = input.frameCount;
output.notificationFrameCount = input.notificationFrameCount;
output.flags = input.flags;
/* Create a Track on the AudioFlinger side, paired one-to-one with the
 * application's AudioTrack; the shared memory setup also happens inside
 * this call.
 */
track = thread->createTrack_l(client, streamType, input.attr, &output.sampleRate,
input.config.format, input.config.channel_mask,
&output.frameCount, &output.notificationFrameCount,
input.notificationsPerBuffer, input.speed,
input.sharedBuffer, sessionId, &output.flags,
input.clientInfo.clientTid, clientUid, &lStatus, portId);
LOG_ALWAYS_FATAL_IF((lStatus == NO_ERROR) && (track == 0));
// we don't abort yet if lStatus != NO_ERROR; there is still work to be done regardless
output.afFrameCount = thread->frameCount();
output.afSampleRate = thread->sampleRate();
output.afLatencyMs = thread->latency();
...
}
....
// return handle to client
//TrackHandle wraps the internal Track and is returned to the AudioTrack client; it communicates over Binder
trackHandle = new TrackHandle(track);
//on error, resources are released here
Exit:
if (lStatus != NO_ERROR && output.outputId != AUDIO_IO_HANDLE_NONE) {
AudioSystem::releaseOutput(output.outputId, streamType, sessionId);
}
*status = lStatus;
return trackHandle;
}
The function above shows that once the output handle is obtained, its corresponding PlaybackThread is looked up and used to create a Track. This Track pairs one-to-one with the application-side AudioTrack, and the two can communicate via Binder IPC. While creating the Track, the PlaybackThread also creates and allocates the anonymous shared memory, so the Track can be said to manage that memory. The internal logic is fairly involved, so we outline the flow first and then walk through the key points:
In the figure above, the TrackBase constructor uses the client parameter to create the shared memory; client was created in AudioFlinger (client = registerPid(clientPid)) and holds the application-side information.
At the end of the flow, anonymous shared memory (ashmem) is used to carry the audio data. Roughly, it works as follows: a file created on the tmpfs file system is bound to a virtual address range. tmpfs lives in the page cache and swap, so its I/O is fast; when the region is mmap'ed, physical pages are allocated on demand and mapped into the process's address space, after which the process can use the memory directly.
Let's see how MemoryHeapBase creates the shared memory, in its constructor:
MemoryHeapBase::MemoryHeapBase(size_t size, uint32_t flags, char const * name)
: mFD(-1), mSize(0), mBase(MAP_FAILED), mFlags(flags),
mDevice(nullptr), mNeedUnmap(false), mOffset(0)
{
const size_t pagesize = getpagesize();
size = ((size + pagesize-1) & ~(pagesize-1));
//create an anonymous shared memory region on the tmpfs file system
int fd = ashmem_create_region(name == nullptr ? "MemoryHeapBase" : name, size);
ALOGE_IF(fd<0, "error creating ashmem region: %s", strerror(errno));
if (fd >= 0) {
//map the shared memory into the current process for easy access
if (mapfd(fd, size) == NO_ERROR) {
if (flags & READ_ONLY) {
ashmem_set_prot_region(fd, PROT_READ);
}
}
}
}
status_t MemoryHeapBase::mapfd(int fd, size_t size, off_t offset)
{
......
if ((mFlags & DONT_MAP_LOCALLY) == 0) {
//map the shared memory into this process
void* base = (uint8_t*)mmap(nullptr, size,
PROT_READ|PROT_WRITE, MAP_SHARED, fd, offset);
if (base == MAP_FAILED) {
ALOGE("mmap(fd=%d, size=%zu) failed (%s)",
fd, size, strerror(errno));
close(fd);
return -errno;
}
//mBase is the mapped address
mBase = base;
mNeedUnmap = true;
} else {
mBase = nullptr; // not MAP_FAILED
mNeedUnmap = false;
}
mFD = fd; //file descriptor of the shared memory
mSize = size; //size of the region
mOffset = offset; //offset
return NO_ERROR;
}
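The page-size round-up at the top of the MemoryHeapBase constructor deserves a closer look. Here is a sketch of that bit trick (it relies on the page size being a power of two, which getpagesize() guarantees):

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the round-up in MemoryHeapBase's constructor:
//   size = ((size + pagesize-1) & ~(pagesize-1))
// rounds the request up to a whole number of pages, since mmap operates
// on page granularity. pagesize must be a power of two.
size_t roundUpToPage(size_t size, size_t pagesize) {
    return (size + pagesize - 1) & ~(pagesize - 1);
}
```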
The code above is the anonymous shared memory allocation path. Note the two fields it produces: mFD, the file descriptor, and mBase, the mapped memory address. How are they used? TrackBase shows us:
AudioFlinger::ThreadBase::TrackBase::TrackBase(
ThreadBase *thread,
const sp<Client>& client,
const audio_attributes_t& attr,
uint32_t sampleRate,
audio_format_t format,
audio_channel_mask_t channelMask,
size_t frameCount,
//in static mode, buffer is non-null
void *buffer,
size_t bufferSize,
audio_session_t sessionId,
pid_t creatorPid,
uid_t clientUid,
bool isOut,
alloc_type alloc,
track_type type,
audio_port_handle_t portId)
.......
{
//roundup() rounds frameCount up to the next power of two
size_t minBufferSize = buffer == NULL ? roundup(frameCount) : frameCount;
// check overflow when computing bufferSize due to multiplication by mFrameSize.
if (minBufferSize < frameCount // roundup rounds down for values above UINT_MAX / 2
|| mFrameSize == 0 // format needs to be correct
|| minBufferSize > SIZE_MAX / mFrameSize) {
android_errorWriteLog(0x534e4554, "34749571");
return;
}
//then multiply by the size of one frame
minBufferSize *= mFrameSize;
if (buffer == nullptr) {
bufferSize = minBufferSize; // allocated here.
//the requested bufferSize must not be smaller than the minimum, or too little data would be supplied
} else if (minBufferSize > bufferSize) {
android_errorWriteLog(0x534e4554, "38340117");
return;
}
//audio_track_cblk_t is the control block struct
size_t size = sizeof(audio_track_cblk_t);
if (buffer == NULL && alloc == ALLOC_CBLK) {
// check overflow when computing allocation size for streaming tracks.
//at least one audio_track_cblk_t must fit in the allocation
if (size > SIZE_MAX - bufferSize) {
android_errorWriteLog(0x534e4554, "34749571");
return;
}
/** Total size = mFrameSize * frameCount + sizeof(audio_track_cblk_t).
 * frameCount comes from the application: the requested buffer size divided
 * by the size of one frame.
 * */
size += bufferSize;
}
if (client != 0) {
/** This performs the anonymous shared memory allocation shown above; it
 * returns an Allocation, which is a wrapper around MemoryHeapBase.
 * */
mCblkMemory = client->heap()->allocate(size);
if (mCblkMemory == 0 ||
//pointer() returns MemoryHeapBase's mBase, the actual mapped address,
//which is then cast to audio_track_cblk_t* and stored in the mCblk member
(mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer())) == NULL) {
ALOGE("%s(%d): not enough memory for AudioTrack size=%zu", __func__, mId, size);
client->heap()->dump("AudioTrack");
mCblkMemory.clear();
return;
}
} else {
//if there is no client, fall back to a plain local allocation
mCblk = (audio_track_cblk_t *) malloc(size);
if (mCblk == NULL) {
ALOGE("%s(%d): not enough memory for AudioTrack size=%zu", __func__, mId, size);
return;
}
}
// construct the shared structure in-place.
if (mCblk != NULL) {
//placement new: construct the control block in place (fields default-initialized)
new(mCblk) audio_track_cblk_t();
switch (alloc) {
........
case ALLOC_CBLK:
//stream mode uses the anonymous shared memory
if (buffer == NULL) {
//mBuffer starts right after audio_track_cblk_t, not at the start of the mapping
mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
memset(mBuffer, 0, bufferSize); // zero-fill the buffer
} else {
//static mode directly uses the buffer passed in from the application
mBuffer = buffer;
#if 0
mCblk->mFlags = CBLK_FORCEREADY; // FIXME hack, need to fix the track ready logic
#endif
}
break;
.......
}
}
}
From the code above we can conclude: in AudioFlinger, the anonymous shared memory is created by MemoryHeapBase and managed by Track and TrackBase. The mCblk member, of type audio_track_cblk_t, coordinates how the client and server use the shared memory and is therefore crucial; the memory block that actually carries data is mBuffer. The total size is frameSize * frameCount + sizeof(audio_track_cblk_t). Note, however, that in static mode mBuffer is the buffer passed in from the application, not the shared memory created above.
- MODE_STREAM 模式下的匿名共享内存结构:
| |
| -------------------> mCblkMemory <--------------------- |
| |
+--------------------+------------------------------------+
| audio_track_cblk_t | FIFO |
+--------------------+------------------------------------+
^ ^ (frameSize*frameCount)
| |
mCblk mBuffer
mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer());
new(mCblk) audio_track_cblk_t();
mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
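The layout arithmetic above can be sketched in a few lines. FakeCblk is a hypothetical stand-in for audio_track_cblk_t (the real struct is defined in frameworks/av and is larger); only the offset math is the point:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Illustrative stand-in for audio_track_cblk_t; only the layout matters here.
struct FakeCblk { uint32_t rear; uint32_t front; uint32_t flags; };

// In MODE_STREAM a single ashmem region holds the control block first, then
// the FIFO: total = sizeof(cblk) + frameCount * frameSize.
size_t streamAllocSize(size_t frameCount, size_t frameSize) {
    return sizeof(FakeCblk) + frameCount * frameSize;
}

// mBuffer = (char*)mCblk + sizeof(cblk): the FIFO starts right after the cblk.
ptrdiff_t fifoOffset(const FakeCblk* cblk) {
    const char* buffer = reinterpret_cast<const char*>(cblk) + sizeof(FakeCblk);
    return buffer - reinterpret_cast<const char*>(cblk);
}
```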
The anonymous shared memory layout is shown above. audio_track_cblk_t synchronizes writes and reads between the client and the server, while the buffer is the memory actually used for data. Think of mBuffer as a ring buffer: rear is the write position pointer and front is the read position pointer; every write starts at rear and every read starts at front. But how much can be written or read each time? That is controlled by the audio_track_cblk_t described earlier.
2.2 Managing the shared memory on the AudioFlinger server side
Back in the Track constructor, there is a class responsible for managing the shared memory in cooperation with the application side:
AudioFlinger::PlaybackThread::Track::Track(
PlaybackThread *thread,
const sp<Client>& client,
audio_stream_type_t streamType,
const audio_attributes_t& attr,
uint32_t sampleRate,
audio_format_t format,
audio_channel_mask_t channelMask,
size_t frameCount,
void *buffer,
size_t bufferSize,
const sp<IMemory>& sharedBuffer,
audio_session_t sessionId,
pid_t creatorPid,
uid_t uid,
audio_output_flags_t flags,
track_type type,
audio_port_handle_t portId)
: TrackBase(thread, client, attr, sampleRate, format, channelMask, frameCount,
(sharedBuffer != 0) ? sharedBuffer->pointer() : buffer,
(sharedBuffer != 0) ? sharedBuffer->size() : bufferSize,
sessionId, creatorPid, uid, true /*isOut*/,
(type == TYPE_PATCH) ? ( buffer == NULL ? ALLOC_LOCAL : ALLOC_NONE) : ALLOC_CBLK,
type, portId)
{
//mCblk was already assigned in TrackBase; if it is null, the shared memory allocation failed, so return
if (mCblk == NULL) {
return;
}
//sharedBuffer comes from the application: 0 means stream mode, non-0 means static mode
if (sharedBuffer == 0) {
//stream mode
mAudioTrackServerProxy = new AudioTrackServerProxy(mCblk, mBuffer, frameCount,
mFrameSize, !isExternalTrack(), sampleRate);
} else {
//static mode; note that mBuffer here is really the memory passed in from the application
mAudioTrackServerProxy = new StaticAudioTrackServerProxy(mCblk, mBuffer, frameCount,
mFrameSize);
}
//mServerProxy handles shared memory use and synchronization with the application side
mServerProxy = mAudioTrackServerProxy;
......
}
The important thing to remember here is mAudioTrackServerProxy: it manages and synchronizes use of the anonymous shared memory over IPC. The application side has a matching ClientProxy; through these proxies both sides call obtainBuffer to claim memory and coordinate writes and reads. At this point, AudioFlinger's shared memory setup is complete. In stream mode, mBuffer is the anonymous shared memory the server created itself; in static mode, mBuffer comes from the application's sharedBuffer.
There is one more important piece of work in PlaybackThread:
sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(.....){
.......
track = new Track(this, client, streamType, attr, sampleRate, format,
channelMask, frameCount,
nullptr /* buffer */, (size_t)0 /* bufferSize */, sharedBuffer,
sessionId, creatorPid, uid, *flags, TrackBase::TYPE_DEFAULT, portId);
mTracks.add(track);
.......
return track;
}
sp<IAudioTrack> AudioFlinger::createTrack(const CreateTrackInput& input,
CreateTrackOutput& output,
status_t *status)
{
........
track = thread->createTrack_l(client, streamType, localAttr, &output.sampleRate,
input.config.format, input.config.channel_mask,
&output.frameCount, &output.notificationFrameCount,
input.notificationsPerBuffer, input.speed,
input.sharedBuffer, sessionId, &output.flags,
callingPid, input.clientInfo.clientTid, clientUid,
&lStatus, portId);
........
// return handle to client
trackHandle = new TrackHandle(track);
}
There are two key points in the code above:
- PlaybackThread adds the track to its own mTracks member. What is the significance of this? It matters.
- The track is wrapped in a TrackHandle and returned to the application layer. What does the application do with the TrackHandle?
2.3 How the AudioTrack side handles the shared memory
After the AudioFlinger server has successfully created the Track, the AudioTrack logic proceeds as follows:
status_t AudioTrack::createTrack_l()
{
.......
IAudioFlinger::CreateTrackOutput output;
//the returned TrackHandle wraps the Track created by AudioFlinger; the Track
//carries the portId, outputId, session, shared memory info, and so on
sp<IAudioTrack> track = audioFlinger->createTrack(input,
output,
&status);
.......
//getCblk() returns the server-side TrackBase's mCblkMemory, the overall shared memory manager
sp<IMemory> iMem = track->getCblk();
if (iMem == 0) {
ALOGE("%s(%d): Could not get control block", __func__, mPortId);
status = NO_INIT;
goto exit;
}
//pointer() is the start address of the mapped allocation, which begins with an audio_track_cblk_t
void *iMemPointer = iMem->pointer();
if (iMemPointer == NULL) {
ALOGE("%s(%d): Could not get control block pointer", __func__, mPortId);
status = NO_INIT;
goto exit;
}
// mAudioTrack is the previous TrackHandle used with AudioFlinger; if one exists, undo the old binding
if (mAudioTrack != 0) {
IInterface::asBinder(mAudioTrack)->unlinkToDeath(mDeathNotifier, this);
mDeathNotifier.clear();
}
//the Track created on the AudioFlinger side; keep it for subsequent Binder interaction
mAudioTrack = track;
mCblkMemory = iMem;
IPCThreadState::self()->flushCommands();
//the start of the mapping is cast to an audio_track_cblk_t; the actual data area follows it
audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMemPointer);
mCblk = cblk;
.......
//buffers is the actual mapped data address
void* buffers;
if (mSharedBuffer == 0) {
// stream mode
buffers = cblk + 1;
} else {
//static mode directly uses the client-allocated address
buffers = mSharedBuffer->pointer();
if (buffers == NULL) {
ALOGE("%s(%d): Could not get buffer pointer", __func__, mPortId);
status = NO_INIT;
goto exit;
}
}
mAudioTrack->attachAuxEffect(mAuxEffectId);
// If IAudioTrack is re-created, don't let the requested frameCount
// decrease. This can confuse clients that cache frameCount().
if (mFrameCount > mReqFrameCount) {
mReqFrameCount = mFrameCount;
}
// reset server position to 0 as we have new cblk.
mServer = 0;
// update proxy; in stream mode, this mProxy is where data gets written
/**
 * mProxy is the final client-side proxy. In static mode it is a
 * StaticAudioTrackClientProxy whose buffers come from the app client; in
 * stream mode it is an AudioTrackClientProxy whose buffers come from
 * AudioFlinger.
 * */
if (mSharedBuffer == 0) {
mStaticProxy.clear();
mProxy = new AudioTrackClientProxy(cblk, buffers, mFrameCount, mFrameSize);
} else {
mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, mFrameCount, mFrameSize);
mProxy = mStaticProxy;
}
.......
}
On the application side, once the server's TrackHandle is obtained, the main work is to extract the shared memory control block audio_track_cblk_t and the actual mapped data address buffers. In static mode, the client-allocated mSharedBuffer is used directly. Finally, the client wraps buffers and audio_track_cblk_t into an AudioTrackClientProxy for subsequent data writes and memory management. The relationship between the application layer and the AudioFlinger server is summarized in the figure below:
Note that the ClientProxy and ServerProxy communicate via Binder IPC; what they exchange is metadata about the anonymous shared memory: how to write, where to write, and so on. The audio data itself flows through the anonymous shared memory buffer underneath.
2.4 sharedBuffer in static mode
What is the sharedBuffer in static mode? Is it anonymous shared memory too?
Answer: yes. To verify this, look at where the client creates the sharedBuffer in static mode: during AudioTrack creation, in the android_media_AudioTrack_setup function of android_media_AudioTrack.cpp:
static jint
android_media_AudioTrack_setup(){
......
switch (memoryMode) {
case MODE_STREAM:
......
case MODE_STATIC:
//in static mode, the client application creates the memory itself
if (!lpJniStorage->allocSharedMem(buffSizeInBytes)) {
ALOGE("Error creating AudioTrack in static mode: error creating mem heap base");
goto native_init_failure;
}
......
}
------------------------------------------------------------------------------
bool allocSharedMem(int sizeInBytes) {
mMemHeap = new MemoryHeapBase(sizeInBytes, 0, "AudioTrack Heap Base");
if (mMemHeap->getHeapID() < 0) {
return false;
}
mMemBase = new MemoryBase(mMemHeap, 0, sizeInBytes);
return true;
}
mMemHeap should look familiar: it is a MemoryHeapBase, the class responsible for creating anonymous shared memory. So in both stream and static mode, audio data between AudioFlinger and the application travels through shared memory; the differences are who creates it and how large it is.
2.5 Data-structure diagram for AudioTrack and AudioFlinger
We use stream mode to explain how data is written; static mode is largely similar, as both transfer data through shared memory. The stream-mode write path is roughly as shown below:
The figure omits the Java-to-native part of the application layer and shows only the data transfer from native AudioTrack to AudioFlinger, which is the part we care about. The code here is fairly involved; with the condensed figure above, we only need to look at the two ends of the shared memory: how data is written in, and how it is read out.
2.6 AudioTrack writing data
This happens in AudioTrack's write function:
ssize_t AudioTrack::write(const void* buffer, size_t userSize, bool blocking)
{
if (mTransfer != TRANSFER_SYNC && mTransfer != TRANSFER_SYNC_NOTIF_CALLBACK) {
return INVALID_OPERATION;
}
.......
size_t written = 0;
Buffer audioBuffer;
while (userSize >= mFrameSize) {
audioBuffer.frameCount = userSize / mFrameSize;
//obtain a buffer from the shared memory
status_t err = obtainBuffer(&audioBuffer,
blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
if (err < 0) {
if (written > 0) {
break;
}
if (err == TIMED_OUT || err == -EINTR) {
err = WOULD_BLOCK;
}
return ssize_t(err);
}
size_t toWrite = audioBuffer.size;
//copy the audio data into audioBuffer
memcpy(audioBuffer.i8, buffer, toWrite);
buffer = ((const char *) buffer) + toWrite;
userSize -= toWrite;
written += toWrite;
releaseBuffer(&audioBuffer);
}
if (written > 0) {
mFramesWritten += written / mFrameSize;
if (mTransfer == TRANSFER_SYNC_NOTIF_CALLBACK) {
//t is the callback thread that notifies the upper Java application layer
const sp<AudioTrackThread> t = mAudioTrackThread;
if (t != 0) {
t->wake();
}
}
}
return written;
}
The write path above first obtains free memory from the shared region, then memcpy's the data in; the AudioFlinger side then performs the corresponding reads. The obtainBuffer call above ultimately lands in ClientProxy::obtainBuffer; let's look inside:
status_t ClientProxy::obtainBuffer(Buffer* buffer, const struct timespec *requested,
struct timespec *elapsed)
{
.......
// compute number of frames available to write (AudioTrack) or read (AudioRecord)
int32_t front; //front is the read index, rear is the write index
int32_t rear;
//mIsOut indicates we are writing data
if (mIsOut) {
//To make the write as likely to succeed as possible, we want the most
//up-to-date free space, so load the latest mFront to see how much the server has consumed
// The barrier following the read of mFront is probably redundant.
// We're about to perform a conditional branch based on 'filled',
// which will force the processor to observe the read of mFront
// prior to allowing data writes starting at mRaw.
// However, the processor may support speculative execution,
// and be unable to undo speculative writes into shared memory.
// The barrier will prevent such speculative execution.
front = android_atomic_acquire_load(&cblk->u.mStreaming.mFront);
rear = cblk->u.mStreaming.mRear;
} else {
// On the other hand, this barrier is required.
rear = android_atomic_acquire_load(&cblk->u.mStreaming.mRear);
front = cblk->u.mStreaming.mFront;
}
// write to rear, read from front: compute how much space the existing data occupies
ssize_t filled = audio_utils::safe_sub_overflow(rear, front);
// pipe should not be overfull
if (!(0 <= filled && (size_t) filled <= mFrameCount)) {
if (mIsOut) {
ALOGE("Shared memory control block is corrupt (filled=%zd, mFrameCount=%zu); "
"shutting down", filled, mFrameCount);
mIsShutdown = true;
status = NO_INIT;
goto end;
}
// for input, sync up on overrun
filled = 0;
cblk->u.mStreaming.mFront = rear;
(void) android_atomic_or(CBLK_OVERRUN, &cblk->mFlags);
}
// Don't allow filling pipe beyond the user settable size.
// The calculation for avail can go negative if the buffer size
// is suddenly dropped below the amount already in the buffer.
// So use a signed calculation to prevent a numeric overflow abort.
ssize_t adjustableSize = (ssize_t) getBufferSizeInFrames();
//avail is the amount of free memory
ssize_t avail = (mIsOut) ? adjustableSize - filled : filled;
if (avail < 0) {
avail = 0;
} else if (avail > 0) {
// 'avail' may be non-contiguous, so return only the first contiguous chunk
size_t part1;
if (mIsOut) {
rear &= mFrameCountP2 - 1;
part1 = mFrameCountP2 - rear;
} else {
front &= mFrameCountP2 - 1;
part1 = mFrameCountP2 - front;
}
//part1 is the contiguous portion of the free space
if (part1 > (size_t)avail) {
part1 = avail;
}
//if part1 exceeds what the buffer asked for, clamp it to the requested frame count
if (part1 > buffer->mFrameCount) {
part1 = buffer->mFrameCount;
}
buffer->mFrameCount = part1;
//mRaw holds the data pointer; rear is the write index in frames, mFrameSize is the size of one frame
buffer->mRaw = part1 > 0 ?
&((char *) mBuffers)[(mIsOut ? rear : front) * mFrameSize] : NULL;
buffer->mNonContig = avail - part1;
mUnreleased = part1;
status = NO_ERROR;
break;
}
......
}
The code above obtains a free buffer: it checks whether the remaining space satisfies the requested size and, if so, points the buffer's mRaw at the free address derived from rear. Back in AudioTrack::write, the audio data is copied into that buffer, followed by releaseBuffer. releaseBuffer does not free memory (the server has not read the data yet); it merely publishes the updated write position, as the following function shows:
/**
 * releaseBuffer does not free memory; it only publishes the updated write/read
 * state into mCblk's rear and front counters
 * **/
__attribute__((no_sanitize("integer")))
void ClientProxy::releaseBuffer(Buffer* buffer)
{
LOG_ALWAYS_FATAL_IF(buffer == NULL);
//stepCount is the number of frames written this time
size_t stepCount = buffer->mFrameCount;
if (stepCount == 0 || mIsShutdown) {
// prevent accidental re-use of buffer
buffer->mFrameCount = 0;
buffer->mRaw = NULL;
buffer->mNonContig = 0;
return;
}
.......
mUnreleased -= stepCount;
audio_track_cblk_t* cblk = mCblk;
// Both of these barriers are required
//when writing data, advance the mRear counter
if (mIsOut) {
int32_t rear = cblk->u.mStreaming.mRear;
android_atomic_release_store(stepCount + rear, &cblk->u.mStreaming.mRear);
// when reading data, advance the mFront counter
} else {
int32_t front = cblk->u.mStreaming.mFront;
android_atomic_release_store(stepCount + front, &cblk->u.mStreaming.mFront);
}
}
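The obtainBuffer/releaseBuffer pair above revolves around two monotonically increasing counters and a power-of-two buffer size. Here is a hedged sketch of the contiguous-chunk computation (the function name and signature are mine, not AOSP's); for simplicity it assumes frameCount equals the power-of-two frameCountP2:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Sketch of obtainBuffer's contiguous-chunk math. frameCountP2 is the FIFO
// size rounded up to a power of two; rear/front are monotonically increasing
// frame counters, so (rear & (frameCountP2-1)) is the physical write index.
// Returns the largest writable chunk that is contiguous in memory.
size_t contiguousWritable(uint32_t rear, uint32_t front,
                          size_t frameCount, size_t frameCountP2) {
    size_t filled = rear - front;            // frames already in the pipe
    size_t avail = frameCount - filled;      // free frames overall
    size_t idx = rear & (frameCountP2 - 1);  // physical index of rear
    size_t part1 = frameCountP2 - idx;       // frames until the wrap point
    return part1 < avail ? part1 : avail;    // the FIFO wraps, so clamp
}
```

When the free space wraps around the end of the buffer, only the first contiguous piece is returned; the caller obtains the remainder on the next pass, which is why AudioTrack::write loops.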
One detail worth inserting: before write can actually deliver audio data, AudioTrack's start function must be called; the framework does this for us, so no manual call is needed. start crosses the process boundary via TrackHandle into Track::start on the AudioFlinger side. The key part of Track::start:
status_t AudioFlinger::PlaybackThread::Track::start(AudioSystem::sync_event_t event __unused,
audio_session_t triggerSession __unused)
{
......
status = playbackThread->addTrack_l(this);
......
}
------------------------------------------------------------------------------
status_t AudioFlinger::PlaybackThread::addTrack_l(const sp<Track>& track)
{
mActiveTracks.add(track);
}
That is, the current track is added to the PlaybackThread's mActiveTracks. Looking back at the data-flow diagram above, this is how the PlaybackThread determines which Track to consume.
2.7 AudioFlinger reading data
2.7.1 PlaybackThread::threadLoop
The AudioFlinger server reads the shared-memory data from the PlaybackThread's threadLoop() method, which is brought up as follows:
AudioPolicyService::onFirstRef(...)
->AudioPolicyService::createAudioPolicyManager(...);
-->AudioPolicyManager(...);
--->AudioFlinger::loadHwModule(...);
--->AudioFlinger::openOutput(...);
---->AudioFlinger::openOutput_l(...);
----->create the thread (a MixerThread; note: no flags are passed this first time, so no OffloadThread or DirectOutputThread is created)
----->mPlaybackThreads.add(*output, thread); add the thread to mPlaybackThreads
The main function in main_audioserver.cpp starts the audioserver process, which hosts the AudioFlinger service.
int main(int argc __unused, char **argv)
{
...
AudioFlinger::instantiate();
AudioPolicyService::instantiate();
...
}
instantiate() is a function of AudioFlinger's parent class BinderService, which is a class template. Here is the implementation:
class BinderService
{
public:
static status_t publish(bool allowIsolated = false) {
sp<IServiceManager> sm(defaultServiceManager());
return sm->addService(
String16(SERVICE::getServiceName()),
new SERVICE(), allowIsolated);
}
static void instantiate() { publish(); }
};
As we can see, instantiate() simply calls publish(), which obtains the ServiceManager object and registers the service via addService. The first argument to addService is the service name and the second is a service object, so this is where the AudioFlinger object is actually created. Because AudioFlinger's base BnAudioFlinger ultimately derives from RefBase, and onFirstRef is invoked when the first strong pointer takes a reference, passing new AudioFlinger() to addService triggers AudioFlinger::onFirstRef, which does nothing of importance.
In fact, AudioFlinger's real audio-data processing happens in AudioFlinger::PlaybackThread::threadLoop, so let's trace how that gets called. AudioFlinger::openOutput_l news a MixerThread (or another playback thread) and stores it with its outputId in mPlaybackThreads:
sp<AudioFlinger::PlaybackThread> AudioFlinger::openOutput_l(audio_module_handle_t module,
audio_io_handle_t *output,
audio_config_t *config,
audio_devices_t devices,
const String8& address,
audio_output_flags_t flags)
{
...
if (status == NO_ERROR) {
//create the playback thread
PlaybackThread *thread;
if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
ALOGV("openOutput_l() created offload output: ID %d thread %p", *output, thread);
} else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
|| !isValidPcmSinkFormat(config->format)
|| !isValidPcmSinkChannelMask(config->channel_mask)) {
thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
ALOGV("openOutput_l() created direct output: ID %d thread %p", *output, thread);
} else {
thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);
ALOGV("openOutput_l() created mixer output: ID %d thread %p", *output, thread);
}
//add it to mPlaybackThreads
mPlaybackThreads.add(*output, thread); //add the playback thread
return thread;
}
}
Because PlaybackThread ultimately derives from Thread and RefBase, taking the first strong reference to it triggers its onFirstRef(), which starts the thread that will run PlaybackThread::threadLoop. The code:
void AudioFlinger::PlaybackThread::onFirstRef()
{
run(mThreadName, ANDROID_PRIORITY_URGENT_AUDIO);
}
So it calls run(). PlaybackThread itself does not define run(); the call resolves to its parent class. Here is the implementation:
// system/core/libutils/Threads.cpp
status_t Thread::run(const char* name, int32_t priority, size_t stack)
{
LOG_ALWAYS_FATAL_IF(name == nullptr, "thread name not provided to Thread::run");
.......
bool res;
if (mCanCallJava) {
res = createThreadEtc(_threadLoop,
this, name, priority, stack, &mThread);
} else {
res = androidCreateRawThreadEtc(_threadLoop,
this, name, priority, stack, &mThread);
}
......
return NO_ERROR;
}
_threadLoop is passed as an android_thread_func_t, a function-pointer type. The _threadLoop function:
// system/core/libutils/Threads.cpp
int Thread::_threadLoop(void* user)
{
Thread* const self = static_cast<Thread*>(user);
......
do {
bool result;
if (first) {
first = false;
self->mStatus = self->readyToRun();
result = (self->mStatus == NO_ERROR);
if (result && !self->exitPending()) {
result = self->threadLoop();
}
} else {
result = self->threadLoop();
}
......
return 0;
}
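The chain just traced, first strong reference → onFirstRef() → run() → _threadLoop() → repeated threadLoop() calls, can be modeled with standard C++. This is an analogy using std::thread, not the RefBase/Thread implementation itself; all names are illustrative:

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Minimal analogue of the RefBase/Thread pattern: in AOSP, the first sp<>
// wrapping PlaybackThread calls onFirstRef(), which calls run(); run()
// spawns a thread whose entry point keeps invoking threadLoop() until it
// returns false or an exit is requested.
class LoopThread {
public:
    // Stands in for onFirstRef() + run(): start the loop.
    void start() {
        mWorker = std::thread([this] {
            while (threadLoop()) {}  // the _threadLoop driver loop
        });
    }
    void requestExitAndWait() {
        mExit = true;
        if (mWorker.joinable()) mWorker.join();
    }
    int iterations() const { return mIters; }

protected:
    // Stands in for PlaybackThread::threadLoop(): one unit of work per call.
    virtual bool threadLoop() {
        ++mIters;
        return !mExit;  // returning false ends the thread
    }

private:
    std::thread mWorker;
    std::atomic<bool> mExit{false};
    std::atomic<int> mIters{0};
};
```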
So the thread entry point ultimately invokes PlaybackThread::threadLoop().
bool AudioFlinger::PlaybackThread::threadLoop()
{
//...
while (!exitPending())
{
cpuStats.sample(myName);
Vector< sp<EffectChain> > effectChains;
{ // scope for mLock
Mutex::Autolock _l(mLock);
//Handle configuration changes: when a config-change event occurs, sendConfigEvent_l()
//notifies the PlaybackThread so it can process the event promptly; a common one is an audio-path switch
processConfigEvents_l();
//...
//if there is no data, let the audio device go into standby
if ((!mActiveTracks.size() && systemTime() > standbyTime) ||
isSuspended()) {
// put audio hardware into standby after short delay
if (shouldStandby_l()) {
//put the audio device into standby
threadLoop_standby();
mStandby = true;
}
if (!mActiveTracks.size() && mConfigEvents.isEmpty()) {
IPCThreadState::self()->flushCommands();
//...
//the thread sleeps here until AudioTrack signals it awake
mWaitWorkCV.wait(mLock);
//...
continue;
}
}
//key point 1: preparation before mixing
mMixerStatus = prepareTracks_l(&tracksToRemove);
//...
lockEffectChains_l(effectChains);
} // mLock scope ends
if (mBytesRemaining == 0) {
mCurrentWriteLength = 0;
if (mMixerStatus == MIXER_TRACKS_READY) {
//key point 2: mixing
threadLoop_mix();
} else if ((mMixerStatus != MIXER_DRAIN_TRACK)
&& (mMixerStatus != MIXER_DRAIN_ALL)) {
threadLoop_sleepTime();
if (sleepTime == 0) {
mCurrentWriteLength = mSinkBufferSize;
}
}
if (mMixerBufferValid) {
void *buffer = mEffectBufferValid ? mEffectBuffer : mSinkBuffer;
audio_format_t format = mEffectBufferValid ? mEffectBufferFormat : mFormat;
//copy data from thread.mMixerBuffer into thread.mSinkBuffer
memcpy_by_audio_format(buffer, format, mMixerBuffer, mMixerBufferFormat,
mNormalFrameCount * mChannelCount);
}
//...
}
//...
if (mEffectBufferValid) {
//copy data from thread.mEffectBuffer into thread.mSinkBuffer
memcpy_by_audio_format(mSinkBuffer, mFormat, mEffectBuffer, mEffectBufferFormat,
mNormalFrameCount * mChannelCount);
}
// enable changes in effect chain
unlockEffectChains(effectChains);
if (!waitingAsyncCallback()) {
// sleepTime == 0 means we must write to audio hardware
if (sleepTime == 0) {
if (mBytesRemaining) {
//key point 3: audio output
ssize_t ret = threadLoop_write();
//...
} else if ((mMixerStatus == MIXER_DRAIN_TRACK) ||
(mMixerStatus == MIXER_DRAIN_ALL)) {
threadLoop_drain();
}
//...
} else {
usleep(sleepTime);
}
}
//...
}
threadLoop_exit();
if (!mStandby) {
threadLoop_standby();
mStandby = true;
}
//...
return false;
}
PlaybackThread is responsible for creating the thread, but the key methods analyzed next are implemented in DirectOutputThread and MixerThread. We analyze three key methods: prepareTracks_l, threadLoop_mix, and threadLoop_write.
2.7.2 MixerThread
2.7.2.1 prepareTracks_l()
AudioFlinger::PlaybackThread::mixer_state AudioFlinger::MixerThread::prepareTracks_l(
Vector< sp<Track> > *tracksToRemove)
{
//processDeletedTrackIds takes a function (here a lambda) and iterates over the mDeletedTrackIds
(void)mTracks.processDeletedTrackIds([this](int trackId) {
// collection, deleting the tracks scheduled for removal; if the AudioMixer also holds such a track, remove it from the mixer too
if (mAudioMixer->exists(trackId)) {
mAudioMixer->destroy(trackId);
}
});
//clear the mDeletedTrackIds collection, which held the tracks to delete
mTracks.clearDeletedTrackIds();
//initialize the mixer state to IDLE
mixer_state mixerStatus = MIXER_IDLE;
//mActiveTracks holds the active Tracks, i.e. the ones to be mixed
size_t count = mActiveTracks.size();
//masterVolume is the system master volume; there are also per-stream-type volumes, etc.
float masterVolume = mMasterVolume;
//mMasterMute is the system master mute state
bool masterMute = mMasterMute;
//if the system is muted, set the master volume to 0 so nothing is heard
if (masterMute) {
masterVolume = 0;
}
......
bool noFastHapticTrack = true;
//iterate over all live Tracks of this MixerThread
for (size_t i=0 ; i<count ; i++) {
const sp<Track> t = mActiveTracks[i];
// this const just means the local variable doesn't change
Track* const track = t.get();
// fast tracks are not mixed here; skip them
if (track->isFastTrack()) {
//...
}
.........
{
audio_track_cblk_t* cblk = track->cblk();
const int trackId = track->id();
//first check whether the mixer (AudioMixer) has already created its internal track
if (!mAudioMixer->exists(trackId)) {
//before mixing, create a track inside AudioMixer corresponding to AudioFlinger's outer Track
status_t status = mAudioMixer->create(
trackId,
track->mChannelMask,
track->mFormat,
track->mSessionId);
......
}
//compute the minimum number of frames needed for one mix pass
size_t desiredFrames;
const uint32_t sampleRate = track->mAudioTrackServerProxy->getSampleRate();
AudioPlaybackRate playbackRate = track->mAudioTrackServerProxy->getPlaybackRate();
//"desired" frames: mNormalFrameCount is the minimum the HAL-side buffer accepts;
//factor in sample rate and playback speed to compute the minimum the upper layer must supply, to avoid underruns
desiredFrames = sourceFramesNeededWithTimestretch(
sampleRate, mNormalFrameCount, mSampleRate, playbackRate.mSpeed);
desiredFrames += mAudioMixer->getUnreleasedFrames(trackId);
uint32_t minFrames = 1;
if ((track->sharedBuffer() == 0) && !track->isStopped() && !track->isPausing() &&
(mMixerStatusIgnoringFastTracks == MIXER_TRACKS_READY)) {
minFrames = desiredFrames;
}
//the amount of audio data currently available in the shared memory
size_t framesReady = track->framesReady();
//if the frames ready exceed the minimum, there is enough data to mix; otherwise wait for more data
if ((framesReady >= minFrames) && track->isReady() &&
!track->isPaused() && !track->isTerminated())
{
mixedTracks++;
//start computing the volume
int param = AudioMixer::VOLUME;
if (track->mFillingUpStatus == Track::FS_FILLED) {
// no ramp for the first volume setting
track->mFillingUpStatus = Track::FS_ACTIVE;
if (track->mState == TrackBase::RESUMING) {
track->mState = TrackBase::ACTIVE;
//resuming from pause, and mServer (the server-side data position) is non-zero
if (cblk->mServer != 0) {
//apply a volume ramp, so the volume fades back in on resume
param = AudioMixer::RAMP_VOLUME;
}
}
//reset the resampler: clear the mixer's resampler state for this track
mAudioMixer->setParameter(trackId, AudioMixer::RESAMPLE, AudioMixer::RESET, NULL);
mLeftVolFloat = -1.0;
// FIXME should not make a decision based on mServer
} else if (cblk->mServer != 0) {
// If the track is stopped before the first frame was mixed,
// do not apply ramp
param = AudioMixer::RAMP_VOLUME;
}
// compute volume for this track
uint32_t vl, vr; // in U8.24 integer format
float vlf, vrf, vaf; // in [0.0, 1.0] float format
// read original volumes with volume control
float typeVolume = mStreamTypes[track->streamType()].volume;
//why multiplication rather than addition? stream-type volume * master volume
float v = masterVolume * typeVolume;
const sp<AudioTrackServerProxy> proxy = track->mAudioTrackServerProxy;
//This is the ramp volume (VolumeShaper) exposed to the application layer; based on
//the frames already released (played), look up the shaper's gain at that point
const float vh = track->getVolumeHandler()->getVolume(
track->mAudioTrackServerProxy->framesReleased()).first;
//if the track is pausing, or its stream type is muted, set the volume to 0
if (track->isPausing() || mStreamTypes[track->streamType()].mute
|| track->isPlaybackRestricted()) {
vl = vr = 0;
vlf = vrf = vaf = 0.;
if (track->isPausing()) {
track->setPaused();
}
} else {
//Get this Track's volume, delivered via mCblk from the client side, under user control
/* defaults:
 * mVolume[AUDIO_INTERLEAVE_LEFT] = 1.0f;
 * mVolume[AUDIO_INTERLEAVE_RIGHT] = 1.0f;
 */
// can be set via AudioTrack::setVolume(); most apps never touch it
gain_minifloat_packed_t vlr = proxy->getVolumeLR();
//left and right channel gains are packed into vlr; unpack them into two separate values
vlf = float_from_gain(gain_minifloat_unpack_left(vlr));
vrf = float_from_gain(gain_minifloat_unpack_right(vlr));
//GAIN_FLOAT_UNITY is 1.0, i.e. the gain may not exceed 1
if (vlf > GAIN_FLOAT_UNITY) {
ALOGV("Track left volume out of range: %.3g", vlf);
vlf = GAIN_FLOAT_UNITY;
}
if (vrf > GAIN_FLOAT_UNITY) {
ALOGV("Track right volume out of range: %.3g", vrf);
vrf = GAIN_FLOAT_UNITY;
}
// Why multiplication? See mixing theory: linear gains multiply (dB values add).
// v: masterVolume * typeVolume
// vh: VolumeShaper, the system fade-in/fade-out gain
// vlf: AudioTrack::setVolume(), default 1.0
vlf *= v * vh;
vrf *= v * vh;
......
}
track->setFinalVolume((vrf + vlf) / 2.f);
......
// XXX: these things DON'T need to be done each time
mAudioMixer->setBufferProvider(trackId, track);
mAudioMixer->enable(trackId);
//param may be VOLUME or RAMP_VOLUME; sets the volume or aux level into the AudioMixer track's mAuxLevel or mVolume
mAudioMixer->setParameter(trackId, param, AudioMixer::VOLUME0, &vlf);
mAudioMixer->setParameter(trackId, param, AudioMixer::VOLUME1, &vrf);
mAudioMixer->setParameter(trackId, param, AudioMixer::AUXLEVEL, &vaf);
mAudioMixer->setParameter(
trackId,
AudioMixer::TRACK,
AudioMixer::FORMAT, (void *)track->format());
mAudioMixer->setParameter(
trackId,
AudioMixer::TRACK,
AudioMixer::CHANNEL_MASK, (void *)(uintptr_t)track->channelMask());
mAudioMixer->setParameter(
trackId,
AudioMixer::TRACK,
AudioMixer::MIXER_CHANNEL_MASK,
(void *)(uintptr_t)(mChannelMask | mHapticChannelMask));
/*创建重采样管理器
*注意:安卓的MixerThread会对所有的track进行重采样
*那么在混音的时候会调用重采样的混音方法。
*/
mAudioMixer->setParameter(
trackId,
AudioMixer::RESAMPLE,
AudioMixer::SAMPLE_RATE,
(void *)(uintptr_t)reqSampleRate);
AudioPlaybackRate playbackRate = proxy->getPlaybackRate();
//设置回播率
mAudioMixer->setParameter(
trackId,
AudioMixer::TIMESTRETCH,
AudioMixer::PLAYBACK_RATE,
&playbackRate);
//mainBuffer是音频数据来源的地址
// mMixerBufferEnabled:kEnableExtendedPrecision = true
if (mMixerBufferEnabled
&& (track->mainBuffer() == mSinkBuffer
|| track->mainBuffer() == mMixerBuffer)) {
mAudioMixer->setParameter(
name,
AudioMixer::TRACK,
AudioMixer::MIXER_FORMAT, (void *)mMixerBufferFormat);
mAudioMixer->setParameter(
name,
AudioMixer::TRACK,
AudioMixer::MAIN_BUFFER, (void *)mMixerBuffer);
// TODO: override track->mainBuffer()?
mMixerBufferValid = true;
} else {
// mainBuffer() ==> mMainBuffer ==> thread->sinkBuffer() ==> mSinkBuffer
mAudioMixer->setParameter(
name,
AudioMixer::TRACK,
AudioMixer::MIXER_FORMAT, (void *)EFFECT_BUFFER_FORMAT);
mAudioMixer->setParameter(
name,
AudioMixer::TRACK,
AudioMixer::MAIN_BUFFER, (void *)track->mainBuffer());
}
//auxBuffer是音频效果effect那边的buffer
mAudioMixer->setParameter(
trackId,
AudioMixer::TRACK,
AudioMixer::AUX_BUFFER, (void *)track->auxBuffer());
.....
if (mMixerStatusIgnoringFastTracks != MIXER_TRACKS_READY ||
mixerStatus != MIXER_TRACKS_ENABLED) {
mixerStatus = MIXER_TRACKS_READY;
}
} else {
//音频数据没准备够的处理
.......
mixerStatus = MIXER_TRACKS_ENABLED;
}
}
}
//返回状态
return mixerStatus;
}
From the code above, prepareTracks_l does three main jobs:
- check whether enough audio data is ready
- compute the track volume
- push the outer Track's attributes and parameters into the AudioMixer's internal Track
Finally it returns mixerStatus to threadLoop(), which decides based on that status whether to mix the audio data or sleep and wait for more data to arrive.
(1) Deciding whether enough audio data is ready
First, data travels from the application to AudioFlinger through anonymous shared memory of fixed size, and likewise from AudioFlinger toward the HAL.
Second, when the HAL output stream is opened, its buffer size is handed to the PlaybackThread, which derives mNormalFrameCount, the minimum number of frames the sink buffer processes per mix cycle.
Finally, since the device-side (kernel/HAL) sample rate may differ from the track's sample rate, mNormalFrameCount must be converted to the track's rate; the track is considered ready only if it can supply at least that many frames.
// compute the minimum number of frames one mix pass needs
size_t desiredFrames;
const uint32_t sampleRate = track->mAudioTrackServerProxy->getSampleRate();
AudioPlaybackRate playbackRate = track->mAudioTrackServerProxy->getPlaybackRate();
// mNormalFrameCount is the number of sink frames consumed per mix cycle; convert it
// to source frames using the sample rates and playback speed, so the track supplies
// enough data to avoid an underrun
desiredFrames = sourceFramesNeededWithTimestretch(
        sampleRate, mNormalFrameCount, mSampleRate, playbackRate.mSpeed);
desiredFrames += mAudioMixer->getUnreleasedFrames(trackId);
// minFrames defaults to 1: in MODE_STATIC the whole buffer was written up front
uint32_t minFrames = 1;
if ((track->sharedBuffer() == 0) && !track->isStopped() && !track->isPausing() &&
        (mMixerStatusIgnoringFastTracks == MIXER_TRACKS_READY)) {
    minFrames = desiredFrames;
}
Let's see how sourceFramesNeededWithTimestretch arrives at this number:
static inline size_t sourceFramesNeededWithTimestretch(
        uint32_t srcSampleRate, size_t dstFramesRequired, uint32_t dstSampleRate,
        float speed) {
    // required is the number of input frames the resampler needs
    size_t required = sourceFramesNeeded(srcSampleRate, dstFramesRequired, dstSampleRate);
    // to deliver this, the time stretcher requires:
    return required * (double)speed + 1 + 1; // accounting for rounding dependencies
}
------------------------------------------------------------------------------
static inline size_t sourceFramesNeeded(
        uint32_t srcSampleRate, size_t dstFramesRequired, uint32_t dstSampleRate) {
    /* Why this formula? Both sides cover the same playback duration:
     * duration = srcFramesRequired / srcSampleRate = dstFramesRequired / dstSampleRate
     * so srcFramesRequired = dstFramesRequired * srcSampleRate / dstSampleRate
     * (the +1 +1 accounts for resampler rounding/priming)
     */
    return srcSampleRate == dstSampleRate ? dstFramesRequired :
            size_t((uint64_t)dstFramesRequired * srcSampleRate / dstSampleRate + 1 + 1);
}
That is roughly how the amount of data the client must supply is derived: convert between the two sample rates as in sourceFramesNeeded above, then scale by the playback speed.
(2) Computing the volume
Android's audio system combines several volumes:
- the master volume masterVolume
- the stream-type volume typeVolume: music, call, alarm, etc.
- the per-track volume set through AudioTrack
On top of those, AudioTrack offers VolumeShaper ramped volume: if the application installs a shaper, the final volume changes over time. There is also a mute flag; a muted stream type gets volume 0.
The volume computation itself is in the prepareTracks_l excerpt above; the point to unpack here is that all these volumes are combined into the final volume by multiplication:
finalVolume = masterVolume * typeVolume * volume * ...
The values being combined are linear gains. Perceived loudness is measured in decibels, a logarithmic scale, so the values cannot simply be added: 1 dB plus 1 dB is not 2 dB. Multiplying linear gains is exactly equivalent to adding their dB values, which is why the gains are multiplied.
(3) Setting attributes on the AudioMixer's internal Track
Many things are set: format, sampleRate, volume, the output address mainBuffer, the resampler, and more. They all go through a single method, setParameter():
void AudioMixer::setParameter(int name, int target, int param, void *value)
{
LOG_ALWAYS_FATAL_IF(!exists(name), "invalid name: %d", name);
const std::shared_ptr<Track> &track = mTracks[name];
int valueInt = static_cast<int>(reinterpret_cast<uintptr_t>(value));
int32_t *valueBuf = reinterpret_cast<int32_t*>(value);
switch (target) {
    case TRACK:
        switch (param) {
        case CHANNEL_MASK:
        case MAIN_BUFFER:
        case AUX_BUFFER:
        case FORMAT:
        case MIXER_FORMAT:
        case MIXER_CHANNEL_MASK:
        case HAPTIC_ENABLED:
        case HAPTIC_INTENSITY:
            // per-parameter handling elided here
            ......
            break;
        default:
            LOG_ALWAYS_FATAL("setParameter track: bad param %d", param);
        }
        break;
    case RESAMPLE:
        switch (param) {
        case SAMPLE_RATE:
            if (track->setResampler(uint32_t(valueInt), mSampleRate)) {
                ALOGV("setParameter(RESAMPLE, SAMPLE_RATE, %u)",
                        uint32_t(valueInt));
                invalidate();
            }
            break;
        case RESET:
        case REMOVE:
            ......
            break;
        default:
            LOG_ALWAYS_FATAL("setParameter resample: bad param %d", param);
        }
        break;
case RAMP_VOLUME:
case VOLUME:
switch (param) {
case AUXLEVEL:
if (setVolumeRampVariables(*reinterpret_cast<float*>(value),
target == RAMP_VOLUME ? mFrameCount : 0,
&track->auxLevel, &track->prevAuxLevel, &track->auxInc,
&track->mAuxLevel, &track->mPrevAuxLevel, &track->mAuxInc)) {
ALOGV("setParameter(%s, AUXLEVEL: %04x)",
target == VOLUME ? "VOLUME" : "RAMP_VOLUME", track->auxLevel);
invalidate();
}
break;
default:
if ((unsigned)param >= VOLUME0 && (unsigned)param < VOLUME0 + MAX_NUM_VOLUMES) {
if (setVolumeRampVariables(*reinterpret_cast<float*>(value),
target == RAMP_VOLUME ? mFrameCount : 0,
&track->volume[param - VOLUME0],
&track->prevVolume[param - VOLUME0],
&track->volumeInc[param - VOLUME0],
&track->mVolume[param - VOLUME0],
&track->mPrevVolume[param - VOLUME0],
&track->mVolumeInc[param - VOLUME0])) {
ALOGV("setParameter(%s, VOLUME%d: %04x)",
target == VOLUME ? "VOLUME" : "RAMP_VOLUME", param - VOLUME0,
track->volume[param - VOLUME0]);
invalidate();
}
} else {
LOG_ALWAYS_FATAL("setParameter volume: bad param %d", param);
}
}
break;
    case TIMESTRETCH:
        switch (param) {
        case PLAYBACK_RATE:
            ......
            break;
        default:
            LOG_ALWAYS_FATAL("setParameter timestretch: bad param %d", param);
        }
        break;
    default:
        LOG_ALWAYS_FATAL("setParameter: bad target %d", target);
    }
}
setParameter() is essentially one big switch-case that pushes parameters into the AudioMixer's internal Track. The important ones:
- Setting the resampler
When target is RESAMPLE and param is SAMPLE_RATE, setParameter installs the resampler. Look at setResampler():
/**
 * trackSampleRate is the track's (pre-mix) rate, devSampleRate the mix (device) rate
 */
bool AudioMixer::Track::setResampler(uint32_t trackSampleRate, uint32_t devSampleRate)
{
    /* A resampler is needed when:
     * 1. the track's sample rate differs from the device rate, or
     * 2. a resampler already exists (mResampler is a member of Track)
     */
    if (trackSampleRate != devSampleRate || mResampler.get() != nullptr) {
        // sampleRate is a Track member, initialized to mSampleRate at creation
        if (sampleRate != trackSampleRate) {
            sampleRate = trackSampleRate;
            if (mResampler.get() == nullptr) {
                ......
                // create the AudioResampler; the parameters describe its output
                mResampler.reset(AudioResampler::create(
                        mMixerInFormat,
                        resamplerChannelCount,
                        devSampleRate, quality));
            }
            return true;
        }
    }
    return false;
}
The resampler implementation is chosen from the post-resample channel count, sample rate and quality, and stored in the Track's mResampler member.
- Setting the track volume
Before looking at how the volume is written into the track, meet the Track members related to volume:
static constexpr uint32_t MAX_NUM_VOLUMES = FCC_2; // stereo volume only (value 2)
union {
    // volume holds the client-set volume converted from float to 16-bit fixed point;
    // the two array slots are the left and right channels
    int16_t     volume[MAX_NUM_VOLUMES]; // U4.12 fixed point (top bit should be zero)
    int32_t     volumeRL;
};
/**
 * MAX_NUM_VOLUMES is normally 2: left and right channel volumes
 */
int32_t     prevVolume[MAX_NUM_VOLUMES]; // previous volume, 32-bit; used for ramp volume
int32_t     volumeInc[MAX_NUM_VOLUMES];  // per-frame volume increment; used for ramp volume
// the next three mirror the three above, but keep the raw float volume from the app layer
float       mVolume[MAX_NUM_VOLUMES];
float       mPrevVolume[MAX_NUM_VOLUMES];
float       mVolumeInc[MAX_NUM_VOLUMES];
The members prefixed with m hold the raw float volume set through AudioTrack; the unprefixed members hold the integer conversion, and the fixed-point mixing paths use the unprefixed ones. So what is ramp volume? Simply a volume that changes over time, fading up or down: each frame the previous volume (prevVolume) is advanced by the increment (volumeInc), producing a smooth transition.
With that in mind, setParameter(trackId, target=VOLUME/RAMP_VOLUME, param=AudioMixer::VOLUME0, value=&vlf) becomes straightforward: it just computes and assigns the volume, prevVolume and related members listed above:
// make sure param indexes a valid channel: VOLUME0 .. VOLUME0 + MAX_NUM_VOLUMES
if ((unsigned)param >= VOLUME0 && (unsigned)param < VOLUME0 + MAX_NUM_VOLUMES) {
    if (setVolumeRampVariables(*reinterpret_cast<float*>(value),
            // frame count when ramping, 0 otherwise
            target == RAMP_VOLUME ? mFrameCount : 0,
            // addresses of the track's volume members; param - VOLUME0 is the channel index
            &track->volume[param - VOLUME0],
            &track->prevVolume[param - VOLUME0],
            &track->volumeInc[param - VOLUME0],
            &track->mVolume[param - VOLUME0],
            &track->mPrevVolume[param - VOLUME0],
            &track->mVolumeInc[param - VOLUME0])) {
        ALOGV("setParameter(%s, VOLUME%d: %04x)",
                target == VOLUME ? "VOLUME" : "RAMP_VOLUME", param - VOLUME0,
                track->volume[param - VOLUME0]);
        invalidate();
    }
}
Now the key function, setVolumeRampVariables(). Its arguments map onto the Track members as follows:
- newVolume: the volume set by the application
- ramp: the frame count when ramping, otherwise 0
- pIntSetVolume -> volume
- pIntPrevVolume -> prevVolume
- pIntVolumeInc -> volumeInc
- pSetVolume -> mVolume
- pPrevVolume -> mPrevVolume
- pVolumeInc -> mVolumeInc
static inline bool setVolumeRampVariables(float newVolume, int32_t ramp,
        int16_t *pIntSetVolume, int32_t *pIntPrevVolume, int32_t *pIntVolumeInc,
        float *pSetVolume, float *pPrevVolume, float *pVolumeInc) {
    // if the new volume equals the previously set one, nothing changed: bail out
    if (newVolume == *pSetVolume) {
        return false;
    }
    if (newVolume < 0) {
        newVolume = 0; // negative volumes are not allowed
    } else {
        // classify newVolume: NaN, infinite, zero or a normal number
        switch (fpclassify(newVolume)) {
        // subnormal or NaN
        case FP_SUBNORMAL:
        case FP_NAN:
            newVolume = 0;
            break;
        // zero
        case FP_ZERO:
            break; // zero volume is fine
        // infinity
        case FP_INFINITE:
            // clamp infinity to UNITY_GAIN_FLOAT, i.e. the 1.0f maximum
            newVolume = AudioMixer::UNITY_GAIN_FLOAT;
            break;
        // normal values are clamped to unity as well
        case FP_NORMAL:
        default:
            if (newVolume > AudioMixer::UNITY_GAIN_FLOAT) {
                newVolume = AudioMixer::UNITY_GAIN_FLOAT;
            }
            break;
        }
    }
    // for ramp volume, ramp is the number of frames to fade over
    if (ramp != 0) {
        // per-frame increment: total change divided by the ramp frame count
        const float inc = (newVolume - *pPrevVolume) / ramp;
        // could be inf, cannot be nan, subnormal
        const float maxv = std::max(newVolume, *pPrevVolume);
        if (isnormal(inc) // inc must be a normal number (no subnormals, infinite, nan)
                && maxv + inc != maxv) { // inc must make forward progress
            *pVolumeInc = inc; // store the per-frame increment (float)
        } else {
            ramp = 0; // give up on ramping
        }
    }
    // newVolume is a float in [0, 1]; scale by UNITY_GAIN_INT to get fixed point
    const float scaledVolume = newVolume * AudioMixer::UNITY_GAIN_INT;
    const int32_t intVolume = (scaledVolume >= (float)AudioMixer::UNITY_GAIN_INT) ?
            AudioMixer::UNITY_GAIN_INT : (int32_t)scaledVolume; // widen to 32 bits
    if (ramp != 0) {
        // intVolume is a 16-bit fixed-point value; shift left 16 into the high half
        // for the 32-bit ramp format, subtract the previous value, and divide by the
        // ramp frame count to get the integer increment
        const int32_t inc = ((intVolume << 16) - *pIntPrevVolume) / ramp;
        if (inc != 0) {
            *pIntVolumeInc = inc; // store the per-frame increment (int)
        } else {
            ramp = 0; // ramp not allowed
        }
    }
    // no ramp: zero the increments and jump straight to the new volume
    if (ramp == 0) {
        *pVolumeInc = 0;            // zero float increment, i.e. no ramp
        *pPrevVolume = newVolume;   // previous volume starts at the new volume (float)
        *pIntVolumeInc = 0;         // same as pVolumeInc, integer form
        // the previous integer volume: intVolume was derived as 16-bit, so shift the
        // low 16 bits into the high half to get the 32-bit ramp representation
        *pIntPrevVolume = intVolume << 16;
    }
    *pSetVolume = newVolume;    // the volume that was set (float)
    *pIntSetVolume = intVolume; // the volume that was set (int)
    return true;
}
This function just fills in the Track's volume-related members: mVolume gets the new float volume and volume its fixed-point conversion; for ramp volume, volumeInc = (volume - prevVolume) / frameCount is the per-frame increment, and mVolumeInc is derived the same way.
The other Track members are all assigned somewhere inside AudioMixer::setParameter(); rather than walking through each one, here is what the important members mean:
// bufferProvider is actually the Track created by PlaybackThread;
// AudioBufferProvider is its base class
AudioBufferProvider*                bufferProvider;
// buffer receives the real audio data read from bufferProvider
mutable AudioBufferProvider::Buffer buffer; // 8 bytes
// hook matters: it is a function pointer that usually holds the mixing function.
// AudioMixer itself also has an mHook member; do not confuse the two.
hook_t                              hook;
// whether resampled or not, the raw audio data from buffer is read through mIn
const void*                         mIn;    // current location in buffer
// the resampler
std::unique_ptr<AudioResampler>     mResampler;
// this Track's sample rate before mixing
uint32_t                            sampleRate;
// destination of the mixed audio data; usually points at the outer Track's
// mainBuffer, which in turn is set to the PlaybackThread's mSinkBuffer
int32_t*                            mainBuffer;
/* auxiliary (effect send) output after mixing: a small fraction of the real
 * audio data, further scaled by mAuxLevel; usually points at the outer
 * Track's auxBuffer
 */
int32_t*                            auxBuffer;
// the final output format after mixing; AUDIO_FORMAT_PCM_16_BIT in the code analyzed here
audio_format_t mMixerFormat;   // output mix format: AUDIO_FORMAT_PCM_(FLOAT|16_BIT)
// the track's source format
audio_format_t mFormat;        // input track format
// the internal mixing format; uniformly AUDIO_FORMAT_PCM_FLOAT in the code analyzed here
audio_format_t mMixerInFormat; // mix internal format AUDIO_FORMAT_PCM_(FLOAT|16_BIT)
That mostly covers prepareTracks_l. The function also handles underruns and other cases not central to mixing, so they are skipped here.
To summarize:
prepareTracks_l does the setup before mixing: for each active track it prepares a matching internal Track inside AudioMixer and fills in its volume, buffers, format, resampler and other parameters.
2.7.2.2 threadLoop_mix()
Once prepareTracks_l returns, the thread runs the mixing function threadLoop_mix(). As you would guess, it mixes the Tracks prepared in the previous step:
void AudioFlinger::MixerThread::threadLoop_mix()
{
// mix buffers...
mAudioMixer->process();
mCurrentWriteLength = mSinkBufferSize;
if ((mSleepTimeUs == 0) && (sleepTimeShift > 0)) {
sleepTimeShift--;
}
mSleepTimeUs = 0;
mStandbyTimeNs = systemTime() + mStandbyDelayNs;
//TODO: delay standby when effects have a tail
}
Very simple: it just calls AudioMixer's process():
void process() {
    for (const auto &pair : mTracks) {
        // clear each track's contracted buffer if needed
        const std::shared_ptr<Track> &t = pair.second;
        if (t->mKeepContractedChannels) {
            t->clearContractedBuffer();
        }
    }
    // run the mixing function stored in mHook
    (this->*mHook)();
    processHapticData();
}
The key is the call through mHook, a function-pointer member of AudioMixer. Who assigns it? Recall that setParameter() keeps calling invalidate(); that is exactly the function that resets mHook:
void invalidate() {
mHook = &AudioMixer::process__validate;
}
Go straight to process__validate():
// in AudioMixer: key is a mainBuffer, value the vector of track names sharing it
std::unordered_map<void * /* mainBuffer */, std::vector<int /* name */>> mGroups;
// in AudioMixer: the names of all currently enabled tracks
std::vector<int /* name */> mEnabled;
// in AudioMixer: all Tracks
std::map<int /* name */, std::shared_ptr<Track>> mTracks;
void AudioMixer::process__validate()
{
    bool all16BitsStereoNoResample = true;
    bool resampling = false;
    bool volumeRamp = false;
    mEnabled.clear();
    // rebuild the track groups for this mix pass
    mGroups.clear();
    // walk every Track inside AudioMixer
    for (const auto &pair : mTracks) {
        const int name = pair.first;
        const std::shared_ptr<Track> &t = pair.second;
        // each track was enabled during prepareTracks_l
        if (!t->enabled) continue;
        // emplace_back constructs in place, avoiding the extra copy that
        // push_back can incur
        mEnabled.emplace_back(name); // we add to mEnabled in order of name.
        // mGroups: key is the mainBuffer output address, value the vector of
        // track names writing to it -- several tracks may share one output buffer
        mGroups[t->mainBuffer].emplace_back(name); // mGroups also in order of name.
        // n is a bit field of "needs" flags
        uint32_t n = 0;
        // FIXME can overflow (mask is only 3 bits)
        n |= NEEDS_CHANNEL_1 + t->channelCount - 1;
        // resampling needed? (checks whether the track's resampler exists)
        if (t->doesResample()) {
            n |= NEEDS_RESAMPLE;
        }
        // AUX is the auxiliary effect send: a scaled copy of the track's signal
        // is written into auxBuffer to feed an effect chain
        if (t->auxLevel != 0 && t->auxBuffer != NULL) {
            n |= NEEDS_AUX;
        }
        // a nonzero volumeInc means this track must ramp its volume while mixing
        if (t->volumeInc[0]|t->volumeInc[1]) {
            volumeRamp = true;
        } else if (!t->doesResample() && t->volumeRL == 0) {
            // no resampler and zero volume: the track can simply be muted
            n |= NEEDS_MUTE;
        }
        t->needs = n; // store the flags on the track
        if (n & NEEDS_MUTE) {
            // muted tracks do nothing, not even read audio data
            t->hook = &Track::track__nop; // track__nop is an empty function
        } else {
            if (n & NEEDS_AUX) {
                // the fast all-16-bit-stereo path cannot handle aux sends
                all16BitsStereoNoResample = false;
            }
            // resampling needed
            if (n & NEEDS_RESAMPLE) {
                all16BitsStereoNoResample = false;
                resampling = true;
                // choose the track's mixing hook; mMixerChannelCount is the mixer-side
                // channel count, mMixerInFormat the internal mix format,
                // mMixerFormat the post-mix output format
                t->hook = Track::getTrackHook(TRACKTYPE_RESAMPLE, t->mMixerChannelCount,
                        t->mMixerInFormat, t->mMixerFormat);
            // no resampling needed
            } else {
                // mono source
                if ((n & NEEDS_CHANNEL_COUNT__MASK) == NEEDS_CHANNEL_1){
                    t->hook = Track::getTrackHook(
                            // stereo mix from a mono source: TRACKTYPE_NORESAMPLEMONO;
                            // otherwise TRACKTYPE_NORESAMPLE
                            (t->mMixerChannelMask == AUDIO_CHANNEL_OUT_STEREO // TODO: MONO_HACK
                                    && t->channelMask == AUDIO_CHANNEL_OUT_MONO)
                                ? TRACKTYPE_NORESAMPLEMONO : TRACKTYPE_NORESAMPLE,
                            t->mMixerChannelCount,
                            t->mMixerInFormat, t->mMixerFormat);
                    all16BitsStereoNoResample = false;
                }
                // multi-channel source
                if ((n & NEEDS_CHANNEL_COUNT__MASK) >= NEEDS_CHANNEL_2){
                    // this only installs the track's hook; the hook is invoked later,
                    // from the AudioMixer process functions in this section, to do
                    // the actual mixing (after resampling where needed)
                    t->hook = Track::getTrackHook(TRACKTYPE_NORESAMPLE, t->mMixerChannelCount,
                            t->mMixerInFormat, t->mMixerFormat);
                }
            }
        }
    }
    // start from process__nop (which does nothing) and refine the choice below
    mHook = &AudioMixer::process__nop;
    // mEnabled holds the names of the tracks to mix
    if (mEnabled.size() > 0) {
        // resampling required
        if (resampling) {
            // allocate the mix/resample scratch buffers: channels * frames each
            if (mOutputTemp.get() == nullptr) {
                mOutputTemp.reset(new int32_t[MAX_NUM_CHANNELS * mFrameCount]);
            }
            if (mResampleTemp.get() == nullptr) {
                mResampleTemp.reset(new int32_t[MAX_NUM_CHANNELS * mFrameCount]);
            }
            // mHook becomes AudioMixer's resampling process function
            mHook = &AudioMixer::process__genericResampling;
        // no resampling required
        } else {
            // mHook becomes the non-resampling process function
            mHook = &AudioMixer::process__genericNoResampling;
            if (all16BitsStereoNoResample && !volumeRamp) {
                // if exactly one track is enabled
                if (mEnabled.size() == 1) {
                    const std::shared_ptr<Track> &t = mTracks[mEnabled[0]];
                    if ((t->needs & NEEDS_MUTE) == 0) { // and it is not muted
                        mHook = getProcessHook(PROCESSTYPE_NORESAMPLEONETRACK,
                                t->mMixerChannelCount, t->mMixerInFormat, t->mMixerFormat);
                    }
                }
            }
        }
    }
    // mHook and each track's hook have been re-targeted above, so this
    // process() call now runs the real mixing function
    process();
    .......
}
The key points of the function above:
- getTrackHook() picks each track's own mixing function and stores it in the track's hook pointer
- AudioMixer's mHook pointer is assigned the function that drives all tracks under this mixer
- depending on the path taken, mHook may in turn invoke each track's hook
(1) getTrackHook: choosing the track's own mixing function
Since this assigns a function pointer, you would expect a pile of if-else or switch-case, and indeed:
AudioMixer::hook_t AudioMixer::Track::getTrackHook(int trackType, uint32_t channelCount,
        audio_format_t mixerInFormat, audio_format_t mixerOutFormat __unused)
{
    .....
    switch (trackType) {
    case TRACKTYPE_NOP:
        return &Track::track__nop;
    // resampling needed
    case TRACKTYPE_RESAMPLE:
        switch (mixerInFormat) {
        // the internal mixing format mixerInFormat is FLOAT by default
        case AUDIO_FORMAT_PCM_FLOAT:
            return (AudioMixer::hook_t) &Track::track__Resample<
                    MIXTYPE_MULTI, float /*TO*/, float /*TI*/, TYPE_AUX>;
        case AUDIO_FORMAT_PCM_16_BIT:
            return (AudioMixer::hook_t) &Track::track__Resample<
                    MIXTYPE_MULTI, int32_t /*TO*/, int16_t /*TI*/, TYPE_AUX>;
        default:
            LOG_ALWAYS_FATAL("bad mixerInFormat: %#x", mixerInFormat);
            break;
        }
        break;
    // mono source, no resampling
    case TRACKTYPE_NORESAMPLEMONO:
        switch (mixerInFormat) {
        case AUDIO_FORMAT_PCM_FLOAT:
            // MIXTYPE_MONOEXPAND: the mono channel is expanded to all output channels
            return (AudioMixer::hook_t) &Track::track__NoResample<
                    MIXTYPE_MONOEXPAND, float /*TO*/, float /*TI*/, TYPE_AUX>;
        case AUDIO_FORMAT_PCM_16_BIT:
            return (AudioMixer::hook_t) &Track::track__NoResample<
                    MIXTYPE_MONOEXPAND, int32_t /*TO*/, int16_t /*TI*/, TYPE_AUX>;
        default:
            LOG_ALWAYS_FATAL("bad mixerInFormat: %#x", mixerInFormat);
            break;
        }
        break;
    // multi-channel source, no resampling
    case TRACKTYPE_NORESAMPLE:
        switch (mixerInFormat) {
        case AUDIO_FORMAT_PCM_FLOAT:
            // the source may have several channels
            return (AudioMixer::hook_t) &Track::track__NoResample<
                    MIXTYPE_MULTI, float /*TO*/, float /*TI*/, TYPE_AUX>;
        case AUDIO_FORMAT_PCM_16_BIT:
            return (AudioMixer::hook_t) &Track::track__NoResample<
                    MIXTYPE_MULTI, int32_t /*TO*/, int16_t /*TI*/, TYPE_AUX>;
        default:
            LOG_ALWAYS_FATAL("bad mixerInFormat: %#x", mixerInFormat);
            break;
        }
        break;
    default:
        LOG_ALWAYS_FATAL("bad trackType: %d", trackType);
        break;
    }
    return NULL;
}
That is where the hook is chosen; the real mixing logic lives behind it. The three outer cases are all very similar, so examining one of them is enough.
(2) The track's real mixing hook
Here we follow the TRACKTYPE_RESAMPLE path, picking this hook:
case AUDIO_FORMAT_PCM_FLOAT:
return (AudioMixer::hook_t) &Track::track__Resample<
MIXTYPE_MULTI, float /*TO*/, float /*TI*/, TYPE_AUX>;
Keep track__Resample's template parameters in mind: they decide which concrete function is eventually selected:
template <int MIXTYPE, typename TO, typename TI, typename TA>
// out: destination of the mixed data; outFrameCount: number of output frames
// temp: scratch buffer holding this track's data, later accumulated into out
void AudioMixer::Track::track__Resample(TO* out, size_t outFrameCount, TO* temp, TA* aux)
{
    ALOGVV("track__Resample\n");
    mResampler->setSampleRate(sampleRate);
    const bool ramp = needsRamp();
    if (ramp || aux != NULL) {
        // resample at unity gain; the volume is applied in volumeMix below
        mResampler->setVolume(UNITY_GAIN_FLOAT, UNITY_GAIN_FLOAT);
        memset(temp, 0, outFrameCount * mMixerChannelCount * sizeof(TO));
        // resample; the resampled data lands in the temp buffer
        mResampler->resample((int32_t*)temp, outFrameCount, bufferProvider);
        // now mix
        volumeMix<MIXTYPE, is_same<TI, float>::value /* USEFLOATVOL */, true /* ADJUSTVOL */>(
                out, outFrameCount, temp, aux, ramp);
    } else { // constant volume gain
        mResampler->setVolume(mVolume[0], mVolume[1]);
        mResampler->resample((int32_t*)out, outFrameCount, bufferProvider);
    }
}
The resampler itself is not expanded here: it converts each track to the common sample rate so that the tracks can be mixed at all, and its internals would take us too far afield. Focus on how volumeMix mixes. Note its template arguments: is_same compares TI with float, yielding true when they match and false otherwise. Step into volumeMix:
template <int MIXTYPE, bool USEFLOATVOL, bool ADJUSTVOL,
typename TO, typename TI, typename TA>
void AudioMixer::Track::volumeMix(TO *out, size_t outFrames,
const TI *in, TA *aux, bool ramp)
{
    //USEFLOATVOL is true on the path we followed here
if (USEFLOATVOL) {
if (ramp) {
volumeRampMulti<MIXTYPE>(mMixerChannelCount, out, outFrames, in, aux,
mPrevVolume, mVolumeInc,
#ifdef FLOAT_AUX
&mPrevAuxLevel, mAuxInc
#else
&prevAuxLevel, auxInc
#endif
);
......
} else {
        //the simpler, non-ramp case first
volumeMulti<MIXTYPE>(mMixerChannelCount, out, outFrames, in, aux,
mVolume,
#ifdef FLOAT_AUX
mAuxLevel
#else
auxLevel
#endif
);
}
} else {
if (ramp) {
volumeRampMulti<MIXTYPE>(mMixerChannelCount, out, outFrames, in, aux,
prevVolume, volumeInc, &prevAuxLevel, auxInc);
if (ADJUSTVOL) {
adjustVolumeRamp(aux != NULL);
}
} else {
volumeMulti<MIXTYPE>(mMixerChannelCount, out, outFrames, in, aux,
volume, auxLevel);
}
}
}
Either way, the call ends up in one of two functions, volumeMulti() or volumeRampMulti(). Note the arguments:
- mMixerChannelCount: this track's mixer-side channel count
- out: destination of the mixed data
- outFrames: number of frames to mix
- in: source data before mixing
- aux: the auxiliary effect send buffer
- volume: the mixing volume (non-ramp case)
- prevVolume: the previous volume (ramp case)
- volumeInc: the per-frame volume increment (ramp case)
(3) Mixing at constant volume: volumeMulti
Now the mixing functions themselves, starting with the constant-volume one:
template <int MIXTYPE,
typename TO, typename TI, typename TV, typename TA, typename TAV>
static void volumeMulti(uint32_t channels, TO* out, size_t frameCount,
const TI* in, TA* aux, const TV *vol, TAV vola)
{
    //dispatch on the channel count; our stereo example takes case 2
switch (channels) {
case 1:
volumeMulti<MIXTYPE, 1>(out, frameCount, in, aux, vol, vola);
break;
case 2:
volumeMulti<MIXTYPE, 2>(out, frameCount, in, aux, vol, vola);
break;
case 3:
volumeMulti<MIXTYPE_MONOVOL(MIXTYPE), 3>(out, frameCount, in, aux, vol, vola);
break;
case 4:
volumeMulti<MIXTYPE_MONOVOL(MIXTYPE), 4>(out, frameCount, in, aux, vol, vola);
break;
case 5:
volumeMulti<MIXTYPE_MONOVOL(MIXTYPE), 5>(out, frameCount, in, aux, vol, vola);
break;
case 6:
volumeMulti<MIXTYPE_MONOVOL(MIXTYPE), 6>(out, frameCount, in, aux, vol, vola);
break;
case 7:
volumeMulti<MIXTYPE_MONOVOL(MIXTYPE), 7>(out, frameCount, in, aux, vol, vola);
break;
case 8:
volumeMulti<MIXTYPE_MONOVOL(MIXTYPE), 8>(out, frameCount, in, aux, vol, vola);
break;
}
}
Follow the 2-channel case. It is still fairly involved, another big switch-case, but the final mixing operation is close now:
template <int MIXTYPE, int NCHAN,
        typename TO, typename TI, typename TV, typename TA, typename TAV>
// in holds the data before mixing; the mixed result goes to out
// vol is the AudioMixer Track's mVolume, an array
// vola is the AudioMixer Track's mAuxLevel
// aux is the AudioMixer Track's auxBuffer
inline void volumeMulti(TO* out, size_t frameCount,
        const TI* in, TA* aux, const TV *vol, TAV vola)
{
#ifdef ALOGVV
    ALOGVV("volumeMulti MIXTYPE:%d\n", MIXTYPE);
#endif
    if (aux != NULL) {
        do {
            TA auxaccum = 0;
            switch (MIXTYPE) {
            case MIXTYPE_MULTI:
                // NCHAN is the channel count
                for (int i = 0; i < NCHAN; ++i) {
                    // out holds the mix accumulated so far; in is this track's data and
                    // vol[i] the per-channel volume. MixMulAux multiplies sample by
                    // volume, and also accumulates the sample into auxaccum through
                    // the pointer argument
                    *out++ += MixMulAux<TO, TI, TV, TA>(*in++, vol[i], &auxaccum);
                }
                break;
            case MIXTYPE_MONOEXPAND:
                // channel expansion: every output channel gets the same mono sample,
                // so in is not advanced inside the loop
                for (int i = 0; i < NCHAN; ++i) {
                    *out++ += MixMulAux<TO, TI, TV, TA>(*in, vol[i], &auxaccum);
                }
                in++;
                break;
            case MIXTYPE_MULTI_SAVEONLY:
                for (int i = 0; i < NCHAN; ++i) {
                    // SAVEONLY overwrites out (=) instead of accumulating (+=);
                    // auxaccum still collects the input samples inside MixMulAux
                    *out++ = MixMulAux<TO, TI, TV, TA>(*in++, vol[i], &auxaccum);
                }
                break;
            case MIXTYPE_MULTI_MONOVOL:
                for (int i = 0; i < NCHAN; ++i) {
                    *out++ += MixMulAux<TO, TI, TV, TA>(*in++, vol[0], &auxaccum);
                }
                break;
            case MIXTYPE_MULTI_SAVEONLY_MONOVOL:
                for (int i = 0; i < NCHAN; ++i) {
                    *out++ = MixMulAux<TO, TI, TV, TA>(*in++, vol[0], &auxaccum);
                }
                break;
            default:
                LOG_ALWAYS_FATAL("invalid mixtype %d", MIXTYPE);
                break;
            }
            // average the accumulated sample over the channels
            auxaccum /= NCHAN;
            // scale by the aux level and accumulate into the aux (effect send) buffer
            *aux++ += MixMul<TA, TA, TAV>(auxaccum, vola);
        } while (--frameCount);
    } else {
        do {
            switch (MIXTYPE) {
            // the common multi-track mixing case
            case MIXTYPE_MULTI:
                for (int i = 0; i < NCHAN; ++i) {
                    // += accumulates on top of other tracks; vol is the Track's
                    // mVolume member, an array of two (left/right) gains
                    *out++ += MixMul<TO, TI, TV>(*in++, vol[i]);
                }
                break;
            case MIXTYPE_MONOEXPAND:
                for (int i = 0; i < NCHAN; ++i) {
                    *out++ += MixMul<TO, TI, TV>(*in, vol[i]);
                }
                in++;
                break;
            case MIXTYPE_MULTI_SAVEONLY:
                for (int i = 0; i < NCHAN; ++i) {
                    /* sample * vol, with the result kept within the range of the
                     * output type. Note that with NCHAN == 2, out stores the left
                     * and right channel samples interleaved.
                     */
                    *out++ = MixMul<TO, TI, TV>(*in++, vol[i]);
                }
                break;
            case MIXTYPE_MULTI_MONOVOL:
                for (int i = 0; i < NCHAN; ++i) {
                    *out++ += MixMul<TO, TI, TV>(*in++, vol[0]);
                }
                break;
            case MIXTYPE_MULTI_SAVEONLY_MONOVOL:
                for (int i = 0; i < NCHAN; ++i) {
                    *out++ = MixMul<TO, TI, TV>(*in++, vol[0]);
                }
                break;
            default:
                LOG_ALWAYS_FATAL("invalid mixtype %d", MIXTYPE);
                break;
            }
        } while (--frameCount);
    }
}
inline TO MixMulAux(TI value, TV volume, TA *auxaccum) {
MixAccum<TA, TI>(auxaccum, value);
return MixMul<TO, TI, TV>(value, volume);
}
// MixMul is sample * volume; the many overloads handle the different fixed-point
// formats and keep the result within range
template <>
inline int32_t MixMul<int32_t, int16_t, int16_t>(int16_t value, int16_t volume) {
return value * volume;
}
template <>
inline int32_t MixMul<int32_t, int32_t, int16_t>(int32_t value, int16_t volume) {
return (value >> 12) * volume;
}
template <>
inline int32_t MixMul<int32_t, int16_t, int32_t>(int16_t value, int32_t volume) {
return value * (volume >> 16);
}
......
That may be a lot to digest; to summarize the flow: getTrackHook picks each track's hook, process__validate picks mHook, and mHook drives the per-track hooks, each of which multiplies samples by the volume and accumulates them into the output buffer.
(4) Mixing with ramped volume: volumeRampMulti
The difference from constant-volume mixing lies in AudioMixerOps' volumeRampMulti(); the key fragment:
for (int i = 0; i < NCHAN; ++i) {
    *out++ += MixMulAux<TO, TI, TV, TA>(*in++, vol[i], &auxaccum);
    // each frame the volume grows by volinc, the volumeInc computed in setParameter
    vol[i] += volinc[i];
}
So on the ramp path, the mixed volume fades smoothly during playback.
(5) Determining AudioMixer's mHook
That completes a single track's mixing, but we have not yet seen who actually invokes the per-track hook. For that, return to AudioMixer's mHook: process__validate left it as one of
// resampling needed
mHook = &AudioMixer::process__genericResampling;
// no resampling needed
mHook = &AudioMixer::process__genericNoResampling;
Let's take them in turn.
(6) No resampling: process__genericNoResampling
void AudioMixer::process__genericNoResampling()
{
    ALOGVV("process__genericNoResampling\n");
    int32_t outTemp[BLOCKSIZE * MAX_NUM_CHANNELS] __attribute__((aligned(32)));
    // iterate mGroups, keyed by mainBuffer
    for (const auto &pair : mGroups) {
        const auto &group = pair.second;
        // group is a vector of tracks that share the same output mainBuffer
        for (const int name : group) {
            const std::shared_ptr<Track> &t = mTracks[name];
            t->buffer.frameCount = mFrameCount;
            // fetch audio data from the AudioTrack client's shared memory
            t->bufferProvider->getNextBuffer(&t->buffer);
            t->frameCount = t->buffer.frameCount;
            t->mIn = t->buffer.raw;
        }
        // out is the group's mainBuffer, i.e. the post-mix output address
        int32_t *out = (int *)pair.first;
        size_t numFrames = 0;
        do {
            const size_t frameCount = std::min((size_t)BLOCKSIZE, mFrameCount - numFrames);
            // outTemp is a temporary mix buffer; the data is moved to out below
            memset(outTemp, 0, sizeof(outTemp));
            for (const int name : group) {
                const std::shared_ptr<Track> &t = mTracks[name];
                int32_t *aux = NULL;
                // aux points into auxBuffer; mixing copies a fraction of the signal there
                if (CC_UNLIKELY(t->needs & NEEDS_AUX)) {
                    aux = t->auxBuffer + numFrames;
                }
                for (int outFrames = frameCount; outFrames > 0; ) {
                    if (t->mIn == nullptr) {
                        break;
                    }
                    // inFrames is the number of frames this hook invocation will mix
                    size_t inFrames = (t->frameCount > outFrames)?outFrames:t->frameCount;
                    if (inFrames > 0) {
                        // mIn is not passed to the hook because the hook is a member
                        // function of Track itself: it reads mIn directly, mixes the
                        // raw data into outTemp and thereby completes the transfer.
                        // This hook is the per-track mixing function that does the
                        // multi-track accumulation.
                        (t.get()->*t->hook)(
                                outTemp + (frameCount - outFrames) * t->mMixerChannelCount,
                                inFrames, mResampleTemp.get() /* naked ptr */, aux);
                        t->frameCount -= inFrames;
                        outFrames -= inFrames;
                        if (CC_UNLIKELY(aux != NULL)) {
                            aux += inFrames;
                        }
                    }
                    // buffer drained: release it and fetch the next one
                    if (t->frameCount == 0 && outFrames) {
                        t->bufferProvider->releaseBuffer(&t->buffer);
                        t->buffer.frameCount = (mFrameCount - numFrames) -
                                (frameCount - outFrames);
                        t->bufferProvider->getNextBuffer(&t->buffer);
                        t->mIn = t->buffer.raw;
                        if (t->mIn == nullptr) {
                            break;
                        }
                        t->frameCount = t->buffer.frameCount;
                    }
                }
            }
            const std::shared_ptr<Track> &t1 = mTracks[group[0]];
            // convert and copy the mixed block into out, the final output address
            convertMixerFormat(out, t1->mMixerFormat, outTemp, t1->mMixerInFormat,
                    frameCount * t1->mMixerChannelCount);
            // advance the output pointer past the block just written
            out = reinterpret_cast<int32_t*>((uint8_t*)out
                    + frameCount * t1->mMixerChannelCount
                    * audio_bytes_per_sample(t1->mMixerFormat));
            numFrames += frameCount;
        } while (numFrames < mFrameCount);
        // release every track's buffer
        for (const int name : group) {
            const std::shared_ptr<Track> &t = mTracks[name];
            t->bufferProvider->releaseBuffer(&t->buffer);
        }
    }
}
The flow is simple:
First, fetch the client's audio data with getNextBuffer().
Second, call each track's hook to mix, accumulating the result in the scratch buffer outTemp.
Finally, convert and move the mixed data from outTemp into out, i.e. the group's post-mix mainBuffer address.
(7) With resampling: process__genericResampling
void AudioMixer::process__genericResampling()
{
    // mOutputTemp is the scratch buffer allocated in process__validate
    int32_t * const outTemp = mOutputTemp.get(); // naked ptr
    size_t numFrames = mFrameCount;
    for (const auto &pair : mGroups) {
        const auto &group = pair.second;
        // t1 is the first Track of the group
        const std::shared_ptr<Track> &t1 = mTracks[group[0]];
        // clear the outTemp scratch buffer
        memset(outTemp, 0, sizeof(*outTemp) * t1->mMixerChannelCount * mFrameCount);
        for (const int name : group) {
            const std::shared_ptr<Track> &t = mTracks[name];
            int32_t *aux = NULL;
            if (CC_UNLIKELY(t->needs & NEEDS_AUX)) {
                aux = t->auxBuffer;
            }
            // if the track does need resampling, call the hook directly: the
            // resampling hook fetches its own input data internally
            if (t->needs & NEEDS_RESAMPLE) {
                (t.get()->*t->hook)(outTemp, numFrames, mResampleTemp.get() /* naked ptr */, aux);
            } else { // no resampling for this track
                size_t outFrames = 0;
                // fetch the data ourselves
                while (outFrames < numFrames) {
                    t->buffer.frameCount = numFrames - outFrames;
                    t->bufferProvider->getNextBuffer(&t->buffer);
                    t->mIn = t->buffer.raw;
                    // t->mIn == nullptr can happen if the track was flushed just after having
                    // been enabled for mixing.
                    if (t->mIn == nullptr) break;
                    // mix
                    (t.get()->*t->hook)(
                            outTemp + outFrames * t->mMixerChannelCount, t->buffer.frameCount,
                            mResampleTemp.get() /* naked ptr */,
                            aux != nullptr ? aux + outFrames : nullptr);
                    outFrames += t->buffer.frameCount;
                    // release the buffer
                    t->bufferProvider->releaseBuffer(&t->buffer);
                }
            }
        }
        // convert the mixed data into t1's mainBuffer; every track in the
        // group shares the same mainBuffer
        convertMixerFormat(t1->mainBuffer, t1->mMixerFormat,
                outTemp, t1->mMixerInFormat, numFrames * t1->mMixerChannelCount);
    }
}
The function is straightforward; the inline comments cover it.
(8) Mixing summary
The mixing call chain follows from the functions above. Note that mHook and the per-track hooks can take values beyond those shown here; these are just the candidates on the mixing path.
Two decisions dominate that path: first whether resampling is needed, then whether the volume is constant or ramped. Additionally, if a track has fewer channels than the output, its channels are expanded during mixing.
(9) Thoughts on the hook pattern
Seen for the first time, this hook style is a neat design: re-pointing function pointers switches the business logic, decoupling the different paths within the module. The code is loosely organized, yet the related paths stay tightly connected.
Without hooks, the combinations above would need if-else checks on every call to pick a path; with hooks, the choice is bound to the pointer once up front, and a single indirect call does the rest.
2.7.2.3 threadLoop_write()
threadLoop_write() writes out the mixed audio:
ssize_t AudioFlinger::MixerThread::threadLoop_write()
{
    if (mFastMixer != 0) {
        //... FastMixer handling
    }
    return PlaybackThread::threadLoop_write();
}
继续分析PlaybackThread::threadLoop_write(),代码实现如下:
// shared by MIXER and DIRECT, overridden by DUPLICATING
ssize_t AudioFlinger::PlaybackThread::threadLoop_write()
{
// FIXME rewrite to reduce number of system calls
mLastWriteTime = systemTime();
mInWrite = true;
ssize_t bytesWritten;
const size_t offset = mCurrentWriteLength - mBytesRemaining;
// If an NBAIO sink is present, use it to write the normal mixer's submix
if (mNormalSink != 0) {
// write the buffer toward the sound card
ssize_t framesWritten = mNormalSink->write((char *)mSinkBuffer + offset, count);
//...
// otherwise use the HAL / AudioStreamOut directly
} else {
// the FastMixer case actually takes this branch; ignore it for now
// Direct output and offload threads
bytesWritten = mOutput->stream->write(mOutput->stream,
(char *)mSinkBuffer + offset, mBytesRemaining);
//...
}
//...
mNumWrites++;
mInWrite = false;
mStandby = false;
return bytesWritten; // amount of audio data written out
}
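The offset bookkeeping in threadLoop_write() is easy to miss: a mix result of mCurrentWriteLength bytes may take several passes to push out, and each pass resumes at offset = mCurrentWriteLength - mBytesRemaining. Below is a minimal model of just that arithmetic; SinkModel and its names are ours, not AOSP code.

```cpp
#include <cstddef>

// Toy model of the partial-write bookkeeping in threadLoop_write().
struct SinkModel {
    size_t currentWriteLength = 0; // total bytes produced by this mix pass
    size_t bytesRemaining = 0;     // bytes not yet accepted by the sink

    void newMixResult(size_t len) { currentWriteLength = len; bytesRemaining = len; }

    // One write pass: returns the offset into mSinkBuffer where this pass
    // starts, then accounts for how many bytes the sink actually accepted.
    size_t writePass(size_t accepted) {
        size_t offset = currentWriteLength - bytesRemaining;
        bytesRemaining -= accepted;
        return offset;
    }
};
```

The thread keeps calling writePass until bytesRemaining reaches zero, which is why a short write by the sink is not an error: the next loop iteration simply resumes further into mSinkBuffer.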
2.7.3 DirectOutputThread
2.7.3.1 threadLoop_mix()
The server-side read is in many ways the mirror image of the client-side write; it obtains the front pointer from the cblk:
void AudioFlinger::DirectOutputThread::threadLoop_mix()
{
size_t frameCount = mFrameCount;
int8_t *curBuf = (int8_t *)mSinkBuffer;
// output audio to hardware
while (frameCount) {
AudioBufferProvider::Buffer buffer;
buffer.frameCount = frameCount;
status_t status = mActiveTrack->getNextBuffer(&buffer);
if (status != NO_ERROR || buffer.raw == NULL) {
// no need to pad with 0 for compressed audio
if (audio_has_proportional_frames(mFormat)) {
memset(curBuf, 0, frameCount * mFrameSize);
}
break;
}
// copy the audio data into curBuf, i.e. into mSinkBuffer
memcpy(curBuf, buffer.raw, buffer.frameCount * mFrameSize);
frameCount -= buffer.frameCount;
curBuf += buffer.frameCount * mFrameSize;
mActiveTrack->releaseBuffer(&buffer);
}
mCurrentWriteLength = curBuf - (int8_t *)mSinkBuffer;
mSleepTimeUs = 0;
mStandbyTimeNs = systemTime() + mStandbyDelayNs;
mActiveTrack.clear();
}
The thread here is a DirectOutputThread, which carries exactly one track, hence the singular mActiveTrack. A MixerThread, by contrast, keeps multiple tracks in mActiveTracks and additionally has to mix them.
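Both MixerThread and DirectOutputThread consume track data through the same getNextBuffer()/releaseBuffer() protocol. The sketch below models that consumption loop; Provider and drain are illustrative stand-ins, not the real AudioBufferProvider interface. The provider hands out bounded contiguous chunks, much like a ring buffer near its wrap point.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative stand-in for AudioBufferProvider: hands out at most `chunk`
// contiguous frames per call.
struct Provider {
    std::vector<int16_t> data;
    size_t pos = 0;
    size_t chunk = 0;

    const int16_t* getNextBuffer(size_t* frames) {
        size_t avail = std::min({chunk, *frames, data.size() - pos});
        *frames = avail;
        return avail ? data.data() + pos : nullptr;
    }
    void releaseBuffer(size_t frames) { pos += frames; }
};

// Consumption loop in the shape of threadLoop_mix(): keep asking for chunks
// until the target frame count is reached or the source runs dry.
size_t drain(Provider& p, int16_t* dst, size_t frameCount) {
    size_t done = 0;
    while (done < frameCount) {
        size_t frames = frameCount - done;
        const int16_t* src = p.getNextBuffer(&frames);
        if (src == nullptr || frames == 0) break; // underrun: stop early
        std::memcpy(dst + done, src, frames * sizeof(int16_t));
        done += frames;
        p.releaseBuffer(frames); // advance the read position
    }
    return done;
}
```
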
2.7.3.2 threadLoop_write()
The above is a simple read: find the memory that already contains written data and copy it out into the mSinkBuffer member; threadLoop_write() then writes that data onward:
ssize_t AudioFlinger::PlaybackThread::threadLoop_write()
{
LOG_HIST_TS();
mInWrite = true;
ssize_t bytesWritten;
const size_t offset = mCurrentWriteLength - mBytesRemaining;
// If an NBAIO sink is present, use it to write the normal mixer's submix
if (mNormalSink != 0) {
const size_t count = mBytesRemaining / mFrameSize;
ATRACE_BEGIN("write");
// update the setpoint when AudioFlinger::mScreenState changes
uint32_t screenState = AudioFlinger::mScreenState;
if (screenState != mScreenState) {
mScreenState = screenState;
MonoPipe *pipe = (MonoPipe *)mPipeSink.get();
if (pipe != NULL) {
pipe->setAvgFrames((mScreenState & 1) ?
(pipe->maxFrames() * 7) / 8 : mNormalFrameCount * 2);
}
}
ssize_t framesWritten = mNormalSink->write((char *)mSinkBuffer + offset, count);
ATRACE_END();
if (framesWritten > 0) {
bytesWritten = framesWritten * mFrameSize;
#ifdef TEE_SINK
mTee.write((char *)mSinkBuffer + offset, framesWritten);
#endif
} else {
bytesWritten = framesWritten;
}
// otherwise use the HAL / AudioStreamOut directly
} else {
// Direct output and offload threads
if (mUseAsyncWrite) {
ALOGW_IF(mWriteAckSequence & 1, "threadLoop_write(): out of sequence write request");
mWriteAckSequence += 2;
mWriteAckSequence |= 1;
ALOG_ASSERT(mCallbackThread != 0);
mCallbackThread->setWriteBlocked(mWriteAckSequence);
}
// FIXME We should have an implementation of timestamps for direct output threads.
// They are used e.g. for multichannel PCM playback over HDMI.
// mSinkBuffer now holds the audio data; here it is written out via mOutput.
bytesWritten = mOutput->write((char *)mSinkBuffer + offset, mBytesRemaining);
if (mUseAsyncWrite &&
((bytesWritten < 0) || (bytesWritten == (ssize_t)mBytesRemaining))) {
// do not wait for async callback in case of error of full write
mWriteAckSequence &= ~1;
ALOG_ASSERT(mCallbackThread != 0);
mCallbackThread->setWriteBlocked(mWriteAckSequence);
}
}
mNumWrites++;
mInWrite = false;
mStandby = false;
return bytesWritten;
}
The audio data has been written into mSinkBuffer, and from here it starts heading to the HAL. A PlaybackThread may output through three members: mOutput, mPipeSink and mNormalSink. mOutput writes to the HAL via shared memory, mPipeSink writes through a pipe (MonoPipe), and mNormalSink simply points at one of the other two depending on the configuration.
Here we only follow the mOutput shared-memory write path.
3. Data transfer at the HAL layer
3.1 Writing audio data to the HAL layer
From the earlier analysis of the audio output-open flow, the mOutput member is of type AudioStreamOut, and the downstream references it holds are shown in the figure below:
The call chain is:
AudioStreamOut::write() -> StreamOutHalHidl::write(); look directly at the latter:
status_t StreamOutHalHidl::write(const void *buffer, size_t bytes, size_t *written) {
......
status_t status;
// mDataMQ is the shared-memory message queue
if (!mDataMQ) {
// query the buffer size
size_t bufferSize;
if ((status = getCachedBufferSize(&bufferSize)) != OK) {
return status;
}
if (bytes > bufferSize) bufferSize = bytes;
// set up the queues / shared memory
if ((status = prepareForWriting(bufferSize)) != OK) {
return status;
}
}
// perform the actual write to the HAL
status = callWriterThread(
WriteCommand::WRITE, "write", static_cast<const uint8_t*>(buffer), bytes,
[&] (const WriteStatus& writeStatus) {
*written = writeStatus.reply.written;
// Diagnostics of the cause of b/35813113.
ALOGE_IF(*written > bytes,
"hal reports more bytes written than asked for: %lld > %lld",
(long long)*written, (long long)bytes);
});
mStreamPowerLog.log(buffer, *written);
return status;
}
------------------------------------------------------------------------------
status_t StreamOutHalHidl::callWriterThread(
WriteCommand cmd, const char* cmdName,
const uint8_t* data, size_t dataSize, StreamOutHalHidl::WriterCallback callback) {
// write the command
if (!mCommandMQ->write(&cmd)) {
ALOGE("command message queue write failed for \"%s\"", cmdName);
return -EAGAIN;
}
if (data != nullptr) {
size_t availableToWrite = mDataMQ->availableToWrite();
if (dataSize > availableToWrite) {
ALOGW("truncating write data from %lld to %lld due to insufficient data queue space",
(long long)dataSize, (long long)availableToWrite);
dataSize = availableToWrite;
}
// write the data into shared memory
if (!mDataMQ->write(data, dataSize)) {
ALOGE("data message queue write failed for \"%s\"", cmdName);
}
}
mEfGroup->wake(static_cast<uint32_t>(MessageQueueFlagBits::NOT_EMPTY));
......
}
So the data is written through mDataMQ. Where does mDataMQ come from? It is created in prepareForWriting():
status_t StreamOutHalHidl::prepareForWriting(size_t bufferSize) {
std::unique_ptr<CommandMQ> tempCommandMQ;
std::unique_ptr<DataMQ> tempDataMQ;
std::unique_ptr<StatusMQ> tempStatusMQ;
Result retval;
pid_t halThreadPid, halThreadTid;
Return<void> ret = mStream->prepareForWriting(
1, bufferSize,
[&](Result r,
const CommandMQ::Descriptor& commandMQ,
const DataMQ::Descriptor& dataMQ,
const StatusMQ::Descriptor& statusMQ,
const ThreadInfo& halThreadInfo) {
retval = r;
if (retval == Result::OK) {
tempCommandMQ.reset(new CommandMQ(commandMQ));
tempDataMQ.reset(new DataMQ(dataMQ));
....
}
});
mCommandMQ = std::move(tempCommandMQ);
mDataMQ = std::move(tempDataMQ);
mStatusMQ = std::move(tempStatusMQ);
mWriterClient = gettid();
return OK;
}
This ends up calling the prepareForWriting method in hardware/interfaces/audio/core/all-versions/default/StreamOut.cpp, via a HIDL cross-process call of course. On success the dataMQ descriptor is returned and assigned to mDataMQ for subsequent data writes:
Return<void> StreamOut::prepareForWriting(uint32_t frameSize, uint32_t framesCount,
prepareForWriting_cb _hidl_cb) {
.......
// create the shared-memory data queue
std::unique_ptr<DataMQ> tempDataMQ(new DataMQ(frameSize * framesCount, true /* EventFlag */));
......
// create the thread that will consume audio data written from the framework side
auto tempWriteThread =
std::make_unique<WriteThread>(&mStopWriteThread, mStream, tempCommandMQ.get(),
tempDataMQ.get(), tempStatusMQ.get(), tempElfGroup.get());
if (!tempWriteThread->init()) {
ALOGW("failed to start writer thread: %s", strerror(-status));
sendError(Result::INVALID_ARGUMENTS);
return Void();
}
status = tempWriteThread->run("writer", PRIORITY_URGENT_AUDIO);
mCommandMQ = std::move(tempCommandMQ);
mDataMQ = std::move(tempDataMQ);
.......
// return the descriptors to the framework via the callback
_hidl_cb(Result::OK, *mCommandMQ->getDesc(), *mDataMQ->getDesc(), *mStatusMQ->getDesc(),
threadInfo);
return Void();
}
The DataMQ constructor ultimately allocates the backing ashmem region:
template <typename T, MQFlavor flavor>
MessageQueue<T, flavor>::MessageQueue(size_t numElementsInQueue, bool configureEventFlagWord) {
......
/*
 * Create the anonymous shared memory region.
 */
int ashmemFd = ashmem_create_region("MessageQueue", kAshmemSizePageAligned);
ashmem_set_prot_region(ashmemFd, PROT_READ | PROT_WRITE);
/*
* The native handle will contain the fds to be mapped.
*/
native_handle_t* mqHandle =
native_handle_create(1 /* numFds */, 0 /* numInts */);
if (mqHandle == nullptr) {
return;
}
mqHandle->data[0] = ashmemFd;
mDesc = std::unique_ptr<Descriptor>(new (std::nothrow) Descriptor(kQueueSizeBytes,
mqHandle,
sizeof(T),
configureEventFlagWord));
if (mDesc == nullptr) {
return;
}
initMemory(true);
}
So the anonymous shared memory is created on the HAL side, and its descriptor is passed back to the framework's StreamOutHalHidl, which simply writes into it during write(). Note that creating the queues also spawns a WriteThread to consume the audio data. This is summarized in the figure below:
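At its core the FMQ data path is one shared byte region indexed by two monotonically increasing counters. The toy queue below models only that index arithmetic; it is not the real android::hardware::MessageQueue, which keeps the counters and payload in the ashmem region and uses an EventFlag for cross-process wakeups.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy single-producer/single-consumer byte queue modeling the DataMQ index
// logic (in a real FMQ the counters and bytes live in shared memory).
class ToyDataMQ {
public:
    explicit ToyDataMQ(size_t capacity) : mBuf(capacity) {}

    size_t availableToWrite() const { return mBuf.size() - (mWrite - mRead); }
    size_t availableToRead()  const { return mWrite - mRead; }

    bool write(const uint8_t* data, size_t n) {
        if (n > availableToWrite()) return false;
        for (size_t i = 0; i < n; ++i)
            mBuf[(mWrite + i) % mBuf.size()] = data[i];
        mWrite += n; // publish only after the bytes are in place
        return true;
    }
    bool read(uint8_t* out, size_t n) {
        if (n > availableToRead()) return false;
        for (size_t i = 0; i < n; ++i)
            out[i] = mBuf[(mRead + i) % mBuf.size()];
        mRead += n; // frees space for the writer
        return true;
    }

private:
    std::vector<uint8_t> mBuf;
    // Counters grow without bound; their difference never exceeds capacity.
    std::atomic<uint64_t> mWrite{0}, mRead{0};
};
```

Because each side only ever advances its own counter, the writer and reader can run in different processes without a shared lock, which is exactly why FMQ suits this audio path.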
3.2 Reading audio data in the HAL layer
From the summary figure in the previous subsection, the HAL side runs a WriteThread, and unsurprisingly this thread is where the audio data gets read. Its threadLoop():
bool WriteThread::threadLoop() {
while (!std::atomic_load_explicit(mStop, std::memory_order_acquire)) {
uint32_t efState = 0;
mEfGroup->wait(static_cast<uint32_t>(MessageQueueFlagBits::NOT_EMPTY), &efState);
if (!(efState & static_cast<uint32_t>(MessageQueueFlagBits::NOT_EMPTY))) {
continue; // Nothing to do.
}
if (!mCommandMQ->read(&mStatus.replyTo)) {
continue; // Nothing to do.
}
switch (mStatus.replyTo) {
// the framework side sends WRITE
case IStreamOut::WriteCommand::WRITE:
doWrite();
break;
case IStreamOut::WriteCommand::GET_PRESENTATION_POSITION:
doGetPresentationPosition();
break;
case IStreamOut::WriteCommand::GET_LATENCY:
doGetLatency();
break;
default:
ALOGE("Unknown write thread command code %d", mStatus.replyTo);
mStatus.retval = Result::NOT_SUPPORTED;
break;
}
.....
}
return false;
}
------------------------------------------------------------------------------
void WriteThread::doWrite() {
const size_t availToRead = mDataMQ->availableToRead();
mStatus.retval = Result::OK;
mStatus.reply.written = 0;
// read the data out of the mDataMQ shared memory; this is still the HIDL server side, outside the HAL proper
if (mDataMQ->read(&mBuffer[0], availToRead)) {
// then hand it to the HAL; mStream was obtained from the HAL's adev_open_output_stream
ssize_t writeResult = mStream->write(mStream, &mBuffer[0], availToRead);
if (writeResult >= 0) {
mStatus.reply.written = writeResult;
} else {
mStatus.retval = Stream::analyzeStatus("write", writeResult);
}
}
}
At this point the audio data buffer has been read out on the HIDL server side and handed straight to mStream to be written into the HAL. So what is mStream?
3.3 Writing audio data from the HAL into the kernel driver
Continuing from above: mStream is actually an audio_stream_out. Since every vendor implements the HAL differently, the analysis here is based on Qualcomm. The code lives under hardware/qcom/audio/, where audio_hw.c defines the HAL device entry points; adev_open_output_stream creates the audio_stream_out and wires up the operations the stream supports, roughly as follows:
static int adev_open_output_stream(struct audio_hw_device *dev,
audio_io_handle_t handle,
audio_devices_t devices,
audio_output_flags_t flags,
struct audio_config *config,
struct audio_stream_out **stream_out,
const char *address __unused)
{
struct audio_device *adev = (struct audio_device *)dev;
struct stream_out *out;
int i, ret = 0;
bool is_hdmi = devices & AUDIO_DEVICE_OUT_AUX_DIGITAL;
bool is_usb_dev = audio_is_usb_out_device(devices) &&
(devices != AUDIO_DEVICE_OUT_USB_ACCESSORY);
bool force_haptic_path =
property_get_bool("vendor.audio.test_haptic", false);
if (is_usb_dev && !is_usb_ready(adev, true /* is_playback */)) {
return -ENOSYS;
}
*stream_out = NULL;
out = (struct stream_out *)calloc(1, sizeof(struct stream_out));
......
out->stream.set_callback = out_set_callback;
out->stream.pause = out_pause;
out->stream.resume = out_resume;
out->stream.drain = out_drain;
out->stream.flush = out_flush;
out->stream.write = out_write;
......
*stream_out = &out->stream;
}
The subsequent flow is relatively involved. In essence it open()s the /dev/snd/pcmCxDx device node and writes the buffer to it via ioctl. The write is done by the pcm_write function in hardware/qcom/audio/legacy/libalsa-intf/alsa_pcm.c, which pushes the audio data into the driver. This is Qualcomm's wrapper around the ALSA framework; other vendors follow roughly the same path.
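One small but recurring detail on this last hop: the upper layers count in bytes, while the ALSA interface counts in frames (one sample per channel). A sketch of that conversion, with helper names of our own choosing:

```cpp
#include <cstddef>

// One frame = one sample per channel; e.g. 16-bit stereo PCM -> 4 bytes/frame.
constexpr size_t frameSize(unsigned channels, unsigned bytesPerSample) {
    return channels * bytesPerSample;
}

// Convert a byte count (AudioFlinger side) to a frame count (ALSA side).
constexpr size_t bytesToFrames(size_t bytes, unsigned channels, unsigned bytesPerSample) {
    return bytes / frameSize(channels, bytesPerSample);
}
```
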