FFMPEG Pitfall Journey (3): Video Playback (2) - Frame Display

The previous post covered decoding, so at this point we already have the decoded Frame data.

This post covers how the video picture is processed and displayed.

First, prepare a canvas to draw on.

// Use the decoded video dimensions as the buffer geometry
AVCodecContext *codec_context = player->video_codec_context;
int videoWidth = codec_context->width;
int videoHeight = codec_context->height;
// Wrap the Java Surface in an ANativeWindow so native code can render into it
player->native_window = ANativeWindow_fromSurface(env, player->surface);
if (player->native_window == NULL) {
    LOGE("Player Error : Can not create native window");
    return FAIL_CODE;
}
// Request an RGBA_8888 buffer that matches the video size
int result = ANativeWindow_setBuffersGeometry(player->native_window, videoWidth, videoHeight, WINDOW_FORMAT_RGBA_8888);
if (result < 0) {
    LOGE("Player Error : Can not set native window buffer");
    ANativeWindow_release(player->native_window);
    return FAIL_CODE;
}

Here an ANativeWindow is created from the Surface passed in from the Java layer; this is the native window that the rendered frames will be posted to.
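For context, here is a minimal sketch of how that Surface might arrive from Java. The JNI function name, the Player struct and the get_player_instance() accessor are assumptions for illustration, not part of the original code:

#include <jni.h>

// Hypothetical JNI entry point: keep a global reference to the Surface
// so it stays valid while the native player uses it.
JNIEXPORT void JNICALL
Java_com_example_player_NativePlayer_setSurface(JNIEnv *env, jobject thiz, jobject surface) {
    Player *player = get_player_instance(); // assumed accessor for the player state
    player->surface = (*env)->NewGlobalRef(env, surface);
}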

// Allocate the destination frame and the pixel buffer it will point into
player->rgba_frame = av_frame_alloc();
int buffer_size = av_image_get_buffer_size(AV_PIX_FMT_RGBA, videoWidth, videoHeight, 1);
player->video_out_buffer = (uint8_t *) av_malloc(buffer_size * sizeof(uint8_t));
// Bind the buffer to rgba_frame->data / linesize with the RGBA layout
av_image_fill_arrays(player->rgba_frame->data, player->rgba_frame->linesize, player->video_out_buffer, AV_PIX_FMT_RGBA, videoWidth, videoHeight, 1);
// Conversion context: decoder pixel format -> RGBA at the same resolution
player->sws_context = sws_getContext(videoWidth, videoHeight, codec_context->pix_fmt, videoWidth, videoHeight, AV_PIX_FMT_RGBA, SWS_BICUBIC, NULL, NULL, NULL);

This also sets up the rendering objects. rgba_frame is the output frame in RGBA, because the Surface buffer we requested is RGBA_8888. sws_context is the libswscale context that converts the decoder's pixel format (typically YUV) to RGBA; the same call can also scale when the output size differs from the decoded size, which is how the picture would be fitted to the screen.
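These resources need to be released when playback stops. A minimal sketch of the matching cleanup, assuming the same Player fields as above:

// Free everything allocated for rendering, roughly in reverse order of creation
sws_freeContext(player->sws_context);
av_free(player->video_out_buffer);
av_frame_free(&player->rgba_frame);
ANativeWindow_release(player->native_window);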

double audio_clock = player->audio_clock;
double timestamp;
if (packet->pts == AV_NOPTS_VALUE) {
    timestamp = 0;
} else {
    // Best-effort PTS of this frame, converted from stream time_base to seconds
    timestamp = av_frame_get_best_effort_timestamp(frame) * av_q2d(stream->time_base);
}
// Duration of one frame in seconds, extended when fields are repeated
double frame_delay = 1.0 / av_q2d(stream->avg_frame_rate);
frame_delay += frame->repeat_pict * (frame_delay * 0.5);
if (timestamp == 0.0) {
    // No usable timestamp: just wait roughly one frame duration
    usleep((unsigned long) (frame_delay * 1000000));
} else {
    if (fabs(timestamp - audio_clock) > AV_SYNC_THRESHOLD_MIN &&
        fabs(timestamp - audio_clock) < AV_NOSYNC_THRESHOLD) {
        if (timestamp > audio_clock) {
            // Video is ahead of the audio clock: sleep until audio catches up
            usleep((unsigned long) ((timestamp - audio_clock) * 1000000));
        }
    }
}

This is the per-frame timing logic. av_frame_get_best_effort_timestamp(frame) * av_q2d(stream->time_base) gives this frame's best-effort presentation timestamp in seconds, and the subsequent usleep calls set the interval before the next frame is read and displayed, keeping the video close to the audio clock.
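The sync thresholds and audio_clock come from elsewhere in the player. As an illustration, the threshold values below follow the ones ffplay uses, and the audio-side clock update is a sketch assuming the audio decode loop records the PTS of the audio frame it just played (audio_stream is an assumed name):

// Sync tolerances (values borrowed from ffplay): differences smaller than
// AV_SYNC_THRESHOLD_MIN are ignored, differences larger than
// AV_NOSYNC_THRESHOLD mean the clocks have drifted too far to wait on.
#define AV_SYNC_THRESHOLD_MIN 0.04
#define AV_NOSYNC_THRESHOLD 10.0

// In the audio decode loop (sketch): remember, in seconds, the PTS of the
// audio frame that was just sent to the audio device.
if (frame->pts != AV_NOPTS_VALUE) {
    player->audio_clock = frame->pts * av_q2d(audio_stream->time_base);
}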

Once the timing is handled, the frame is written out to the Surface.

int video_height = player->video_codec_context->height;
// Lock the window buffer so we can write pixels into it
ANativeWindow_lock(player->native_window, &(player->window_buffer), NULL);
// Convert the decoded frame to RGBA into rgba_frame / video_out_buffer
sws_scale(
        player->sws_context,
        (const uint8_t *const *) frame->data, frame->linesize,
        0, video_height,
        player->rgba_frame->data, player->rgba_frame->linesize);
// Copy row by row, because the window buffer stride (in pixels) can be
// wider than the frame's linesize (in bytes)
uint8_t *bits = (uint8_t *) player->window_buffer.bits;
for (int h = 0; h < video_height; h++) {
    memcpy(bits + h * player->window_buffer.stride * 4,
           player->video_out_buffer + h * player->rgba_frame->linesize[0],
           player->rgba_frame->linesize[0]);
}
// Unlock and post the buffer so the new frame becomes visible
ANativeWindow_unlockAndPost(player->native_window);

With that, the RGBA data has been written to the Surface and the picture shows up on screen.
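Putting the pieces together, the per-frame handling might be wrapped like this. synchronize_video() and render_video_frame() are hypothetical helpers standing in for the timing code and the sws_scale/memcpy/post code shown above:

// Sketch: called once for every decoded video frame.
void on_video_frame(Player *player, AVPacket *packet, AVFrame *frame, AVStream *stream) {
    synchronize_video(player, packet, frame, stream); // wait according to the audio clock
    render_video_frame(player, frame);                // convert to RGBA and post to the window
}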

That completes the video frame display.