Live-client-6 Live Streaming (1): Implementation


The relevant audio/video background is covered in the previous article: Live-client-6 Live Streaming (0): Audio/Video Fundamentals and Protocols.

1. Architecture Design

First, an overview of the key parts of RTMP live streaming:

  1. Server: nginx with the nginx-rtmp-module acts as the RTMP streaming server, handling client connections, relaying data, and storing live streams. Setting up this server is covered in the article Live-Server-9: Maven Packaging, Deployment + Nginx Server.
  2. Client: the client has far more to handle, split into two parts: pushing (publishing) and pulling (playing).
  • Pushing:
    capture audio from the microphone and encode the PCM data to AAC with faac;
    capture camera frames and encode them to H.264 with x264;
    then mux the AAC audio and H.264 video into FLV over RTMP and send the data to the nginx-rtmp server.
  • Pulling: a player that supports RTMP live streams pulls the data from nginx-rtmp and plays it. Candidate players include Bilibili's ijkplayer and ffmpeg-based players.

    [Figure: push/pull streaming flow]
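As a point of reference, the server side described in step 1 boils down to a short nginx configuration block. The following is a minimal sketch; the application name `live`, the port, and the chunk size are illustrative assumptions, not values taken from the deployment article:

```nginx
rtmp {
    server {
        listen 1935;              # default RTMP port
        chunk_size 4096;
        application live {        # push to rtmp://<host>/live/<stream-key>
            live on;              # enable live streaming
            record off;           # do not persist streams to disk
        }
    }
}
```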

2. Pushing the Stream

Pushing is the most complex part, and also the most extensible. The current FAAC + x264 + RTMP pipeline is only the simplest scheme: the video frames could additionally get portrait enhancement, filters, cropping, or watermarks, and the microphone's PCM audio could get voice changing, pitch shifting, noise reduction, or background sound mixing.

To make this part easier to implement and extend, the pushing code is structured as follows:

[Figure: push-stream code structure]

The overall approach:

  1. Define an abstract class BasePusher with abstract methods that control the stream: start, stop, and release resources.
  2. Define a video pusher VideoPusher and an audio pusher AudioPusher that extend BasePusher and implement those methods.
    VideoPusher controls the camera: turning it on and off, capturing frames, and transcoding;
    AudioPusher controls the microphone: turning it on and off, recording audio, and transcoding.
  3. Define a live pusher LivePusher that drives the video and audio pushers.
  4. The video pusher, audio pusher, and live pusher each hold a PushNative object. That class declares the NDK native methods and drives the native code that performs video encoding, audio encoding, and RTMP streaming.
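As a concrete anchor for step 1, here is a minimal sketch of what BasePusher could look like. The method names follow the text; the `isPushing` flag and the exact signatures are assumptions:

```java
// Sketch of the abstract pusher described in step 1 (assumed shape).
abstract class BasePusher {

    // Whether this pusher is currently pushing; shared with subclasses.
    protected boolean isPushing = false;

    // Start pushing: open the device and begin feeding data downstream.
    public abstract void startPusher();

    // Stop pushing, but keep resources so the stream can be resumed.
    public abstract void stopPusher();

    // Release all resources; the pusher cannot be reused afterwards.
    public abstract void release();
}
```

VideoPusher and AudioPusher then differ only in which device they manage and which PushNative methods they feed.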

(1) PushNative

In the code-structure diagram, AudioPusher, VideoPusher, and LivePusher each hold a PushNative object, and this class talks to C/C++ through JNI, so it is clearly the bridge through which Java controls the native code. What does this class need to do?

  • Video encoding: set the video encoding parameters, send video packets
  • Audio encoding: set the audio encoding parameters, send audio packets
  • Streaming: start pushing, stop pushing, release resources

Like BasePusher, PushNative therefore defines the three stream-control methods startPush(), stopPush(), and release(). On top of those, it defines four more methods, setVideoOptions, setAudioOptions, fireVideo, and fireAudio, to configure the audio/video format parameters and send audio/video packets. Finally, it holds a LiveStateChangeListener that surfaces exceptions from the native code; implementing the callback in an Activity is enough to handle native-side errors.

/**
 * Calls into C code for encoding and streaming
 * @author Ljh 2019/6/15 14:07
 */
public class PushNative {

    /**
     * Error codes
     * 
     * CONNECT_FAILED: connection error
     * INIT_FAILED: initialization error
     * WHAT_FAILED: unknown error
     */
    public static final int CONNECT_FAILED = 101;
    public static final int INIT_FAILED = 102;
    public static final int WHAT_FAILED = 103;

    /**
     * Error callback listener
     */
    LiveStateChangeListener liveStateChangeListener;
    
    /**
     * Receives errors thrown by the native layer
     * @param code error code
     */
    public void throwNativeError(int code){
        if(liveStateChangeListener != null) {
            liveStateChangeListener.onError(code);
        }
    }

    /**
     * Start pushing the stream
     * @param url push URL
     */
    public native void startPush(String url);

    /**
     * Stop pushing the stream
     */
    public native void stopPush();

    /**
     * Release resources
     */
    public native void release();

    /**
     * Set the video parameters
     * @param width video width
     * @param height video height
     * @param bitrate bitrate
     * @param fps frame rate
     */
    public native void setVideoOptions(int width, int height, int bitrate, int fps);

    /**
     * Set the audio parameters
     * @param sampleRateInHz sample rate
     * @param channel number of channels
     */
    public native void setAudioOptions(int sampleRateInHz, int channel);

    /**
     * Send video data
     * @param data video data
     * @param width frame width
     * @param height frame height
     */
    public native void fireVideo(byte[] data, int width, int height);

    /**
     * Send audio data
     * @param data audio data
     * @param len data length
     */
    public native void fireAudio(byte[] data, int len);

    public void removeLiveStateChangeListener() {
        this.liveStateChangeListener = null;
    }

    public void setLiveStateChangeListener(LiveStateChangeListener liveStateChangeListener) {
        this.liveStateChangeListener = liveStateChangeListener;
    }

    static {
        System.loadLibrary("faac");
        System.loadLibrary("x2641");
        System.loadLibrary("rtmp");
        System.loadLibrary("native-lib");
    }
}
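The LiveStateChangeListener referenced above is only mentioned in the text; a minimal definition consistent with how throwNativeError() uses it might be the following (an assumed shape, the original source does not show this file):

```java
// Callback for errors raised by the native layer (assumed shape).
// code is one of PushNative.CONNECT_FAILED, INIT_FAILED, or WHAT_FAILED.
interface LiveStateChangeListener {
    void onError(int code);
}
```

An Activity implements onError() and can, for example, show a toast or retry the connection depending on the code.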

(2) LivePusher: the Live Stream Pusher

What does the live pusher need to do? In the code-structure diagram above, LivePusher is a facade for the whole push pipeline: it wraps the required operations into an interface for the Activity and Presenter layers to use.

As mentioned, BasePusher defines start, stop, and release, and VideoPusher, AudioPusher, and PushNative all implement them. Who calls those implementations? Since LivePusher is the wrapper, LivePusher does: it defines the same three methods and delegates to the corresponding methods of each pusher. But if we just call startPush() directly, how does the pipeline know which audio/video encoding and muxing formats to use? Configuring them on every push would be wasteful: if a stream is paused halfway and resumed a few minutes later, there is no need to reconfigure the formats. So LivePusher configures the encoding parameters up front in a prepare() method, which calls setVideoOptions/setAudioOptions on PushNative.

During a broadcast the streamer may also want to switch cameras to show the audience their finely crafted face, so LivePusher exposes a switch-camera operation to the layer above. It also needs to display the camera preview on screen, so it takes a TextureView from the Activity and hands it down to VideoPusher for processing.

public class LivePusher2 implements TextureView.SurfaceTextureListener {

    private TextureView textureView;
    private VideoPusher2 videoPusher;
    private AudioPusher audioPusher;
    private PushNative pushNative;

    public LivePusher2(TextureView textureView, Context context) {
        this.textureView = textureView;
        textureView.setSurfaceTextureListener(this);
        prepare(context);
    }

    //Prepare the audio and video pushers
    private void prepare(Context context) {
        pushNative = new PushNative();
        //Create the video pusher
        VideoParam videoParam = new VideoParam(480, 360, Camera.CameraInfo.CAMERA_FACING_BACK);

        videoPusher = new VideoPusher2(textureView, videoParam, pushNative, context);

        //Create the audio pusher
        AudioParam audioParam = new AudioParam();
        audioPusher = new AudioPusher(audioParam, pushNative);
    }

    /**
     * Switch cameras
     */
    public void switchCamera() {
        videoPusher.switchCamera();
    }

    /**
     * Start pushing the stream
     *
     * @param url push server URL
     */
    public void startPush(final String url, LiveStateChangeListener liveStateChangeListener) {
        //Register the listener first, so connection errors are not missed
        pushNative.setLiveStateChangeListener(liveStateChangeListener);
        pushNative.startPush(url);
        videoPusher.startPusher();
        audioPusher.startPusher();
    }

    /**
     * Stop pushing the stream
     */
    public void stopPush() {
        videoPusher.stopPusher();
        audioPusher.stopPusher();
        pushNative.stopPush();
        pushNative.removeLiveStateChangeListener();
    }

    /**
     * Release resources
     */
    public void release() {
        videoPusher.release();
        audioPusher.release();
        pushNative.release();
    }

    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {

    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {

    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        stopPush();
        release();
        return true;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {

    }
}

(3) Audio Streaming and AudioPusher

Audio streaming works like this: configure the audio format; capture PCM audio from the microphone; hand it to the native faac library to encode into AAC; then wrap the AAC data into FLV-style RTMP audio packets.

1. Configuring the audio format

As shown above, PushNative has a setAudioOptions method. Configuring the audio format means passing it the sample rate, channel count, and so on, and implementing the native method in JNI. Setting the audio format is essentially configuring the faac encoder.

/**
 * Set the audio parameters
 */
extern "C"
JNIEXPORT void JNICALL
Java_com_ljh_live_jni_PushNative_setAudioOptions(JNIEnv *env, jobject instance, jint sampleRateInHz,
                                                 jint numChannel) {
    audio_encode_handle = faacEncOpen(sampleRateInHz, numChannel, &nInputSamples, &nMaxOutputBytes);
    if (!audio_encode_handle) {
        LOGE("%s", "Failed to open the audio encoder");
        return;
    }
    //Configure the encoder
    faacEncConfigurationPtr p_config = faacEncGetCurrentConfiguration(audio_encode_handle);
    p_config->mpegVersion = MPEG4;
    p_config->allowMidside = 1;
    p_config->aacObjectType = LOW;
    p_config->outputFormat = 0; //whether the output includes ADTS headers
    p_config->useTns = 1; //temporal noise shaping, reduces popping artifacts
    p_config->useLfe = 0;
    p_config->quantqual = 100;
    p_config->bandWidth = 0; //bandwidth
    p_config->shortctl = SHORTCTL_NORMAL;

    if (!faacEncSetConfiguration(audio_encode_handle, p_config)) {
        LOGE("%s", "Failed to configure the audio encoder");
        throwNativeError(env, INIT_FAILED);
        return;
    }
    LOGI("%s", "Audio encoder configured successfully");
}

2. Implementing the push

(1) Java-layer code

Android provides the AudioRecord API, which starts the microphone, records audio, and produces PCM data. Using AudioRecord takes four steps:

  1. Compute the minimum buffer size. It is determined by the sample rate, channel configuration, and sample (quantization) format.
minBufferSize = AudioRecord.getMinBufferSize(audioParam.getSampleRateInHz(), channelConfig, AudioFormat.ENCODING_PCM_16BIT);
  2. Create the AudioRecord object, from the audio source, sample rate, channel configuration, sample (quantization) format, and minimum buffer size.
audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
                audioParam.getSampleRateInHz(),
                channelConfig,
                AudioFormat.ENCODING_PCM_16BIT,
                minBufferSize);
  3. Start recording.
audioRecord.startRecording();
  4. Read the PCM data.
while(true){
    //Keep reading audio data through AudioRecord
    byte[] buffer = new byte[minBufferSize];
    int len = audioRecord.read(buffer, 0, buffer.length);
}

audioRecord.read() blocks the calling thread, so reading in a loop on the UI thread would freeze the UI and trigger an ANR. The read loop therefore has to run on its own thread.

class AudioRecordTask implements Runnable {
        @Override
        public void run() {
            //Start recording
            audioRecord.startRecording();

            while (isPushing) {
                //Keep reading audio data through AudioRecord
                byte[] buffer = new byte[minBufferSize];
                int len = audioRecord.read(buffer, 0, buffer.length);
            }
        }
    }

Since the PCM data is already being read on a worker thread, it can be passed straight to PushNative for encoding and pushing. So the following is added inside the loop:

  if (len > 0) {
        //Hand off to native code for audio encoding
        pushNative.fireAudio(buffer, len);
 }

(2) Native code

Next comes the native side. The native part of audio streaming mainly uses the faac library, the rtmp library, and the FLV format. setAudioOptions already initialized audio_encode_handle (a faacEncHandle), that is, the faac encoder, so incoming data can be AAC-encoded directly, and each encoded frame is then appended to the RTMP message queue.

Each pass reads 16-bit PCM samples from the incoming buffer, widens each sample to an int and shifts it left by 8 bits to match the encoder's expected input sample format, then calls faacEncEncode, passing the encoder handle audio_encode_handle, the converted PCM array pcmbuf, the sample count audioLength (the size of the PCM chunk), the output AAC buffer, and the maximum output byte count.
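The per-sample conversion in that loop can be illustrated on its own: each 16-bit PCM sample becomes a 32-bit int shifted left by 8 bits. Here is a Java rendering of the C loop's conversion, for illustration only; the real code does this in native C:

```java
// Widens 16-bit PCM samples the way the native encode loop does:
// each sample is sign-extended to an int, then shifted left by 8 bits.
static int[] widenPcmSamples(short[] pcm) {
    int[] out = new int[pcm.length];
    for (int i = 0; i < pcm.length; i++) {
        out[i] = pcm[i] << 8; // e.g. 1 -> 256, -1 -> -256
    }
    return out;
}
```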

/**
 * AAC-encode the sampled audio data
 */
extern "C"
JNIEXPORT void JNICALL
Java_com_ljh_live_jni_PushNative_fireAudio(JNIEnv *env, jobject instance, jbyteArray buffer,
                                           jint len) {
    //Converted PCM sample array
    int *pcmbuf;
    //Buffer for the encoded output
    unsigned char *bitbuf;
    jbyte *b_buffer = env->GetByteArrayElements(buffer, NULL);
    if (b_buffer == NULL) {
        LOGI("%s", "Audio data is null");
    }
    pcmbuf = (int *) malloc(nInputSamples * sizeof(int));
    bitbuf = (unsigned char *) malloc(nMaxOutputBytes * sizeof(unsigned char));
    int nByteCount = 0;
    unsigned int nBufferSize = (unsigned int) len / 2;
    unsigned short *buf = (unsigned short *) b_buffer;
    while (nByteCount < nBufferSize) {
        int audioLength = nInputSamples;
        if ((nByteCount + nInputSamples) >= nBufferSize) {
            audioLength = nBufferSize - nByteCount;
        }
        int i;
        for (i = 0; i < audioLength; ++i) {
            //Read one 16-bit PCM sample at a time
            int s = ((int16_t *) buf + nByteCount)[i];
            pcmbuf[i] = s << 8;  //widen the sample by shifting it left 8 bits
        }
        nByteCount += nInputSamples;

        //Encode with FAAC: pcmbuf is the converted PCM array, audioLength is the input
        //sample count obtained from faacEncOpen; bitbuf receives the encoded data and
        //nMaxOutputBytes is the maximum output size obtained from faacEncOpen
        int byteslen = faacEncEncode(audio_encode_handle, pcmbuf, audioLength, bitbuf,
                                     nMaxOutputBytes);
        if (byteslen < 1) {
            continue;
        }
        add_aac_body(bitbuf, byteslen); //take the encoded AAC stream from bitbuf and queue it
    }

    env->ReleaseByteArrayElements(buffer, b_buffer, 0);
    if (bitbuf) {
        free(bitbuf);
    }
    if (pcmbuf) {
        free(pcmbuf);
    }
}

After AAC encoding, the AAC data must be wrapped into an RTMP Packet (this is the key step). Since the audio/video payload RTMP pushes is framed much like FLV, a publisher sending H.264 and AAC live streams must first send the "AVC sequence header" and "AAC sequence header"; without these two packets the decoder cannot decode at all. So the audio pusher first sends the AAC sequence header (its layout is described in the previous article), and only then the AAC audio data.

/**
 * Add the AAC sequence header
 */
void add_aac_sequence_header() {
    //Get the AAC decoder-specific info and its length
    unsigned char *buf;
    unsigned long len; //length
    faacEncGetDecoderSpecificInfo(audio_encode_handle, &buf, &len);
    int body_size = 2 + len;
    RTMPPacket *packet = (RTMPPacket *) malloc(sizeof(RTMPPacket));
    //Initialize the RTMPPacket
    RTMPPacket_Alloc(packet, body_size);
    RTMPPacket_Reset(packet);
    char *body = packet->m_body;
    //Header fields
    // AF 00 + AAC RAW data (FLV tag header)
    body[0] = 0xAF; //SoundFormat(4bits):10=AAC, SoundRate(2bits):3=44kHz, SoundSize(1bit):1=16-bit
    body[1] = 0x00; //AACPacketType: 0 = AAC sequence header
    memcpy(&body[2], buf, len); /*buf holds the AAC sequence header data*/
    packet->m_packetType = RTMP_PACKET_TYPE_AUDIO;   //message type in the RTMP message, 08 = audio
    packet->m_nBodySize = body_size;
    packet->m_nChannel = 0x04;  //chunk stream id, 04 is used for audio and video
    packet->m_hasAbsTimestamp = 0;
    packet->m_nTimeStamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_MEDIUM;
    add_rtmp_packet(packet);
    free(buf);
}

Wrapping the AAC audio data is almost identical to wrapping the AAC sequence header; the only differences are that the packet is marked as AAC raw data and that it carries a timestamp relative to the first audio frame.

/**
 * Add an AAC rtmp packet
 */
void add_aac_body(unsigned char *buf, int len) {
    int body_size = 2 + len;
    RTMPPacket *packet = (RTMPPacket *) malloc(sizeof(RTMPPacket));
    //Initialize the RTMPPacket
    RTMPPacket_Alloc(packet, body_size);
    RTMPPacket_Reset(packet);
    char *body = packet->m_body;
    //Header fields
    /*AF 01 + AAC RAW data*/
    body[0] = 0xAF; //SoundFormat(4bits):10=AAC, SoundRate(2bits):3=44kHz, SoundSize(1bit):1=16-bit samples, SoundType(1bit):1=Stereo
    body[1] = 0x01; //AACPacketType: 1 = AAC raw
    memcpy(&body[2], buf, len); /*buf holds the raw AAC data*/
    packet->m_packetType = RTMP_PACKET_TYPE_AUDIO;
    packet->m_nBodySize = body_size;
    packet->m_nChannel = 0x04;
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nTimeStamp = RTMP_GetTime() - start_time;
    add_rtmp_packet(packet);
}
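The 0xAF value written to body[0] in both functions above packs four FLV audio-tag fields into one byte. A small helper makes the bit layout explicit (illustrative only, not part of the original native code):

```java
// Packs the FLV AudioTagHeader fields into one byte:
// SoundFormat(4 bits) | SoundRate(2 bits) | SoundSize(1 bit) | SoundType(1 bit)
static int flvAudioHeaderByte(int soundFormat, int soundRate, int soundSize, int soundType) {
    return (soundFormat << 4) | (soundRate << 2) | (soundSize << 1) | soundType;
}
```

For AAC (format 10), 44 kHz (rate 3), 16-bit samples (size 1), stereo (type 1), this yields 0xAF, matching body[0] above.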

(4) Video Streaming and VideoPusher

Video streaming is the hardest part of the whole live feature. To restate the pipeline: grab frames from the camera through the Android Camera API (YUV_420_888, YUV_444_888, RGB_565, etc.) -> the frames go two ways: 1. preview on screen, 2. streaming -> for streaming: convert YUV_420_888 to NV21, rotate, mirror, and so on -> pass the data into JNI and encode the processed NV21 frames to H.264 with x264 -> with the H.264 data, determine whether each frame is a keyframe and wrap it into FLV-style RTMP Packets.

Why is this part so hard and complex?

  1. The Android SDK offers 3 camera-related APIs:

    • Camera: a simple interface that is easy to use, but too limited for most needs, and the API is deprecated.
    • Camera2: gives apps lower-level camera control, including efficient burst/stream capture and controls for exposure, gain, white-balance gain, color conversion, denoising, sharpening, and more. It is designed around a pipeline model, requiring CameraManager, CameraCharacteristics, CameraDevice, CameraCaptureSession, and other classes, which makes it feel complex at first.
    • CameraX: introduced alongside AndroidX; documentation is still sparse and I have not studied it closely yet.
  2. Camera2 cannot deliver NV21 frames directly, so the conversion must be done by hand.

  3. There is an angle offset between the device orientation and the preview image orientation, and the front and back cameras differ (the front camera also mirrors the image). The inconsistent angles are not even the main problem: when the device orientation changes, rotating by the computed offset can scramble the preview even further. Implementing this correctly demands very clear reasoning, and the rotation itself can become a performance issue.

  4. Different models and devices support different preview sizes, so streaming the raw frames makes it hard to keep a stable output resolution. (ffmpeg can be used for scaling.)

  5. The Camera HAL has several layers, which must be adapted to per Android version.

  6. Video has frame rate, resolution, bitrate, and other parameters; under weak networks they must be tuned continuously to keep the video smooth and still reasonably sharp.

  7. When wrapping FLV-style RTMP Packets, failing to capture a keyframe makes the video undecodable.
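For difficulty 3, the usual starting point is the rotation formula from the Android Camera documentation: combine the sensor orientation with the display rotation, with the sign flipped for the mirrored front camera. A generic sketch, not code from this project:

```java
// Computes how many degrees a camera frame must be rotated clockwise to appear
// upright, following the formula from the Android Camera documentation.
// displayRotationDegrees is the screen rotation in degrees (0, 90, 180, 270).
static int getRotationDegrees(int sensorOrientation, int displayRotationDegrees, boolean frontFacing) {
    if (frontFacing) {
        // Front camera: the image is mirrored, so the rotations add up;
        // a separate mirror step is still needed afterwards.
        return (sensorOrientation + displayRotationDegrees) % 360;
    }
    // Back camera: compensate for the display rotation.
    return (sensorOrientation - displayRotationDegrees + 360) % 360;
}
```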

Android Camera usage will not be covered in detail here; some recommended reading:

Official Camera documentation

Android Camera2 教程 · 第一章 · 概览 (Android Camera2 Tutorial, Chapter 1: Overview)

Android开发实践:屏幕旋转的处理2 (Android Development in Practice: Handling Screen Rotation, Part 2)

1. Java code

Since Camera development is that complex, the camera operations (opening and closing the camera, rotating and converting frames, and so on) are wrapped in a utility class to keep the code clear and maintainable. As described for LivePusher above, the TextureView that displays the camera preview comes from the Activity, so it must be passed into the utility class as well.

(1) Camera2Utils
public class Camera2Utils {
    //Conversion from screen rotation to JPEG orientation.
    private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
    public static final int REQUEST_CAMERA_PERMISSION = 1;
    public static final String CAMERA_FRONT = "1";
    public static final String CAMERA_BACK = "0";

    //Session for preview, capture, etc.
    private CameraCaptureSession mCaptureSession;
    private CameraManager mCameraManager;
    //Reference to the camera device
    private CameraDevice mCameraDevice;
    //Preview size
    private Size mPreviewSize;
    //An additional thread for tasks that should not block the UI.
    private HandlerThread mBackgroundThread;
    //Handler for running tasks in the background.
    private Handler mBackgroundHandler;
    //An ImageReader that handles still image capture
    private ImageReader mImageReader;
    //Builder for the camera preview request
    private CaptureRequest.Builder mPreviewRequestBuilder;
    //CaptureRequest generated by the builder
    private CaptureRequest mPreviewRequest;
    //A {@link Semaphore} to prevent the app from exiting before the camera is closed.
    private Semaphore mCameraOpenCloseLock = new Semaphore(1);
    //Whether the current camera device supports flash
    private boolean mFlashSupported;
    //Camera sensor orientation
    private int mSensorOrientation;
    //Maximum preview width/height guaranteed by the Camera2 API
    private static final int MAX_PREVIEW_WIDTH = 1280;
    private static final int MAX_PREVIEW_HEIGHT = 720;

    private int mDisplayOrientation;
    
    private Context mContext;
    private TextureView mTextureView;
    
    /*********************parameters to configure: start**********************/
    private OnPreviewFrameCallback previewFrameCallback = null;
    //Size of the frames handed back to the callback
    private Size mDateSize;
    //Camera id
    private String mCameraId = CAMERA_BACK;
    //Whether to use the flash
    private boolean mIsFlash;
    /*********************parameters to configure: end**********************/

    public Camera2Utils(Context context, TextureView textureView) {
        this.mContext = context;
        this.mTextureView = textureView;

        init();
    }

    private void init() {
        //Initialize the background thread
        startBackgroundThread();
    }

    /*******************public methods start*********************/
    //Set the preview-frame callback
    public void setOnPreviewFrameCallback(OnPreviewFrameCallback onPreviewFrameCallback) {
        this.previewFrameCallback = onPreviewFrameCallback;
    }

    //Set whether to use the flash
    public void setFlashSupported(boolean flash) {
        this.mIsFlash = flash;
    }

    //Set the desired preview size (size of the frames handed back)
    public void setDataSize(int width, int height) {
        this.mDateSize = new Size(width, height);
    }

    //Get the actual size of the frames handed back
    public Size getDataSize() {
        return this.mDateSize;
    }

    //Switch cameras
    public void switchCamera() {
        Log.d(TAG, "switchCamera: switching cameras");
        if (mCameraId.equals(CAMERA_FRONT)) {
            mCameraId = CAMERA_BACK;
            closeCamera();
            startPreview();
        } else if (mCameraId.equals(CAMERA_BACK)) {
            mCameraId = CAMERA_FRONT;
            closeCamera();
            startPreview();
        }
    }

    //Start the preview
    public void startPreview() {
        // When the screen is turned off and back on, the SurfaceTexture is already available
        // and "onSurfaceTextureAvailable" will not be called. In that case we can open the
        // camera and start the preview from here (otherwise we wait until the surface is
        // ready in the SurfaceTextureListener).
        if (mTextureView.isAvailable()) {
            Log.d(TAG, "startPreview: opening the camera");
            openCamera(mTextureView.getWidth(), mTextureView.getHeight());
        } else {
            Log.d(TAG, "startPreview: setting the SurfaceTextureListener");
            mTextureView.setSurfaceTextureListener(mSurfaceTextureListener);
        }
    }

    //Close the current camera
    public void closeCamera() {
        try {
            mCameraOpenCloseLock.acquire();
            if (null != mCaptureSession) {
                mCaptureSession.close();
                mCaptureSession = null;
            }
            if (null != mCameraDevice) {
                mCameraDevice.close();
                mCameraDevice = null;
            }
            if (null != mImageReader) {
                mImageReader.close();
                mImageReader = null;
            }
        } catch (InterruptedException e) {
            throw new RuntimeException("Interrupted while trying to lock camera closing.", e);
        } finally {
            mCameraOpenCloseLock.release();
        }
    }

    private MyOrientationDetector myOrientationDetector;
    /***********************public methods end***************************/
    
    //SurfaceTextureListener: handles the SurfaceTexture lifecycle
    private final TextureView.SurfaceTextureListener mSurfaceTextureListener = new TextureView.SurfaceTextureListener() {
        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
            //The texture is available
            //Open the camera
            openCamera(width, height);
            myOrientationDetector = new MyOrientationDetector(mContext);
            myOrientationDetector.enable();
        }

        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
        }

        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
            stopBackgroundThread();
            myOrientationDetector.disable();
            return false;
        }

        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture surface) {

        }
    };

    //Open the camera instance identified by mCameraId
    private void openCamera(int width, int height) {
        //Check permission
        if (ContextCompat.checkSelfPermission(mContext, Manifest.permission.CAMERA)
                != PackageManager.PERMISSION_GRANTED) {
            requestCameraPermission();
            return;
        }
        //Set up the camera-related member variables.
        setUpCameraOutputs(width, height);
        CameraManager manager = (CameraManager) mContext.getSystemService(Context.CAMERA_SERVICE);
        try {
            if (!mCameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
                throw new RuntimeException("Time out waiting to lock camera opening.");
            }
            manager.openCamera(mCameraId, mStateCallback, null);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            throw new RuntimeException("Interrupted while trying to lock camera opening.", e);
        }
    }


    /**
     * Sets up the camera-related member variables
     *
     * @param width  available width for the camera preview
     * @param height available height for the camera preview
     */
    private void setUpCameraOutputs(int width, int height) {
        //Get the CameraManager
        mCameraManager = (CameraManager) mContext.getSystemService(Context.CAMERA_SERVICE);
        try {
            //Iterate over all camera ids
            for (String cameraId : mCameraManager.getCameraIdList()) {
                //Get the camera characteristics
                CameraCharacteristics characteristics = mCameraManager.getCameraCharacteristics(cameraId);

                if ((!cameraId.equals(CAMERA_FRONT) && !cameraId.equals(CAMERA_BACK)) || !cameraId.equals(mCameraId)) {
                    continue;
                }

                StreamConfigurationMap map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
                if (map == null) {
                    continue;
                }
                //For still capture, use the largest available size
                Size largest = Collections.max(Arrays.asList(map.getOutputSizes(ImageFormat.YUV_420_888)), new CompareSizesByArea());

                //Find out whether we need to swap dimensions to get a preview size relative to sensor coordinates.
                WindowManager windowManager = (WindowManager) mContext.getSystemService(Context.WINDOW_SERVICE);
                int displayRotation = windowManager.getDefaultDisplay().getRotation();
                //noinspection ConstantConditions
                mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
                boolean swappedDimensions = false;
                switch (displayRotation) {
                    case Surface.ROTATION_0:
                    case Surface.ROTATION_180:
                        if (mSensorOrientation == 90 || mSensorOrientation == 270) {
                            swappedDimensions = true;
                        }
                        break;
                    case Surface.ROTATION_90:
                    case Surface.ROTATION_270:
                        if (mSensorOrientation == 0 || mSensorOrientation == 180) {
                            swappedDimensions = true;
                        }
                        break;
                    default:
                        Log.e(TAG, "Display rotation is invalid: " + displayRotation);
                }

                Point displaySize = new Point();
                windowManager.getDefaultDisplay().getSize(displaySize);
                int rotatedPreviewWidth = width;
                int rotatedPreviewHeight = height;
                int maxPreviewWidth = displaySize.x;
                int maxPreviewHeight = displaySize.y;

                //Swap width and height
                if (swappedDimensions) {
                    rotatedPreviewWidth = height;
                    rotatedPreviewHeight = width;
                    maxPreviewWidth = displaySize.y;
                    maxPreviewHeight = displaySize.x;
                }

                if (maxPreviewWidth > MAX_PREVIEW_WIDTH) {
                    maxPreviewWidth = MAX_PREVIEW_WIDTH;
                }

                if (maxPreviewHeight > MAX_PREVIEW_HEIGHT) {
                    maxPreviewHeight = MAX_PREVIEW_HEIGHT;
                }

                //Preview size (too large a preview size makes the preview stutter)
                mPreviewSize = getCloselyPreSize(maxPreviewWidth, maxPreviewHeight, map.getOutputSizes(SurfaceTexture.class));
                mDateSize = getCloselyPreSize(mDateSize.getWidth(), mDateSize.getHeight(), map.getOutputSizes(ImageFormat.JPEG));

                mImageReader = ImageReader.newInstance(mDateSize.getWidth(), mDateSize.getHeight(),
                        ImageFormat.YUV_420_888, /*maxImages*/1);

                mImageReader.setOnImageAvailableListener(
                        mOnImageAvailableListener, mBackgroundHandler);

                //Check whether the flash is supported.
                Boolean available = characteristics.get(CameraCharacteristics.FLASH_INFO_AVAILABLE);
                mFlashSupported = available == null ? false : available;

                mCameraId = cameraId;
                return;
            }
        } catch (CameraAccessException e) {
            e.printStackTrace();
        } catch (NullPointerException e) {
            e.printStackTrace();
        }
    }

    protected Size getCloselyPreSize(int surfaceWidth, int surfaceHeight,
                                     Size[] preSizeList) {
        int ReqTmpWidth;
        int ReqTmpHeight;
        ReqTmpWidth = surfaceWidth;
        ReqTmpHeight = surfaceHeight;

        //Collect the supported resolutions that are smaller than the preview surface
        List<Size> notBigEnough = new ArrayList<>();

        for (Size option : preSizeList) {  //classify by the TextureView's width and height
            Log.d(TAG, "getCloselyPreSize: " + option.getWidth() + " " + option.getHeight());
            if (option.getWidth() <= surfaceWidth && option.getHeight() <= surfaceHeight) {
                notBigEnough.add(option);
            }
        }

        //First check whether any supported size exactly matches the surface's width and height
        for (Size size : notBigEnough) {
            if ((size.getWidth() == ReqTmpWidth) && (size.getHeight() == ReqTmpHeight)) {
                return size;
            }
        }

        // Otherwise pick the size whose aspect ratio is closest to the requested one
        float reqRatio = ((float) ReqTmpWidth) / ReqTmpHeight;
        float curRatio, deltaRatio;
        float deltaRatioMin = Float.MAX_VALUE;
        Size retSize = null;
        for (Size size : notBigEnough) {
            curRatio = ((float) size.getWidth()) / size.getHeight();
            deltaRatio = Math.abs(reqRatio - curRatio);
            if (deltaRatio < deltaRatioMin) {
                deltaRatioMin = deltaRatio;
                retSize = size;
            }
        }

        return retSize;
    }

    //StateCallback, invoked when the CameraDevice's state changes
    private final CameraDevice.StateCallback mStateCallback = new CameraDevice.StateCallback() {
        @Override
        public void onOpened(@NonNull CameraDevice cameraDevice) {
            //Called when the camera is opened. We start the camera preview here.
            mCameraOpenCloseLock.release();
            mCameraDevice = cameraDevice;
            //Create the preview session
            createCameraPreviewSession();
        }

        @Override
        public void onDisconnected(@NonNull CameraDevice cameraDevice) {
            mCameraOpenCloseLock.release();
            cameraDevice.close();
            mCameraDevice = null;
        }

        @Override
        public void onError(@NonNull CameraDevice cameraDevice, int error) {
            mCameraOpenCloseLock.release();
            cameraDevice.close();
            mCameraDevice = null;
        }
    };

    //Handles the captured frame callbacks
    private final ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader reader) {
            Image image = reader.acquireNextImage();
            if (image == null) {
                return;
            }
            //Extract the frame data here
            byte[] bytes = ImageUtil.getDataFromImage(image, ImageUtil.COLOR_FormatNV21);
            byte[] bytes1;
            if (mCameraId.equals(CAMERA_FRONT)) {
                bytes1 = NV21_mirror(bytes, image.getWidth(), image.getHeight());
                bytes1 = NV21_rotate_to_90(bytes1, image.getWidth(), image.getHeight());
            } else {
                bytes1 = NV21_rotate_to_90(bytes, image.getWidth(), image.getHeight());
            }

            imageRunnable.setData(bytes1);
            imageRunnable.setWidth(image.getHeight());
            imageRunnable.setHeight(image.getWidth());
            mBackgroundHandler.post(imageRunnable);
            image.close();
        }
    };

    // optimized rotate/mirror helpers: start
    //NV21 layout: YYYY VUVU
    byte[] NV21_mirror(byte[] nv21_data, int width, int height) {
        int i;
        int left, right;
        byte temp;
        int startPos = 0;

        // mirror Y
        for (i = 0; i < height; i++) {
            left = startPos;
            right = startPos + width - 1;
            while (left < right) {
                temp = nv21_data[left];
                nv21_data[left] = nv21_data[right];
                nv21_data[right] = temp;
                left++;
                right--;
            }
            startPos += width;
        }

        // mirror U and V
        int offset = width * height;
        startPos = 0;
        for (i = 0; i < height / 2; i++) {
            left = offset + startPos;
            right = offset + startPos + width - 2;
            while (left < right) {
                temp = nv21_data[left];
                nv21_data[left] = nv21_data[right];
                nv21_data[right] = temp;
                left++;
                right--;

                temp = nv21_data[left];
                nv21_data[left] = nv21_data[right];
                nv21_data[right] = temp;
                left++;
                right--;
            }
            startPos += width;
        }
        return nv21_data;
    }

    private byte[] NV21_rotate_to_90(byte[] nv21_data, int width, int height) {
        int y_size = width * height;
        int buffer_size = y_size * 3 / 2;
        byte[] nv21_rotated = new byte[buffer_size];
        // Rotate the Y (luma) plane

        int i = 0;
        int startPos = (height - 1) * width;
        for (int x = 0; x < width; x++) {
            int offset = startPos;
            for (int y = height - 1; y >= 0; y--) {
                nv21_rotated[i] = nv21_data[offset + x];
                i++;
                offset -= width;
            }
        }

        // Rotate the U and V (chroma) components
        i = buffer_size - 1;
        for (int x = width - 1; x > 0; x = x - 2) {
            int offset = y_size;
            for (int y = 0; y < height / 2; y++) {
                nv21_rotated[i] = nv21_data[offset + x];
                i--;
                nv21_rotated[i] = nv21_data[offset + (x - 1)];
                i--;
                offset += width;
            }
        }
        return nv21_rotated;
    }
    // optimized rotate/mirror helpers: end

    private void requestCameraPermission() {
    }

    //Start the background thread
    private void startBackgroundThread() {
        mBackgroundThread = new HandlerThread("CameraBackground");
        mBackgroundThread.start();
        mBackgroundHandler = new Handler(mBackgroundThread.getLooper());
    }

    //Stop the background thread and handler
    private void stopBackgroundThread() {
        mBackgroundThread.quitSafely();
        try {
            mBackgroundThread.join();
            mBackgroundThread = null;
            mBackgroundHandler = null;
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    //Create the CameraCaptureSession
    private void createCameraPreviewSession() {
        try {
            SurfaceTexture texture = mTextureView.getSurfaceTexture();
            assert texture != null;

            //Configure the default buffer size to the desired camera preview size.
            texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());

            //This is the output Surface we need to start the preview.
            Surface surface = new Surface(texture);

            //Set up a CaptureRequest.Builder with the output Surface.
            mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            WindowManager windowManager = (WindowManager) mContext.getSystemService(Context.WINDOW_SERVICE);
            int displayRotation = windowManager.getDefaultDisplay().getRotation();
            mPreviewRequestBuilder.set(CaptureRequest.JPEG_ORIENTATION, getOrientation(displayRotation));

            mPreviewRequestBuilder.addTarget(surface);
            mPreviewRequestBuilder.addTarget(mImageReader.getSurface());

            //Here we create a CameraCaptureSession for the camera preview.
            mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                    new CameraCaptureSession.StateCallback() {

                        @Override
                        public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                            if (null == mCameraDevice) {
                                return;
                            }
                            // When the session is ready, we start displaying the preview.
                            mCaptureSession = cameraCaptureSession;
                            try {
                                //For camera preview, auto-focus should be continuous.
                                mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_VIDEO);

                                //Enable flash automatically when necessary.
                                setAutoFlash(mPreviewRequestBuilder);

                                // Finally, we start displaying the camera preview.
                                mPreviewRequest = mPreviewRequestBuilder.build();

                                mCaptureSession.setRepeatingRequest(mPreviewRequest,
                                        null, mBackgroundHandler);
                            } catch (CameraAccessException e) {
                                e.printStackTrace();
                            }
                        }

                        @Override
                        public void onConfigureFailed(
                                @NonNull CameraCaptureSession cameraCaptureSession) {
                            Log.d(TAG, "Create CaptureSession Failed.");
                        }
                    }, null
            );
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    //Retrieve the JPEG orientation from the given screen rotation.
    private int getOrientation(int rotation) {
        //On most devices the sensor orientation is 90, but on some (e.g. the Nexus 5X) it is 270,
        //and the JPEG must be rotated to compensate.
        //For a 90-degree sensor this formula is a plain mapping; for a 270-degree sensor
        //it effectively rotates the JPEG an extra 180 degrees.
        return (mDisplayOrientation + mSensorOrientation + 270) % 360;
    }

    //Enable auto flash when supported
    private void setAutoFlash(CaptureRequest.Builder requestBuilder) {
        if (mFlashSupported && mIsFlash) {
            requestBuilder.set(CaptureRequest.CONTROL_AE_MODE,
                    CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
        }
    }

    //Compares two {@code Size}s by their area.
    static class CompareSizesByArea implements Comparator<Size> {

        @Override
        public int compare(Size lhs, Size rhs) {
            // We cast here to ensure the multiplications won't overflow
            return Long.signum((long) lhs.getWidth() * lhs.getHeight() -
                    (long) rhs.getWidth() * rhs.getHeight());
        }
    }

    public interface OnPreviewFrameCallback {
        void onImageAvailable(byte[] bytes, int width, int height);
    }

    private ImageRunnable imageRunnable = new ImageRunnable();
    private boolean isPushing = false;

    public void startPushing() {
        isPushing = true;
    }

    public void stopPushing() {
        isPushing = false;
    }

    private class ImageRunnable implements Runnable {

        private byte[] data;
        private int width;
        private int height;

        public void setData(byte[] data) {
            this.data = data;
        }

        public void setWidth(int width) {
            this.width = width;
        }

        public void setHeight(int height) {
            this.height = height;
        }

        @Override
        public void run() {
            if (previewFrameCallback != null && isPushing) {
                previewFrameCallback.onImageAvailable(data, width, height);
            }
        }
    }
}

In the code above there is actually very little essential code. After the camera is opened, adding two targets to the preview CaptureRequest.Builder is all it takes to start receiving frame data:

mPreviewRequestBuilder.addTarget(surface); //surface obtained from the TextureView
mPreviewRequestBuilder.addTarget(mImageReader.getSurface());  //frames delivered through the ImageReader

With the preview frames in hand, they are converted, rotated, and mirror-flipped, and then passed back to VideoPusher through a callback interface.
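The NV21-to-planar conversion that later happens in native code can also be sketched in plain Java. The class and method names below are hypothetical; this is only a minimal illustration of the plane layout, not the production path:

```java
public class Nv21ToI420 {
    // NV21 stores the full Y plane first, followed by interleaved V/U byte pairs.
    // I420 (YUV420p) stores Y, then the full U plane, then the full V plane.
    static byte[] convert(byte[] nv21, int width, int height) {
        int ySize = width * height;
        int uSize = ySize / 4;
        byte[] i420 = new byte[ySize + 2 * uSize];
        System.arraycopy(nv21, 0, i420, 0, ySize);          // Y plane is copied unchanged
        for (int i = 0; i < uSize; i++) {
            i420[ySize + i] = nv21[ySize + 2 * i + 1];      // U sits at odd chroma offsets in NV21
            i420[ySize + uSize + i] = nv21[ySize + 2 * i];  // V sits at even chroma offsets
        }
        return i420;
    }
}
```

This mirrors the copy loop in the native fireVideo function shown later: Y is a straight memcpy, while each interleaved VU pair is split into the two separate chroma planes.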

(2)VideoPusher

Next, let's look at how VideoPusher drives CameraUtil. VideoPusher must implement the three lifecycle methods of BasePusher, and it must also implement the frame callback interface defined in CameraUtil.

public class VideoPusher2 extends BasePusher implements Camera2Utils2.OnPreviewFrameCallback {

    private VideoParam videoParam;
    private boolean isPushing = false;
    private PushNative pushNative;

    //Maximum preview width/height guaranteed by the Camera2 API
    private static final int MAX_PREVIEW_WIDTH = 720;
    private static final int MAX_PREVIEW_HEIGHT = 720;

    private Context context;
    private TextureView mTextureView;

    private ByteBuffer doubleBuffer, tmpBuffer;
    private byte[] tmpCopy;

    private Camera2Utils2 camera2Utils;

    public VideoPusher2(TextureView textureView, VideoParam videoParam, PushNative pushNative, Context context) {
        this.mTextureView = textureView;
        this.videoParam = videoParam;
        this.pushNative = pushNative;
        this.context = context;
        initCameraUtil2();
    }

    private void initCameraUtil2() {
        camera2Utils = new Camera2Utils2(context, mTextureView);
        camera2Utils.setFlashSupported(true);
        camera2Utils.setDataSize(320,240);

        camera2Utils.setOnPreviewFrameCallback(this);
        camera2Utils.startPreview();
    }


    @Override
    public void startPusher() {
        Size dataSize = camera2Utils.getDataSize();
        videoParam.setWidth(dataSize.getWidth());
        videoParam.setHeight(dataSize.getHeight());

        // ====== note ====== width & height are swapped here: after rotating the NV21 frame they are exchanged
        pushNative.setVideoOptions(videoParam.getHeight(), videoParam.getWidth(), videoParam.getBitrate(), videoParam.getFps());
        //the receiver must be ready before Camera2Utils starts sending
        isPushing = true;
        camera2Utils.startPushing();
    }

    @Override
    public void stopPusher() {
        camera2Utils.stopPushing();
        isPushing = false;
    }

    @Override
    public void release() {
        if(camera2Utils != null) {
            camera2Utils.closeCamera();
            camera2Utils = null;
        }
        context = null;
    }

    /**
     * Switch between the front and back cameras
     */
    public void switchCamera() {
        if(camera2Utils != null) {
            camera2Utils.switchCamera();
            Size dataSize = camera2Utils.getDataSize();
            if(dataSize.getWidth() != videoParam.getWidth() || dataSize.getHeight() != videoParam.getHeight()) {
                videoParam.setWidth(dataSize.getWidth());
                videoParam.setHeight(dataSize.getHeight());
                // ====== note ====== width & height are swapped here: after rotating the NV21 frame they are exchanged
                pushNative.setVideoOptions(videoParam.getHeight(), videoParam.getWidth(), videoParam.getBitrate(), videoParam.getFps());
            }
        }
    }

    private void stopPreview() {
    }

    @Override
    public void onImageAvailable(byte[] bytes, int width, int height) {
        if (isPushing) {
            try {
                if(bytes.length == 0) {
                    Log.d(TAG, "onImageAvailable: empty frame data!");
                    return;
                }
                pushNative.fireVideo(bytes, width, height);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}

2、Native Code

The captured video frames (NV21 format) are now handed to native code for processing, but before any frame can be encoded we must configure the H.264 encoder: level, profile, resolution, bitrate, and so on.

(1)Configuring the H.264 encoder
extern "C"
JNIEXPORT void JNICALL
Java_com_ljh_live_jni_PushNative_setVideoOptions(JNIEnv *env, jobject instance, jint width,
                                                 jint height, jint bitrate, jint fps) {
    LOGI("width: %d, height: %d", width, height);
    x264_param_t param;
    //x264_param_default_preset: ultrafast preset, zerolatency tuning for live streaming
    x264_param_default_preset(&param, "ultrafast", "zerolatency");
    //input pixel format: YUV420P
    param.i_csp = X264_CSP_I420;
    param.i_width = width;
    param.i_height = height;

    y_len = width * height;
    u_len = y_len / 4;
    v_len = u_len;

    //i_rc_method selects rate control: CQP (constant QP), CRF (constant quality), ABR (average bitrate)
    //CRF targets a constant perceived quality; the bitrate fields below act as a cap
    param.rc.i_rc_method = X264_RC_CRF;
    param.rc.i_bitrate = bitrate / 1000; //bitrate in kbps
    param.rc.i_vbv_max_bitrate = bitrate / 1000 * 1.2; //peak instantaneous bitrate

    //rate control is driven by fps rather than timebase/timestamps
    param.b_vfr_input = 0;
    param.i_fps_num = fps; //frame-rate numerator
    param.i_fps_den = 1; //frame-rate denominator
    param.i_timebase_den = param.i_fps_num;
    param.i_timebase_num = param.i_fps_den;
    param.i_threads = 1; //number of encoding threads; 0 means auto multi-threading

    //whether to repeat SPS and PPS before every key frame
    //SPS: Sequence Parameter Set; PPS: Picture Parameter Set
    //improves the stream's error resilience
    param.b_repeat_headers = 1;
    //encoder level
    param.i_level_idc = 51;
    //encoder profile
    //baseline profile: no B-frames
    x264_param_apply_profile(&param, "baseline");

    //initialize x264_picture_t (the input picture)
    x264_picture_alloc(&pic_in, param.i_csp, param.i_width, param.i_height);
//    pic_in.i_pts = 0; //assign the presentation order
    //open the encoder
    video_encode_handle = x264_encoder_open(&param);
    if (video_encode_handle) {
        LOGI("video encoder opened successfully");
    } else {
        throwNativeError(env, INIT_FAILED);
    }
}

Note the commented-out line pic_in.i_pts = 0;, which assigns each frame a presentation order so frames are decoded one after another. I expected it to make playback smoother, but RTMP runs over TCP: on a weak network a lost packet is retransmitted again and again, frames pile up, and the video actually stutters. On a good network, however, enabling it does reduce mosaic artifacts.

(2)Processing the video frames

The encoder video_encode_handle was already initialized while configuring the format, so once a frame arrives it can be encoded directly into an array of x264_nal_t units. Each unit carries a type telling us whether it is SPS, PPS, or ordinary slice data, and the unit is wrapped into an RTMP packet accordingly.

/**
 * Encode a captured video frame and queue it for streaming
 */
extern "C"
JNIEXPORT void JNICALL
Java_com_ljh_live_jni_PushNative_fireVideo(JNIEnv *env, jobject instance, jbyteArray buffer_,
                                           jint width, jint height) {
    if (is_pushing) {
        //convert the frame to YUV420p (NV21 -> YUV420p)
        int len = (int) env->GetArrayLength(buffer_);
        if (len == 0) {
            LOGI("%s", "empty frame data!");
            return;
        } 
        jbyte *nv21_buffer = env->GetByteArrayElements(buffer_, NULL);
        y_len = width * height;
        u_len = v_len = y_len / 4;
        jbyte *u = (jbyte *) pic_in.img.plane[1];
        jbyte *v = (jbyte *) pic_in.img.plane[2];
        //NV21 is a 4:2:0 format, 12 bits per pixel
        //NV21 and YUV420p have identical Y planes; only the chroma layout differs
        //plane sizes: y = w*h, u = v = w*h/4
        //NV21 = Y + interleaved VU pairs; YUV420p = Y + U plane + V plane
        memcpy(pic_in.img.plane[0], nv21_buffer, y_len);
        int i;
        for (i = 0; i < u_len; ++i) {
            *(u + i) = *(nv21_buffer + y_len + i * 2 + 1);
            *(v + i) = *(nv21_buffer + y_len + i * 2);
        }
        //H.264-encode the frame into an array of NALUs
        x264_nal_t *nal = NULL; //NALU array
        int n_nal = -1;  //number of NALUs
        //run the x264 encoder
        if (x264_encoder_encode(video_encode_handle, &nal, &n_nal, &pic_in, &pic_out) < 0) {
            LOGE("%s", "video encoding failed");
            return;
        }
        //send the H.264 data to the media server over RTMP
        //frames are either key frames or ordinary frames; to improve error resilience,
        //key frames should be preceded by the SPS and PPS data
        int sps_len, pps_len;
        unsigned char sps[100];
        unsigned char pps[100];
        memset(sps, 0, 100);
        memset(pps, 0, 100);
//    pic_in.i_pts += 1; //advance the presentation order
        //walk the NALU array and dispatch on each NALU's type
        for (i = 0; i < n_nal; i++) {
            if (nal[i].i_type == NAL_SPS) {
                //copy the SPS data
                sps_len = nal[i].i_payload - 4;
                memcpy(sps, nal[i].p_payload + 4, sps_len); //skip the 4-byte start code
            } else if (nal[i].i_type == NAL_PPS) {
                //copy the PPS data
                pps_len = nal[i].i_payload - 4;
                memcpy(pps, nal[i].p_payload + 4, pps_len); //skip the 4-byte start code

                //send the sequence information
                //an H.264 key frame is preceded by SPS and PPS (they describe how to decode it)
//            add_264_sequence_header(pps, sps, pps_len, sps_len);
                send_video_sps_pps(pps, sps, pps_len, sps_len);
            } else {
                //send the frame data
                add_264_body(nal[i].p_payload, nal[i].i_payload);
            }
        }
        if (env->ExceptionCheck()) {
            //an exception occurred
            LOGI("%s", "unknown exception");

            throwNativeError(env, WHAT_FAILED);
        }
        env->ReleaseByteArrayElements(buffer_, nv21_buffer, 0);
    }
}

Once SPS and PPS units show up, the current frame is treated as a key frame, so the AVC sequence header (the video header information) must be sent first.

void add_264_sequence_header(unsigned char *pps, unsigned char *sps, int pps_len, int sps_len) {
    LOGI("%s", "sending AVC sequence header");
    int body_size = 16 + sps_len + pps_len; //the fixed SPS/PPS framing defined by the H.264-in-FLV layout takes 16 bytes
    RTMPPacket *packet = (RTMPPacket *) malloc(sizeof(RTMPPacket));
    //initialize the RTMPPacket
    RTMPPacket_Alloc(packet, body_size);
    RTMPPacket_Reset(packet);

    // package the H.264 data (AVC format) ---start---
    char *body = packet->m_body;
    int i = 0;
    //binary: 00010111
    body[i++] = 0x17;//VideoHeaderTag: FrameType(1 = key frame) + CodecID(7 = AVC)
    body[i++] = 0x00;//AVCPacketType = 0: an AVCDecoderConfigurationRecord follows
    //composition time, 24 bits, always 0x000000 here
    body[i++] = 0x00;
    body[i++] = 0x00;
    body[i++] = 0x00;

    /*AVCDecoderConfigurationRecord*/
    //since CodecID = 7, an AVCDecoderConfigurationRecord must be supplied
    body[i++] = 0x01;//configurationVersion, always 1
    body[i++] = sps[1];//AVCProfileIndication
    body[i++] = sps[2];//profile_compatibility
    body[i++] = sps[3];//AVCLevelIndication
    body[i++] = 0xFF;//lengthSizeMinusOne: NALU length field size is 1 + (lengthSizeMinusOne & 3); 0xFF yields 4 bytes

    /*sps*/
    body[i++] = 0xE1;//numOfSequenceParameterSets: count = numOfSequenceParameterSets & 0x1F; 0xE1 yields 1
    body[i++] = (sps_len >> 8) & 0xff;//sequenceParameterSetLength: length of the SPS
    body[i++] = sps_len & 0xff;//low byte, followed by sequenceParameterSetNALUnits
    memcpy(&body[i], sps, sps_len);
    i += sps_len;

    /*pps*/
    body[i++] = 0x01;//numOfPictureParameterSets: count = numOfPictureParameterSets & 0x1F; 0x01 yields 1
    body[i++] = (pps_len >> 8) & 0xff;//pictureParameterSetLength: length of the PPS
    body[i++] = (pps_len) & 0xff;//low byte, followed by pictureParameterSetNALUnits
    memcpy(&body[i], pps, pps_len);
    i += pps_len;
    // package the H.264 data (AVC format) ---end---

    //fill in the RTMPPacket metadata ---start---
    //Message Type, RTMP_PACKET_TYPE_VIDEO: 0x09
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    //Payload Length
    packet->m_nBodySize = i;
    //Timestamp: 4 bytes
    //milliseconds relative to the first tag (the File Header),
    //whose own timestamp is always 0; the sequence header uses 0 as well.
    packet->m_nTimeStamp = 0;
    packet->m_hasAbsTimestamp = 0;
    packet->m_nChannel = 0x04; //Channel ID used for the audio/video channel
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE; //full 11-byte chunk header
    //fill in the RTMPPacket metadata ---end---
    //enqueue the RTMPPacket
    add_rtmp_packet(packet);
}
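The byte layout above can be sanity-checked with a short Java sketch. The class below is hypothetical and only rebuilds the fixed framing from two dummy parameter sets, so the offsets can be verified without running the native code:

```java
public class AvcSequenceHeader {
    // Rebuild the FLV video tag body for an AVC sequence header:
    // a 5-byte tag header followed by an AVCDecoderConfigurationRecord
    // carrying exactly one SPS and one PPS.
    static byte[] build(byte[] sps, byte[] pps) {
        byte[] body = new byte[16 + sps.length + pps.length];
        int i = 0;
        body[i++] = 0x17;                                     // key frame + CodecID 7 (AVC)
        body[i++] = 0x00;                                     // AVCPacketType 0: sequence header
        body[i++] = 0x00; body[i++] = 0x00; body[i++] = 0x00; // composition time
        body[i++] = 0x01;                                     // configurationVersion
        body[i++] = sps[1];                                   // AVCProfileIndication
        body[i++] = sps[2];                                   // profile_compatibility
        body[i++] = sps[3];                                   // AVCLevelIndication
        body[i++] = (byte) 0xFF;                              // lengthSizeMinusOne -> 4-byte NALU lengths
        body[i++] = (byte) 0xE1;                              // one SPS
        body[i++] = (byte) ((sps.length >> 8) & 0xff);
        body[i++] = (byte) (sps.length & 0xff);
        System.arraycopy(sps, 0, body, i, sps.length);
        i += sps.length;
        body[i++] = 0x01;                                     // one PPS
        body[i++] = (byte) ((pps.length >> 8) & 0xff);
        body[i++] = (byte) (pps.length & 0xff);
        System.arraycopy(pps, 0, body, i, pps.length);
        return body;
    }
}
```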

Packaging ordinary video frames is largely the same as the header: the bytes strictly follow the FLV and H.264 specifications.

void add_264_body(unsigned char *buf, int len) {
    //strip the start code (delimiter)
    if (buf[2] == 0x00) {  //00 00 00 01
        buf += 4;
        len -= 4;
    } else if (buf[2] == 0x01) { // 00 00 01
        buf += 3;
        len -= 3;
    }
    int body_size = len + 9;
    RTMPPacket *packet = (RTMPPacket *) malloc(sizeof(RTMPPacket));
    RTMPPacket_Alloc(packet, body_size);

    unsigned char *body = (unsigned char *) packet->m_body;
    //when the 5-bit type field of the NAL header equals 5, this is an IDR (key-frame) NALU
    //AND the NAL header byte with 0x1f to extract the type
    //e.g. 00000101 & 00011111(0x1f) = 00000101
    int type = buf[0] & 0x1f;
    //inter frame (predicted)
    body[0] = 0x27;//VideoHeaderTag: FrameType(2 = inter frame) + CodecID(7 = AVC)
    //IDR picture (I-frame)
    if (type == NAL_SLICE_IDR) {
        body[0] = 0x17;//VideoHeaderTag: FrameType(1 = key frame) + CodecID(7 = AVC)
    }
    //AVCPacketType = 1
    body[1] = 0x01; /*NALUs follow (AVCPacketType == 1)*/
    body[2] = 0x00; //composition time, 24 bits
    body[3] = 0x00;
    body[4] = 0x00;

    //write the NALU length as 4 big-endian bytes
    body[5] = (len >> 24) & 0xff;
    body[6] = (len >> 16) & 0xff;
    body[7] = (len >> 8) & 0xff;
    body[8] = (len) & 0xff;

    /*copy data*/
    memcpy(&body[9], buf, len);

    packet->m_hasAbsTimestamp = 0;
    packet->m_nBodySize = body_size;
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;//this packet carries video
    packet->m_nChannel = 0x04;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nTimeStamp = RTMP_GetTime() - start_time;//milliseconds relative to the first tag (the File Header)
    add_rtmp_packet(packet);
}
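The start-code stripping and frame-type test above can be captured in a tiny Java sketch (the helper names are hypothetical; a nal_unit_type of 5 means an IDR slice in the H.264 spec):

```java
public class NalInspector {
    // Offset of the NALU payload after an Annex-B start code (00 00 00 01 or 00 00 01).
    static int payloadOffset(byte[] nal) {
        if (nal[2] == 0x00) return 4; // 00 00 00 01
        if (nal[2] == 0x01) return 3; // 00 00 01
        return 0;                     // no start code found
    }

    // The 5-bit nal_unit_type field of the first payload byte; 5 marks an IDR slice.
    static int nalType(byte[] nal) {
        return nal[payloadOffset(nal)] & 0x1f;
    }
}
```

This is exactly the decision add_264_body makes when choosing between the 0x17 (key frame) and 0x27 (inter frame) FLV video tag bytes.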

(四)Streaming with RTMPdump

After all of that work we finally have audio and video RTMP packets; the last step is pushing them to the media server. As usual, start from the official documentation.

The RTMP push sequence is:

  1. RTMP_Alloc(): allocate the RTMP object
  2. RTMP_Init(rtmp): initialize the RTMP object
  3. RTMP_SetupURL(rtmp, rtmp_path): set the stream URL
  4. RTMP_EnableWrite(rtmp): enable publishing
  5. RTMP_Connect(rtmp, NULL): connect to the server (TCP connection plus RTMP handshake)
  6. RTMP_ConnectStream(rtmp, 0): connect the stream
  7. RTMP_SendPacket(rtmp, packet, TRUE): queue the packet and send it

When streaming ends:

  1. RTMP_Close(rtmp): close the stream
  2. RTMP_Free(rtmp): free the RTMP object

Implementation:

void *push_thread(void *args) {
    LOGD("%s", "push thread started");
    JNIEnv *env; //JNIEnv of the current thread
    if (javaVM == NULL) {
        LOGI("%s", "JavaVM is NULL");
        return 0;
    } else {
        javaVM->AttachCurrentThread(&env, NULL);
    }
    //establish the RTMP connection
    RTMP *rtmp = RTMP_Alloc();
    if (rtmp == NULL) {
        LOGI("%s", "failed to allocate the RTMP object");
        //..error handling
        return 0;
    }
    RTMP_Init(rtmp);
    //set the stream URL
    RTMP_SetupURL(rtmp, rtmp_path);
    //publish the RTMP stream
    RTMP_EnableWrite(rtmp);
    //connect to the server
    if (!RTMP_Connect(rtmp, NULL)) {
        //..error handling
        return 0;
    } 
    //start the clock
    start_time = RTMP_GetTime();
    if (!RTMP_ConnectStream(rtmp, 0)) { //connect the stream
        LOGI("%s", "failed to connect the RTMP stream");
        //..error handling
        return 0;
    } 
    //send the AAC header information
    add_aac_sequence_header();
    while (is_pushing) {
        //wait until a packet is queued
        pthread_mutex_lock(&mutex);
        pthread_cond_wait(&cond, &mutex);
        if (is_pushing == FALSE) {
            LOGI("%s", "isPushing cleared, leaving the loop");
            pthread_mutex_unlock(&mutex);
            break;
        }
        try {
            //take the next RTMPPacket off the queue
            RTMPPacket *packet = (RTMPPacket *) queue_get_first();
            if (packet) {
                int flag = queue_delete_first();//remove it from the queue
                if (flag != 0) {
                    LOGI("%s", "failed to remove packet from queue");
                }
                packet->m_nInfoField2 = rtmp->m_stream_id;  //stream_id required by the RTMP protocol
                int i = RTMP_SendPacket(rtmp, packet, TRUE);  //TRUE queues the packet inside librtmp rather than sending immediately
                if (!i) {
                    LOGE("%s", "RTMP connection lost");
                    pthread_mutex_unlock(&mutex);
                    //..error handling
                    return 0;
                }
                RTMPPacket_Free(packet);
                free(packet); //RTMPPacket_Free only releases the body; the struct was malloc'd separately
            } 
        } catch (...) {
            LOGI("%s", "unknown exception");
            throwNativeError(env, WHAT_FAILED);
        }
        pthread_mutex_unlock(&mutex);
    }
    LOGI("%s", "push thread ending, releasing resources");
    free(rtmp_path);
    RTMP_Close(rtmp);
    RTMP_Free(rtmp);
    javaVM->DetachCurrentThread();
    return NULL;
}

At this point the entire push pipeline is complete. Implementing it takes not just familiarity with the overall flow, but also an understanding of the protocols involved, how the libraries are used, and the format conversions and rotations applied to the data.

(五)NDK

Since native code is involved, it must be built into a static or shared library that the Java layer can call; here CMake is used for the build.

cmake_minimum_required(VERSION 3.4.1)

add_library( # Sets the name of the library.
        native-lib

        # Sets the library as a shared library.
        SHARED

        # Provides a relative path to your source file(s).
        src/main/cpp/queue.c src/main/cpp/native-lib.cpp)

set(my_lib_path ${CMAKE_SOURCE_DIR}/libs)
#-------faac------
add_library(
        libfaac
        SHARED
        IMPORTED)

set_target_properties(
        libfaac
        PROPERTIES IMPORTED_LOCATION
        ${my_lib_path}/${ANDROID_ABI}/libfaac.so)

#-------rtmp------
add_library(
        librtmp
        SHARED
        IMPORTED)

set_target_properties(
        librtmp
        PROPERTIES IMPORTED_LOCATION
        ${my_lib_path}/${ANDROID_ABI}/librtmp.so)

#-------x264------
add_library(
        libx2641
        SHARED
        IMPORTED)

set_target_properties(
        libx2641
        PROPERTIES IMPORTED_LOCATION
        ${my_lib_path}/${ANDROID_ABI}/libx2641.so)


find_library( # Sets the name of the path variable.
        log-lib

        # Specifies the name of the NDK library that
        # you want CMake to locate.
        log)

#Include paths so the headers can be located at compile time
include_directories(src/main/cpp/include src/main/cpp/include/bzip2d)

target_link_libraries( # Specifies the target library.
        native-lib
        bspatch
        libfaac
        librtmp
        libx2641
        # Links the target library to the log library
        # included in the NDK.
        ${log-lib})

三、Pulling the Stream

Compared with pushing, pulling is trivial: the difference between the two is like walking versus flying a plane. Once the media server is up and the stream is being published, playing it only requires a player that supports the streaming protocol.

Candidate players include ijkplayer and ffplay. Bilibili's open-source ijkplayer is very capable and is built on FFmpeg underneath. However, the stock ijkplayer build may fail to decode the stream's audio or video, because it does not ship with mpeg2/mpeg4 decoding enabled; you have to compile it yourself and then import the result into the project. Even then you still need to hand-write the player widget (UI, control logic, and so on). To pull the stream more conveniently (that is, to finish the code faster), I went with the GSYVideoPlayer library: it bundles a wide range of codecs and ready-made player widgets, the UI looks decent, and it is simple to use.

videoPlayer = root.findViewById(R.id.video_player);

        String videoName = getActivity().getIntent().getStringExtra("VIDEO_NAME");
        if (videoName == null || TextUtils.isEmpty(videoName)) {
            videoName = "视频";
        }

        videoPlayer.setUp(VIDEO_PATH, true, videoName);

        //show the title
        videoPlayer.getTitleTextView().setVisibility(View.VISIBLE);
        //show the back button
        videoPlayer.getBackButton().setVisibility(View.VISIBLE);
        //rotation helper
        orientationUtils = new OrientationUtils(getActivity(), videoPlayer);
        //fullscreen button: this rotates the screen rather than entering true fullscreen
        videoPlayer.getFullscreenButton().setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                orientationUtils.resolveByClick();
            }
        });
        //whether swipe gestures can adjust playback
        if (videoType == NETWORK_VIDEO) {
            videoPlayer.setIsTouchWiget(false);
        } else {
            videoPlayer.setIsTouchWiget(true);
        }
        //back button behavior
        videoPlayer.getBackButton().setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                onBackPressed();
                getActivity().finish();
            }
        });
        videoPlayer.startPlayLogic();
        //playback state callbacks
        videoPlayer.setVideoAllCallBack(new VideoAllCallBack() {
            @Override
            public void onStartPrepared(String url, Object... objects) {
                Log.d(TAG, "onStartPrepared: ");
            }

            @Override
            public void onPrepared(String url, Object... objects) {
                //loaded successfully
                Log.d(TAG, "onPrepared: ");
                if (videoType == NETWORK_VIDEO) {
                    mPresenter.startPlay(roomName, roomId);
                    isWatching = true;
                }
            }

            @Override
            public void onClickStartIcon(String url, Object... objects) {
                Log.d(TAG, "onClickStartIcon: ");
            }

            @Override
            public void onClickStartError(String url, Object... objects) {
                Log.d(TAG, "onClickStartError: ");
            }

            @Override
            public void onClickStop(String url, Object... objects) {
                Log.d(TAG, "onClickStop: ");
                //for a network stream that is currently being watched, clicking stop
                //reports updated viewing stats for this live room
                if (videoType == NETWORK_VIDEO && isWatching) {
                    mPresenter.stopPlay(roomName, roomId);
                    //no longer watching
                    isWatching = false;
                }
            }

            @Override
            public void onClickStopFullscreen(String url, Object... objects) {
                Log.d(TAG, "onClickStopFullscreen: ");
            }

            @Override
            public void onClickResume(String url, Object... objects) {
                Log.d(TAG, "onClickResume: ");
            }

            @Override
            public void onClickResumeFullscreen(String url, Object... objects) {
                Log.d(TAG, "onClickResumeFullscreen: ");
            }

            @Override
            public void onClickSeekbar(String url, Object... objects) {
                Log.d(TAG, "onClickSeekbar: ");
            }

            @Override
            public void onClickSeekbarFullscreen(String url, Object... objects) {
                Log.d(TAG, "onClickSeekbarFullscreen: ");
            }

            @Override
            public void onAutoComplete(String url, Object... objects) {
                Log.d(TAG, "onAutoComplete: ");
            }

            @Override
            public void onEnterFullscreen(String url, Object... objects) {
                Log.d(TAG, "onEnterFullscreen: ");
            }

            @Override
            public void onQuitFullscreen(String url, Object... objects) {
                Log.d(TAG, "onQuitFullscreen: ");
            }

            @Override
            public void onQuitSmallWidget(String url, Object... objects) {
                Log.d(TAG, "onQuitSmallWidget: ");
            }

            @Override
            public void onEnterSmallWidget(String url, Object... objects) {
                Log.d(TAG, "onEnterSmallWidget: ");
            }

            @Override
            public void onTouchScreenSeekVolume(String url, Object... objects) {
                Log.d(TAG, "onTouchScreenSeekVolume: ");
            }

            @Override
            public void onTouchScreenSeekPosition(String url, Object... objects) {
                Log.d(TAG, "onTouchScreenSeekPosition: ");
            }

            @Override
            public void onTouchScreenSeekLight(String url, Object... objects) {
                Log.d(TAG, "onTouchScreenSeekLight: ");
            }

            @Override
            public void onPlayError(String url, Object... objects) {
                Log.d(TAG, "onPlayError: ");
            }

            @Override
            public void onClickStartThumb(String url, Object... objects) {
                Log.d(TAG, "onClickStartThumb: ");
            }

            @Override
            public void onClickBlank(String url, Object... objects) {
                Log.d(TAG, "onClickBlank: ");
            }

            @Override
            public void onClickBlankFullscreen(String url, Object... objects) {
                Log.d(TAG, "onClickBlankFullscreen: ");
            }
        });

With this, the whole live-streaming feature is implemented. Corrections and suggestions are very welcome! (Parts of this article had to be written twice for reasons outside my control, which taught me just how important a reliable platform for technical writing is!)