iOS Vision -- (09) OpenGL ES + GLSL: Rendering Camera Capture, Explained


The previous posts covered operations on still images; next we move on to video. Learn step by step: every beginning is hard, so don't expect to get there in one leap. Settle down and climb one rung at a time; learning a little every day is progress, and persistence matters most. A journey of a thousand miles begins with a single step... This post draws on 落影's article on camera capture and rendering.

Demo for this post: Gitee / GitHub

First, let's be clear about what we want to implement:
  • 1. Camera capture
  • 2. Rendering video frames with OpenGL ES

  • 1. Camera capture

Since the focus here is how OpenGL ES renders video frames, I won't rehash camera capture itself; plenty of articles online cover the capture logic and flow, so search Baidu or Google if you're interested. Straight to the code:

import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    
    var mCaptureSession: AVCaptureSession!              // coordinates data flow between the input and output devices
    var mCaptureDeviceInput: AVCaptureDeviceInput!      // obtains input data from an AVCaptureDevice
    var mCaptureDeviceOutput: AVCaptureVideoDataOutput! // video data output
    var mProcessQueue: DispatchQueue!


    @IBOutlet var renderView: DDView!
    override func viewDidLoad() {
        super.viewDidLoad()
        
        self.mCaptureSession = AVCaptureSession()
        self.mCaptureSession.sessionPreset = AVCaptureSession.Preset.high
        
        mProcessQueue = DispatchQueue(label: "mProcessQueue")
        
        var inputCamera: AVCaptureDevice!
        let devices = AVCaptureDevice.devices(for: AVMediaType.video)
        for device in devices where device.position == .back {
            inputCamera = device
        }
        
        self.mCaptureDeviceInput = try? AVCaptureDeviceInput(device: inputCamera)
        
        if (self.mCaptureSession.canAddInput(self.mCaptureDeviceInput)) {
            self.mCaptureSession.addInput(self.mCaptureDeviceInput)
        }

        
        self.mCaptureDeviceOutput = AVCaptureVideoDataOutput()
        self.mCaptureDeviceOutput.alwaysDiscardsLateVideoFrames = false
        
//        self.renderView.isFullYUVRange = true
        // Alternative format: kCVPixelFormatType_32BGRA
        self.mCaptureDeviceOutput.videoSettings = [String(kCVPixelBufferPixelFormatTypeKey) : kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
        self.mCaptureDeviceOutput.setSampleBufferDelegate(self, queue: self.mProcessQueue)
        
        if (self.mCaptureSession.canAddOutput(self.mCaptureDeviceOutput)) {
            self.mCaptureSession.addOutput(self.mCaptureDeviceOutput)
        }
        
        let connection: AVCaptureConnection = self.mCaptureDeviceOutput.connection(with: AVMediaType.video)!
//        connection.isVideoMirrored = false
        connection.videoOrientation = AVCaptureVideoOrientation.portrait
    }
    
    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        // viewDidLayoutSubviews can fire more than once; only start the session the first time
        if !self.mCaptureSession.isRunning {
            self.mCaptureSession.startRunning()
        }
    }

    
    //MARK: - AVCaptureVideoDataOutputSampleBufferDelegate
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        DispatchQueue.main.async {
            // Pull the pixel buffer out of the sample buffer and hand it to the OpenGL ES view
            let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
            self.renderView.renderBuffer(pixelBuffer: pixelBuffer!)
        }
    }
    
}
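
One practical note: the session only delivers frames once camera permission has been granted, which also requires an NSCameraUsageDescription entry in Info.plist. A minimal sketch of requesting access up front (my addition, not part of the original demo):

// Hypothetical addition: ask for camera permission before starting the session.
// NSCameraUsageDescription must be present in Info.plist.
AVCaptureDevice.requestAccess(for: .video) { granted in
    guard granted else { return }
    DispatchQueue.main.async {
        self.mCaptureSession.startRunning()
    }
}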

Note ⚠️: one thing to watch is the video frame format (kCVPixelBufferPixelFormatTypeKey) you configure. OpenGL must read the frames in the corresponding format to get valid data; otherwise you will see a black screen. iOS commonly supports three formats:
  • 1. kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
  • 2. kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
  • 3. kCVPixelFormatType_32BGRA
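
If you're not sure which format is actually being delivered, you can check it inside the delegate callback. A quick sketch, assuming `pixelBuffer` was just pulled out of the sample buffer:

// Sketch: verify which pixel format the camera is actually delivering.
let format = CVPixelBufferGetPixelFormatType(pixelBuffer)
switch format {
case kCVPixelFormatType_420YpCbCr8BiPlanarFullRange:
    print("YUV 420 full range -> read as GL_LUMINANCE / GL_LUMINANCE_ALPHA")
case kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange:
    print("YUV 420 video range -> read as GL_LUMINANCE / GL_LUMINANCE_ALPHA")
case kCVPixelFormatType_32BGRA:
    print("BGRA -> read as GL_BGRA")
default:
    print("unexpected format: \(format)")
}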

  • 2. Rendering video frames with OpenGL ES

Previously, when we loaded an image into a texture, we always used glTexImage2D, and a video is really nothing more than a sequence of still images.

[Figure: loading an image into a texture]

But what the camera records arrives as CMSampleBuffer. How do we turn a CMSampleBuffer into a texture?

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) 

Apple provides a wrapper for this: a function that creates a CVOpenGLESTexture from a CVImageBuffer, declared as follows:

CVOpenGLESTextureCacheCreateTextureFromImage(_ allocator: CFAllocator?,
                                             _ textureCache: CVOpenGLESTextureCache,
                                             _ sourceImage: CVImageBuffer,
                                             _ textureAttributes: CFDictionary?,
                                             _ target: GLenum,
                                             _ internalFormat: GLint,
                                             _ width: GLsizei,
                                             _ height: GLsizei,
                                             _ format: GLenum,
                                             _ type: GLenum,
                                             _ planeIndex: Int,
                                             _ textureOut: UnsafeMutablePointer<CVOpenGLESTexture?>) -> CVReturn
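
This function needs a CVOpenGLESTextureCache, which must be created once up front from the rendering context. A minimal sketch, assuming `context` is the EAGLContext the view renders with:

// Sketch: create the texture cache once, e.g. in the render view's setup.
// `context` is assumed to be the EAGLContext used for rendering.
var textureCache: CVOpenGLESTextureCache?
let err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, context, nil, &textureCache)
if err != kCVReturnSuccess {
    NSLog("CVOpenGLESTextureCacheCreate failed: %d", err)
}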

[Figure: CMSampleBuffer to texture]

  • What is CVImageBuffer, and what is its relationship to CVPixelBuffer? CVPixelBuffer explained: Apple's documentation describes a CVPixelBuffer as an object whose main memory stores all of the pixel data. So what is this "main memory"? It is not the memory we normally operate on directly; it refers to a storage area that lives in a cache. Before accessing that block of memory, we have to lock it:
    // 1. Lock the memory region
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    // 2. Get a pointer to the pixel data (from which it can be copied, e.g. into Data)
    let data = CVPixelBufferGetBaseAddress(pixelBuffer)
    // 3. Unlock the region once reading is finished
    CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)

public typealias CVPixelBuffer = CVImageBuffer, i.e. CVPixelBuffer is simply a type alias for CVImageBuffer.

  • 1. If the camera's frame format is kCVPixelFormatType_32BGRA, read the texture as GL_BGRA.
  • 2. With either of the other two formats, the recorded frames are YUV. A YUV frame splits into a luma texture and a chroma texture, read as GL_LUMINANCE and GL_LUMINANCE_ALPHA respectively (the fragment shader that recombines them is sketched after the code below).
  • Core code:
    // Set up the textures for one frame
    func renderBuffer(pixelBuffer: CVPixelBuffer) {
        if (self.textureCache != nil) { // Note ⚠️: release the previous textures first, or memory builds up and rendering stalls
            if textureY != nil { textureY = nil }
            if textureUV != nil { textureUV = nil }
            CVOpenGLESTextureCacheFlush(self.textureCache!, 0)
        }

        // Choose the YUV -> RGB conversion matrix from the buffer's color attachment
        // (CVBufferGetAttachment follows the Get rule, so take the value unretained)
        let colorAttachments: CFTypeRef = CVBufferGetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey, nil)!.takeUnretainedValue()
        // equivalent check: "\(colorAttachments)" == String(kCVImageBufferYCbCrMatrix_ITU_R_601_4)
        if (CFEqual(colorAttachments, kCVImageBufferYCbCrMatrix_ITU_R_601_4)) {
            if (self.isFullYUVRange) {
                preferredConversion = kColorConversion601FullRange
            }
            else {
                preferredConversion = kColorConversion601
            }
        }
        else {
            preferredConversion = kColorConversion709
        }

        glActiveTexture(GLenum(GL_TEXTURE0))
        // Create a CVOpenGLESTexture from the CVImageBuffer
        let frameWidth = CVPixelBufferGetWidth(pixelBuffer)
        let frameHeight = CVPixelBufferGetHeight(pixelBuffer)

        // Luma texture, plane 0, read as GL_LUMINANCE
        let ret: CVReturn = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                                         textureCache!,
                                                                         pixelBuffer,
                                                                         nil,
                                                                         GLenum(GL_TEXTURE_2D),
                                                                         GL_LUMINANCE,
                                                                         GLsizei(frameWidth),
                                                                         GLsizei(frameHeight),
                                                                         GLenum(GL_LUMINANCE),
                                                                         GLenum(GL_UNSIGNED_BYTE),
                                                                         0,
                                                                         &textureY)
        if ((ret) != 0) {
            NSLog("CVOpenGLESTextureCacheCreateTextureFromImage ret: %d", ret)
            /*
             ⚠️ Note: error -6683 means the kCVPixelBufferPixelFormatTypeKey configured at capture time
             does not match the color format being read here.
             1. For kCVPixelFormatType_32BGRA use:
                CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                             textureCache!,
                                                             pixelBuffer,
                                                             nil,
                                                             GLenum(GL_TEXTURE_2D),
                                                             GL_RGBA,
                                                             GLsizei(frameWidth),
                                                             GLsizei(frameHeight),
                                                             GLenum(GL_BGRA),
                                                             GLenum(GL_UNSIGNED_BYTE),
                                                             0,
                                                             &texture)
             */
            return
        }
        glBindTexture(CVOpenGLESTextureGetTarget(textureY!), CVOpenGLESTextureGetName(textureY!))
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)


        glActiveTexture(GLenum(GL_TEXTURE1))
        // Chroma texture, plane 1 at half resolution, read as GL_LUMINANCE_ALPHA
        let retUV: CVReturn = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                                         textureCache!,
                                                                         pixelBuffer,
                                                                         nil,
                                                                         GLenum(GL_TEXTURE_2D),
                                                                         GL_LUMINANCE_ALPHA,
                                                                         GLsizei(frameWidth / 2),
                                                                         GLsizei(frameHeight / 2),
                                                                         GLenum(GL_LUMINANCE_ALPHA),
                                                                         GLenum(GL_UNSIGNED_BYTE),
                                                                         1,
                                                                         &textureUV)
        if ((retUV) != 0) {
            NSLog("CVOpenGLESTextureCacheCreateTextureFromImage retUV: %d", retUV)
            return
        }
        glBindTexture(CVOpenGLESTextureGetTarget(textureUV!), CVOpenGLESTextureGetName(textureUV!))
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)

        // Draw
        renderLayer()
        
    }
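
The two textures are recombined into RGB in the fragment shader. For completeness, here is a minimal sketch of what that shader looks like, written as a Swift string literal; the uniform and varying names are illustrative (not taken verbatim from the demo), and the matrix values are the commonly used BT.601 full-range coefficients that a kColorConversion601FullRange constant typically holds:

// Minimal sketch, not the demo's exact shader: the Y plane arrives in .r of the
// GL_LUMINANCE texture, the UV plane in .r/.a of the GL_LUMINANCE_ALPHA texture.
let fragmentShaderSource = """
precision mediump float;
varying highp vec2 textureCoordinate;
uniform sampler2D luminanceTexture;    // Y plane, bound to GL_TEXTURE0
uniform sampler2D chrominanceTexture;  // UV plane, bound to GL_TEXTURE1
uniform mat3 colorConversionMatrix;    // preferredConversion, uploaded via glUniformMatrix3fv

void main() {
    mediump vec3 yuv;
    yuv.x  = texture2D(luminanceTexture, textureCoordinate).r;  // full-range Y needs no offset
    yuv.yz = texture2D(chrominanceTexture, textureCoordinate).ra - vec2(0.5, 0.5);
    gl_FragColor = vec4(colorConversionMatrix * yuv, 1.0);
}
"""

// Commonly used BT.601 full-range YUV -> RGB coefficients (column-major, as GLSL expects):
let kColorConversion601FullRange: [GLfloat] = [
    1.0,  1.0,   1.0,
    0.0, -0.343, 1.765,
    1.4, -0.711, 0.0
]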

  • Some readers may have a question:

What is the difference between CVOpenGLESTextureCacheCreateTextureFromImage and glTexImage2D? My understanding: glTexImage2D is the standard OpenGL ES API, while CVOpenGLESTextureCacheCreateTextureFromImage is Apple's higher-level wrapper around it, so we can also render by converting the buffer ourselves and uploading it with glTexImage2D. The conversion:

    /// CVPixelBuffer -> UIImage (assumes a kCVPixelFormatType_32BGRA buffer)
    class func pixelBufferToImage(pixelBuffer: CVPixelBuffer, outputSize: CGSize? = nil) -> UIImage? {
        
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
        
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: bytesPerRow,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.noneSkipFirst.rawValue),
            let imageRef = context.makeImage() else
        {
                return nil
        }
        
        let image = UIImage(cgImage: imageRef, scale: 1, orientation: .up)
        CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
        
        // resizedImage(outputSize:) is a helper extension defined elsewhere in the project
        return outputSize != nil ? image.resizedImage(outputSize: outputSize!) : image
    }
    

Addendum, 2022-11-24

The approach above runs through two CGContext decodes, which felt strange to me at the time; I simply didn't know better. While studying further I found a simpler way that skips the UIImage conversion entirely, as shown below.

Pass CVPixelBufferGetBaseAddress(pixelBuffer) directly to glTexImage2D and you're done; this has been updated in the project.
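
A minimal sketch of that direct upload, assuming the capture format is kCVPixelFormatType_32BGRA (the texture setup here is illustrative, not copied from the project):

// Sketch: upload a BGRA CVPixelBuffer straight to a texture with glTexImage2D.
// Assumes bytesPerRow has no padding (bytesPerRow == width * 4); otherwise
// account for it, e.g. by uploading row by row.
var texture: GLuint = 0
glGenTextures(1, &texture)
glBindTexture(GLenum(GL_TEXTURE_2D), texture)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)

CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
glTexImage2D(GLenum(GL_TEXTURE_2D),
             0,
             GL_RGBA,
             GLsizei(CVPixelBufferGetWidth(pixelBuffer)),
             GLsizei(CVPixelBufferGetHeight(pixelBuffer)),
             0,
             GLenum(GL_BGRA),
             GLenum(GL_UNSIGNED_BYTE),
             CVPixelBufferGetBaseAddress(pixelBuffer))
CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)

No CGContext decode is involved; the pixel data is handed to OpenGL as-is.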
