AVFoundation is Apple's high-level framework for working with time-based media on iOS and OS X. It provides a powerful feature set with the tools developers need to build state-of-the-art media applications on Apple platforms. Designed for 64-bit processors, it takes full advantage of multicore hardware and automatically applies hardware acceleration, so most devices run it at optimal performance. It is one of the must-learn frameworks for getting into audio/video development on iOS.
This post continues my notes from studying AVFoundation. The companion Demo (see the Demo link) wraps up a number of utility classes that can be used directly. This article focuses on using AVAssetWriter to package captured buffers into a movie file; usage of the other classes is covered in my other posts.
AVAssetWriter
Once a capture session has been configured with AVCaptureVideoDataOutput and AVCaptureAudioDataOutput, AVCaptureMovieFileOutput can no longer be used to record the output. In that case you can use AVAssetWriter to package the output video and audio sample buffers into a movie file yourself.
Building a writer utility with AVAssetWriter
- AVAssetWriter is configured with one or more AVAssetWriterInput objects (audio, video, and so on). An AVAssetWriterInput is initialized with a mediaType and outputSettings; in outputSettings you can fine-tune the video bit rate, dimensions, keyframe interval, and more, which is a clear advantage of AVAssetWriter over AVAssetExportSession. Each AVAssetWriterInput that has samples appended produces its own AVAssetTrack in the final output.
- Every AVAssetWriterInput expects data in CMSampleBuffer format; data in CVPixelBuffer format must first go through an adaptor before it can be appended.
- Pay close attention to the kCVPixelBufferPixelFormatTypeKey value: it must match the pixel format of the output video stream. In my Demo it is kCVPixelFormatType_32BGRA.
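The article configures bit rate, dimensions, and keyframe interval through outputSettings but never shows the dictionaries themselves. Here is a sketch of what they could look like; the concrete numbers are illustrative assumptions, not values taken from the Demo.

```swift
import AVFoundation

// Hypothetical video settings: H.264, 1080x1920, with a target bit rate
// and keyframe interval supplied via AVVideoCompressionPropertiesKey.
let videoSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1080,
    AVVideoHeightKey: 1920,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 6_000_000,   // target average bit rate (bps)
        AVVideoMaxKeyFrameIntervalKey: 30      // keyframe interval in frames
    ]
]

// Hypothetical audio settings: stereo AAC at 44.1 kHz.
let audioSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVNumberOfChannelsKey: 2,
    AVSampleRateKey: 44_100
]

let writer = MovieWriter(videoSettings: videoSettings, audioSettings: audioSettings)
```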
```swift
import AVFoundation
import CoreImage
import UIKit

protocol MovieWriterDelegate: AnyObject {
    func didWriteMovieSuccess(at url: URL)
    func didWriteMovieFailed()
}

extension MovieWriterDelegate {
    func didWriteMovieSuccess(at url: URL) {}
    func didWriteMovieFailed() {}
}

final class MovieWriter {
    /// Delegate for write callbacks (weak to avoid a retain cycle)
    weak var delegate: MovieWriterDelegate?
    /// Whether writing is in progress
    private(set) var isWritingFlag: Bool = false
    /// Core Image context
    var ciContext: CIContext = {
        let eaglContext = EAGLContext(api: .openGLES2)!
        // NSNull() disables the working color space (a literal `nil` value does not compile here)
        let ciContext = CIContext(eaglContext: eaglContext, options: [CIContextOption.workingColorSpace: NSNull()])
        return ciContext
    }()
    private let videoSettings: [String: Any]?
    private let audioSettings: [String: Any]?
    private let inputPixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor
    private let assetWriter: AVAssetWriter
    private let videoInput: AVAssetWriterInput
    private let audioInput: AVAssetWriterInput
    /// Color space
    private let colorSpace: CGColorSpace = CGColorSpaceCreateDeviceRGB()
    /// Whether the next sample is the first frame
    private var isFirstSampleFlag: Bool = true

    init(videoSettings: [String: Any]?, audioSettings: [String: Any]?) {
        self.videoSettings = videoSettings
        self.audioSettings = audioSettings
        /**
         AVAssetWriter is configured with one or more AVAssetWriterInput objects (audio, video, etc.).
         An AVAssetWriterInput is initialized with a mediaType and outputSettings, where the video
         bit rate, dimensions, keyframe interval and so on can be configured in detail, a clear
         advantage of AVAssetWriter over AVAssetExportSession. Each input that has samples appended
         produces its own AVAssetTrack in the final output.
         A pixel buffer adaptor is used here to append CVPixelBuffer data; it offers optimal
         performance when appending video samples backed by CVPixelBuffer objects.
         */
        // assetWriter
        assetWriter = try! AVAssetWriter(url: WriteUtil.outputURL(), fileType: .mov)
        // Add the videoInput.
        // Every AVAssetWriterInput expects CMSampleBuffer data; CVPixelBuffer data must be
        // adapted through the adaptor before it can be appended.
        videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
        // Optimize for real-time capture
        videoInput.expectsMediaDataInRealTime = true
        videoInput.transform = WriteUtil.writeTransform(for: UIDevice.current.orientation)
        if assetWriter.canAdd(videoInput) {
            assetWriter.add(videoInput)
        } else {
            print("MovieWriter - cannot add video input.")
        }
        // Configure the inputPixelBufferAdaptor; the format must match the output stream (kCVPixelFormatType_32BGRA)
        var videoAttributes: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
                                              kCVPixelBufferOpenGLESCompatibilityKey as String: true]
        if let videoWidth = videoSettings?[AVVideoWidthKey] {
            videoAttributes[kCVPixelBufferWidthKey as String] = videoWidth
        }
        if let videoHeight = videoSettings?[AVVideoHeightKey] {
            videoAttributes[kCVPixelBufferHeightKey as String] = videoHeight
        }
        inputPixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoInput, sourcePixelBufferAttributes: videoAttributes)
        // Add the audioInput
        audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
        // Optimize for real-time capture
        audioInput.expectsMediaDataInRealTime = true
        if assetWriter.canAdd(audioInput) {
            assetWriter.add(audioInput)
        } else {
            print("MovieWriter - cannot add audio input.")
        }
    }

    deinit {
        CQLog("MovieWriter-deinit")
    }
}
```
Starting and stopping writing
- When writing is finished, calling finishWriting packages everything into a movie file.
```swift
// MARK: - Public Func
extension MovieWriter {
    func startWriting() {
        isWritingFlag = true
        isFirstSampleFlag = true
    }

    func stopWriting() {
        isWritingFlag = false
        assetWriter.finishWriting {
            if self.assetWriter.status == .completed {
                // Report the writer's own outputURL; calling WriteUtil.outputURL() here
                // would delete the freshly written file, since that helper removes any
                // existing file at the path before returning it.
                self.delegate?.didWriteMovieSuccess(at: self.assetWriter.outputURL)
            } else {
                self.delegate?.didWriteMovieFailed()
                print("MovieWriter - failed to write the movie.")
            }
        }
    }
}
```
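On the consuming side, a class conforms to MovieWriterDelegate to receive the finished file. The sketch below also saves the movie to the photo library; the PHPhotoLibrary step is my own addition, not part of the Demo.

```swift
import Photos

// Minimal sketch of a delegate consumer. Saving via PHPhotoLibrary is an
// assumption for illustration; it requires photo-library usage permission.
final class RecordingController: MovieWriterDelegate {
    func didWriteMovieSuccess(at url: URL) {
        PHPhotoLibrary.shared().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: url)
        }) { success, error in
            print(success ? "Saved to photo library" : "Save failed: \(String(describing: error))")
        }
    }

    func didWriteMovieFailed() {
        print("Recording failed")
    }
}
```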
Writing
- Call these from the video and audio sample-buffer callbacks to keep appending buffers into the file.
- A CIImage is taken as input because my Demo applies a CIFilter, so the saved movie carries the same filter effect as the preview. If you do the filtering with OpenGL, just take a CVPixelBuffer directly instead.
- AVAssetWriter writes in real time, encoding the processed media and writing it into the container file.
- AVAssetWriterInput expects CMSampleBuffer data, but the pixelBufferAdaptor can adapt other input into what it expects; here we append CVPixelBuffer instead.
```swift
extension MovieWriter {
    func process(image: CIImage, atTime time: CMTime) {
        guard isWritingFlag else { return }
        if isFirstSampleFlag {
            // Start the writing session at the timestamp of the first frame
            if assetWriter.startWriting() {
                assetWriter.startSession(atSourceTime: time)
            } else {
                print("MovieWriter - failed to start writing.")
            }
            isFirstSampleFlag = false
        }
        guard let pixelBufferPool: CVPixelBufferPool = inputPixelBufferAdaptor.pixelBufferPool else {
            print("MovieWriter - pixel buffer pool is nil.")
            return
        }
        // Create a new pixel buffer from the pool
        var outputRenderBuffer: CVPixelBuffer?
        let createResult = CVPixelBufferPoolCreatePixelBuffer(nil, pixelBufferPool, &outputRenderBuffer)
        guard createResult == kCVReturnSuccess, let renderBuffer = outputRenderBuffer else {
            print("MovieWriter - could not get a pixel buffer from the pool.")
            return
        }
        // Render the CIImage into the CVPixelBuffer, then append it
        ciContext.render(image, to: renderBuffer, bounds: image.extent, colorSpace: colorSpace)
        if videoInput.isReadyForMoreMediaData {
            let result = inputPixelBufferAdaptor.append(renderBuffer, withPresentationTime: time)
            if result == false {
                print("MovieWriter - error appending pixel buffer.")
            }
        }
    }

    func process(audioBuffer: CMSampleBuffer) {
        guard isWritingFlag else { return }
        // Drop audio until the first video frame has started the session
        guard isFirstSampleFlag == false else { return }
        if audioInput.isReadyForMoreMediaData {
            let result = audioInput.append(audioBuffer)
            if result == false {
                print("MovieWriter - error appending audio sample buffer.")
            }
        }
    }
}
```
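The process methods above are meant to be driven from the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput sample-buffer callbacks mentioned earlier. A hedged sketch of that glue, with hypothetical names (`CaptureHandler`, `movieWriter`) that are not from the Demo:

```swift
import AVFoundation
import CoreImage

// Hypothetical capture-delegate glue showing where MovieWriter's process
// methods would be called from. The Demo applies a CIFilter at this point;
// this sketch passes the frame through unfiltered.
final class CaptureHandler: NSObject,
    AVCaptureVideoDataOutputSampleBufferDelegate,
    AVCaptureAudioDataOutputSampleBufferDelegate {

    let movieWriter = MovieWriter(videoSettings: nil, audioSettings: nil)

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        if output is AVCaptureVideoDataOutput {
            // Video path: wrap the pixel buffer in a CIImage and append it
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            let image = CIImage(cvPixelBuffer: pixelBuffer)
            movieWriter.process(image: image, atTime: time)
        } else {
            // Audio path: append the sample buffer directly
            movieWriter.process(audioBuffer: sampleBuffer)
        }
    }
}
```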
```swift
import UIKit

struct WriteUtil {
    /**
     Center-crop the source rect to the preview rect's aspect ratio.
     - parameter sourceRect: rect of the source image
     - parameter previewRect: rect of the preview
     - returns: the cropped rect
     */
    static func centerCropImageRect(sourceRect: CGRect, previewRect: CGRect) -> CGRect {
        let sourceAspectRatio: CGFloat = sourceRect.size.width / sourceRect.size.height
        let previewAspectRatio: CGFloat = previewRect.size.width / previewRect.size.height
        // We want to keep the screen size, so crop the video image
        var drawRect = sourceRect
        if sourceAspectRatio > previewAspectRatio {
            // Use the full height of the video image and center-crop the width
            let scaledWidth = drawRect.size.height * previewAspectRatio
            drawRect.origin.x += (drawRect.size.width - scaledWidth) / 2
            drawRect.size.width = scaledWidth
        } else {
            // Use the full width of the video image and center-crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspectRatio) / 2
            drawRect.size.height = drawRect.size.width / previewAspectRatio
        }
        return drawRect
    }

    /**
     Compute the write transform for a device orientation.
     - parameter deviceOrientation: the device orientation
     - returns: the transform to apply when writing
     */
    static func writeTransform(for deviceOrientation: UIDeviceOrientation) -> CGAffineTransform {
        let result: CGAffineTransform
        switch deviceOrientation {
        case .landscapeRight:
            result = CGAffineTransform(rotationAngle: CGFloat.pi)
        case .portraitUpsideDown:
            result = CGAffineTransform(rotationAngle: CGFloat.pi / 2 * 3)
        case .portrait, .faceUp, .faceDown, .unknown:
            result = CGAffineTransform(rotationAngle: CGFloat.pi / 2)
        case .landscapeLeft:
            result = CGAffineTransform.identity
        @unknown default:
            result = CGAffineTransform.identity
        }
        return result
    }

    /// Build the output URL, removing any leftover file from a previous recording
    static func outputURL() -> URL {
        let filePath = NSTemporaryDirectory() + "AVAssetWriter_movie.mov"
        let fileUrl = URL(fileURLWithPath: filePath)
        if FileManager.default.fileExists(atPath: filePath) {
            try? FileManager.default.removeItem(atPath: filePath)
        }
        return fileUrl
    }
}
```