One-Tap Image Cutout with the iOS AI (Vision) APIs


This is mainly an exercise in learning the iOS Vision framework.

Results

Original image (downloaded from the web; it will be taken down on notice if it infringes)

The cut-out image

The fine details are still rough; the face got cut off 😂

Implementation

1. The Vision framework's VNGenerateObjectnessBasedSaliencyImageRequest returns an image's salient regions. Note that the ObjectnessBased variant is used here: it detects saliency based on objects, as opposed to the attention-based variant, which models where a viewer's gaze is drawn.

The resulting heat map


CIImage *ciOriginImg = [CIImage imageWithCGImage:originImage.CGImage]; // original image

VNImageRequestHandler *imageHandler = [[VNImageRequestHandler alloc] initWithCIImage:ciOriginImg
                                                                             options:@{}];

// Object-based saliency detection request
VNGenerateObjectnessBasedSaliencyImageRequest *saliencyRequest = [[VNGenerateObjectnessBasedSaliencyImageRequest alloc] init];
NSError *err = nil;
BOOL performed = [imageHandler performRequests:@[saliencyRequest] error:&err];
if (performed && saliencyRequest.results.count > 0) {
    VNSaliencyImageObservation *observation = saliencyRequest.results.firstObject;
    // observation.pixelBuffer holds the saliency heat map;
    // next, run edge detection on it
    [self heatMapProcess:observation.pixelBuffer catOrigin:ciOriginImg];
}
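The heat map shown above can be previewed directly by converting the observation's pixel buffer into a UIImage; a minimal sketch (the heatPreview and previewImageView names are illustrative, not from the original code):

CIImage *heatCI = [CIImage imageWithCVPixelBuffer:observation.pixelBuffer];
UIImage *heatPreview = [UIImage imageWithCIImage:heatCI];
// e.g. previewImageView.image = heatPreview;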

2. Once the saliency heat map is available, run VNDetectContoursRequest on it to detect the edges of the salient region. The contour data is stored in the VNContoursObservation's normalizedPath property, a CGPathRef holding a series of normalized points that must ultimately be converted into image coordinates.

// Inside -heatMapProcess:(CVPixelBufferRef)hotRef catOrigin:(CIImage *)catOrigin
CIImage *heatImage = [CIImage imageWithCVPixelBuffer:hotRef];

VNDetectContoursRequest *contourRequest = [[VNDetectContoursRequest alloc] init];
contourRequest.revision = VNDetectContourRequestRevision1;
contourRequest.contrastAdjustment = 1.0;
contourRequest.detectDarkOnLight = NO; // the heat map is bright on a dark background
contourRequest.maximumImageDimension = 512;

VNImageRequestHandler *handler = [[VNImageRequestHandler alloc] initWithCIImage:heatImage options:@{}];
NSError *err = nil;
BOOL result = [handler performRequests:@[contourRequest] error:&err];
if (result && contourRequest.results.count > 0) {
    VNContoursObservation *contoursObv = contourRequest.results.firstObject;
    CIContext *cxt = [[CIContext alloc] initWithOptions:nil];
    CGImageRef origin = [cxt createCGImage:catOrigin
                                  fromRect:catOrigin.extent];
    // Cut out the salient region
    UIImage *clipImage = [self drawContourWith:contoursObv
                                     withCgImg:NULL
                                     originImg:origin];
    CGImageRelease(origin);
}

3. Finally, the salient content is extracted by setting the contour path as a mask on the image view's layer.

The cutout


- (UIImage *)drawContourWith:(VNContoursObservation *)contourObv
                   withCgImg:(CGImageRef)img
                   originImg:(CGImageRef)origin {
    CGSize size = CGSizeMake(CGImageGetWidth(origin), CGImageGetHeight(origin));
    UIImageView *originImgV = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, size.width, size.height)];
    originImgV.image = [UIImage imageWithCGImage:origin];

    CAShapeLayer *layer = [CAShapeLayer layer];
    // Vision's normalized path has its origin at the bottom-left; flip it for UIKit...
    CGAffineTransform flipMatrix = CGAffineTransformMake(1, 0, 0, -1, 0, size.height);
    // ...and scale the normalized (0–1) coordinates up to the image size
    CGAffineTransform scaleTransform = CGAffineTransformScale(flipMatrix, size.width, size.height);
    CGPathRef scaledPath = CGPathCreateCopyByTransformingPath(contourObv.normalizedPath, &scaleTransform);
    layer.path = scaledPath;
    CGPathRelease(scaledPath);
    [originImgV.layer setMask:layer];

    UIGraphicsBeginImageContext(originImgV.bounds.size);
    [originImgV.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Crop the masked result down to the bounding box of the contour.
    // Scale without the flip here: CIImage already uses bottom-left-origin coordinates.
    CGAffineTransform originScale = CGAffineTransformMakeScale(size.width, size.height);
    CGPathRef originScalePath = CGPathCreateCopyByTransformingPath(contourObv.normalizedPath, &originScale);
    CGRect targetRect = CGPathGetBoundingBox(originScalePath);
    CGPathRelease(originScalePath);

    CIImage *getBoundImage = [[CIImage alloc] initWithImage:image];
    CIImage *targetBoundImg = [getBoundImage imageByCroppingToRect:targetRect];
    return [UIImage imageWithCIImage:targetBoundImg];
}

Update: on iOS 17 and later, cutouts based on VNGenerateForegroundInstanceMaskRequest produce better results and are simpler to implement.

VNGenerateForegroundInstanceMaskRequest *request = [[VNGenerateForegroundInstanceMaskRequest alloc] init];
request.revision = VNGenerateForegroundInstanceMaskRequestRevision1;

VNImageRequestHandler *instanceHandler = [[VNImageRequestHandler alloc] initWithCGImage:image.CGImage options:@{}];
NSError *error = nil;
BOOL ret = [instanceHandler performRequests:@[request] error:&error];
if (!ret || error) {
    return;
}

VNInstanceMaskObservation *instanceMask = request.results.count > 0 ? request.results.firstObject : nil;
if (!instanceMask) {
    return;
}

// Generate the masked image for every detected foreground instance,
// cropped to the instances' extent
CVPixelBufferRef cropBuffer = [instanceMask generateMaskedImageOfInstances:instanceMask.allInstances
                                                        fromRequestHandler:instanceHandler
                                                  croppedToInstancesExtent:YES
                                                                     error:&error];
// The extracted image
UIImage *clipImage = [[UIImage alloc] initWithCIImage:[CIImage imageWithCVPixelBuffer:cropBuffer]];
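
VNGenerateForegroundInstanceMaskRequest only exists on iOS 17 and later, so the call site should be guarded at runtime. A minimal sketch, assuming the snippet above is wrapped in a hypothetical -cutoutForegroundOfImage: helper (the helper name is illustrative, not from the original code):

if (@available(iOS 17.0, *)) {
    UIImage *clipImage = [self cutoutForegroundOfImage:image];
    // use clipImage ...
} else {
    // Fall back to the saliency + contour approach described above
}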