I've got some code that resizes an image so I can get a scaled chunk of the center of the image - I use this to take a UIImage and return a small, square representation of it, similar to what's seen in the album view of the Photos app. (I know I could use a UIImageView and adjust the crop mode to achieve the same result, but these images are sometimes displayed in UIWebViews.)

I've started noticing some crashes in this code and I'm a bit stumped. I have two different theories and I'm not sure which one is right.

Theory 1) I achieve the cropping by drawing into an offscreen image context of my target size. Since I want the center portion of the image, I set the CGRect argument passed to drawInRect: to something larger than the bounds of my image context. I was hoping that was kosher, but am I instead trampling over other memory that I shouldn't be touching?

Theory 2) I'm doing all of this on a background thread. I know there are parts of UIKit that are restricted to the main thread. I was assuming / hoping that drawing to an offscreen view wasn't one of them. Am I wrong?

(Oh, how I miss NSImage's drawInRect:fromRect:operation:fraction: method.)
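For what it's worth, here is a minimal Swift sketch of the overdraw approach described above (the helper name centerSquareThumbnail is my own, not taken from any answer below). On the threading question: as of iOS 4, Apple documents drawing into a UIKit image context from a background thread as safe, so a helper like this can run off the main thread.

import UIKit

// A minimal sketch of the "overdraw" technique from the question: draw the
// image into a square offscreen context using a rect larger than the context,
// so only the centered portion survives.
func centerSquareThumbnail(from image: UIImage, side: CGFloat) -> UIImage? {
    // Scale so the shorter dimension exactly fills the square.
    let scale = max(side / image.size.width, side / image.size.height)
    let drawSize = CGSize(width: image.size.width * scale,
                          height: image.size.height * scale)
    // Negative origin components push the excess outside the context bounds.
    let drawRect = CGRect(x: (side - drawSize.width) / 2.0,
                          y: (side - drawSize.height) / 2.0,
                          width: drawSize.width,
                          height: drawSize.height)

    UIGraphicsBeginImageContextWithOptions(CGSize(width: side, height: side), true, 0)
    defer { UIGraphicsEndImageContext() }
    image.draw(in: drawRect)
    return UIGraphicsGetImageFromCurrentImageContext()
}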


Current answer

Take a look at https://github.com/vvbogdan/BVCropPhoto

- (UIImage *)croppedImage {
    CGFloat scale = self.sourceImage.size.width / self.scrollView.contentSize.width;

    UIImage *finalImage = nil;
    CGRect targetFrame = CGRectMake((self.scrollView.contentInset.left + self.scrollView.contentOffset.x) * scale,
            (self.scrollView.contentInset.top + self.scrollView.contentOffset.y) * scale,
            self.cropSize.width * scale,
            self.cropSize.height * scale);

    CGImageRef contextImage = CGImageCreateWithImageInRect([[self imageWithRotation:self.sourceImage] CGImage], targetFrame);

    if (contextImage != NULL) {
        finalImage = [UIImage imageWithCGImage:contextImage
                                         scale:self.sourceImage.scale
                                   orientation:UIImageOrientationUp];

        CGImageRelease(contextImage);
    }

    return finalImage;
}


- (UIImage *)imageWithRotation:(UIImage *)image {


    if (image.imageOrientation == UIImageOrientationUp) return image;
    CGAffineTransform transform = CGAffineTransformIdentity;

    switch (image.imageOrientation) {
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, image.size.width, image.size.height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;

        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            transform = CGAffineTransformTranslate(transform, image.size.width, 0);
            transform = CGAffineTransformRotate(transform, M_PI_2);
            break;

        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, 0, image.size.height);
            transform = CGAffineTransformRotate(transform, -M_PI_2);
            break;
        case UIImageOrientationUp:
        case UIImageOrientationUpMirrored:
            break;
    }

    switch (image.imageOrientation) {
        case UIImageOrientationUpMirrored:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, image.size.width, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;

        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, image.size.height, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        case UIImageOrientationUp:
        case UIImageOrientationDown:
        case UIImageOrientationLeft:
        case UIImageOrientationRight:
            break;
    }

    // Now we draw the underlying CGImage into a new context, applying the transform
    // calculated above.
    CGContextRef ctx = CGBitmapContextCreate(NULL, image.size.width, image.size.height,
            CGImageGetBitsPerComponent(image.CGImage), 0,
            CGImageGetColorSpace(image.CGImage),
            CGImageGetBitmapInfo(image.CGImage));
    CGContextConcatCTM(ctx, transform);
    switch (image.imageOrientation) {
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            // Grr...
            CGContextDrawImage(ctx, CGRectMake(0, 0, image.size.height, image.size.width), image.CGImage);
            break;

        default:
            CGContextDrawImage(ctx, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
            break;
    }

    // And now we just create a new UIImage from the drawing context
    CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
    UIImage *img = [UIImage imageWithCGImage:cgimg];
    CGContextRelease(ctx);
    CGImageRelease(cgimg);
    return img;

}

Other answers

CGSize size = [originalImage size];
int padding = 20;
int pictureSize = 300;
int startCroppingPosition = 100;
if (size.height > size.width) {
    pictureSize = size.width - (2.0 * padding);
    startCroppingPosition = (size.height - pictureSize) / 2.0; 
} else {
    pictureSize = size.height - (2.0 * padding);
    startCroppingPosition = (size.width - pictureSize) / 2.0;
}
// WTF: Don't forget that the CGImageCreateWithImageInRect believes that 
// the image is 180 rotated, so x and y are inverted, same for height and width.
CGRect cropRect = CGRectMake(startCroppingPosition, padding, pictureSize, pictureSize);
CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage], cropRect);
UIImage *newImage = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:originalImage.imageOrientation];
[m_photoView setImage:newImage];
CGImageRelease(imageRef);

Most of the answers I've seen only deal with an (x, y) position of (0, 0). That's one case, but I wanted my cropping operation to be centered. It took me a while to figure out the line that follows the WTF comment.

Let's take the example of an image captured in portrait:

- The original image is taller than it is wide (wow, nothing surprising so far!).
- The image that CGImageCreateWithImageInRect imagines in its own little world is not actually a portrait but a landscape (which is also why, if you don't use the orientation argument of the imageWithCGImage constructor, it shows up rotated by 180).
- You have to picture it as a landscape, with the (0, 0) position at the top-right corner of the image.

Hopefully that makes sense! If it doesn't, try different values and you'll see that the logic is inverted when choosing the right x, y, width, and height for your cropRect.
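To make that concrete, here is a small Swift sketch (my own illustration, assuming image.scale == 1) of how a crop rect expressed in the displayed portrait space maps into the raw CGImage space for a .right-oriented photo:

import UIKit

// Illustration only: convert a crop rect from the displayed (portrait) space
// into the raw CGImage (landscape) space for a .right-oriented photo.
// Assumes image.scale == 1; otherwise multiply the result by image.scale.
func rawCropRect(for displayRect: CGRect, in image: UIImage) -> CGRect {
    guard image.imageOrientation == .right else { return displayRect }
    // x/y and width/height swap roles in the raw bitmap's coordinate space.
    return CGRect(x: displayRect.origin.y,
                  y: image.size.width - displayRect.maxX,
                  width: displayRect.height,
                  height: displayRect.width)
}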

Swift 3 version

func cropImage(imageToCrop:UIImage, toRect rect:CGRect) -> UIImage{
    
    let imageRef:CGImage = imageToCrop.cgImage!.cropping(to: rect)!
    let cropped:UIImage = UIImage(cgImage:imageRef)
    return cropped
}


let imageTop:UIImage  = UIImage(named:"one.jpg")! // add validation

With the help of this bridging function CGRectMake -> CGRect (credit to this answer by @rob mayoff):

 func CGRectMake(_ x: CGFloat, _ y: CGFloat, _ width: CGFloat, _ height: CGFloat) -> CGRect {
    return CGRect(x: x, y: y, width: width, height: height)
}

Usage is:

if let image: UIImage = UIImage(named: "one.jpg") {
   let  croppedImage = cropImage(imageToCrop: image, toRect: CGRectMake(
        image.size.width/4,
        0,
        image.size.width/2,
        image.size.height)
    )
}


The following code snippet might help.

import UIKit

extension UIImage {
    func cropImage(toRect rect: CGRect) -> UIImage? {
        if let imageRef = self.cgImage?.cropping(to: rect) {
            return UIImage(cgImage: imageRef)
        }
        return nil
    }
}
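A possible usage of this extension (the asset name and rect here are just placeholders):

// Hypothetical usage: crop a 100×100 square from the top-left corner.
let avatar = UIImage(named: "avatar.jpg")
let thumbnail = avatar?.cropImage(toRect: CGRect(x: 0, y: 0, width: 100, height: 100))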

The best solution for cropping a UIImage in Swift, in terms of precision and pixel scale…:

private func squareCropImageToSideLength(_ sourceImage: UIImage,
                                         sideLength: CGFloat) -> UIImage {
    // input size comes from image
    let inputSize: CGSize = sourceImage.size

    // round up side length to avoid fractional output size
    let sideLength: CGFloat = ceil(sideLength)

    // output size has sideLength for both dimensions
    let outputSize = CGSize(width: sideLength, height: sideLength)

    // calculate scale so that smaller dimension fits sideLength
    let scale: CGFloat = max(sideLength / inputSize.width,
                             sideLength / inputSize.height)

    // scaling the image with this scale results in this output size
    let scaledInputSize = CGSize(width: inputSize.width * scale,
                                 height: inputSize.height * scale)

    // determine point in center of "canvas"
    let center = CGPoint(x: outputSize.width / 2.0,
                         y: outputSize.height / 2.0)

    // calculate drawing rect relative to output size
    let outputRect = CGRect(x: center.x - scaledInputSize.width / 2.0,
                            y: center.y - scaledInputSize.height / 2.0,
                            width: scaledInputSize.width,
                            height: scaledInputSize.height)

    // begin a new bitmap context, scale 0 takes display scale
    UIGraphicsBeginImageContextWithOptions(outputSize, true, 0)

    // optional: set the interpolation quality.
    // For this you need to grab the underlying CGContext
    if let ctx = UIGraphicsGetCurrentContext() {
        ctx.interpolationQuality = .high
    }

    // draw the source image into the calculated rect
    sourceImage.draw(in: outputRect)

    // create new image from bitmap context
    let outImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!

    // clean up
    UIGraphicsEndImageContext()

    // pass back new image
    return outImage
}

To call this function:

let image: UIImage = UIImage(named: "Image.jpg")!
let squareImage: UIImage = self.squareCropImageToSideLength(image, sideLength: 320)
self.myUIImageView.image = squareImage

Note: the original source inspiration, written in Objective-C, can be found on the Cocoanetics blog.

Swift version of wolf's answer, which works for me:

public extension UIImage {
    func croppedImage(inRect rect: CGRect) -> UIImage {
        let rad: (Double) -> CGFloat = { deg in
            return CGFloat(deg / 180.0 * .pi)
        }
        var rectTransform: CGAffineTransform
        switch imageOrientation {
        case .left:
            let rotation = CGAffineTransform(rotationAngle: rad(90))
            rectTransform = rotation.translatedBy(x: 0, y: -size.height)
        case .right:
            let rotation = CGAffineTransform(rotationAngle: rad(-90))
            rectTransform = rotation.translatedBy(x: -size.width, y: 0)
        case .down:
            let rotation = CGAffineTransform(rotationAngle: rad(-180))
            rectTransform = rotation.translatedBy(x: -size.width, y: -size.height)
        default:
            rectTransform = .identity
        }
        rectTransform = rectTransform.scaledBy(x: scale, y: scale)
        let transformedRect = rect.applying(rectTransform)
        let imageRef = cgImage!.cropping(to: transformedRect)!
        let result = UIImage(cgImage: imageRef, scale: scale, orientation: imageOrientation)
        return result
    }
}
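Possible usage, with a placeholder asset name and a rect (in points) that takes the centered square:

// Example: crop the central square out of a photo, assuming "photo.jpg"
// exists in the bundle and the extension above is in scope.
if let photo = UIImage(named: "photo.jpg") {
    let side = min(photo.size.width, photo.size.height)
    let rect = CGRect(x: (photo.size.width - side) / 2.0,
                      y: (photo.size.height - side) / 2.0,
                      width: side,
                      height: side)
    let squared = photo.croppedImage(inRect: rect)
    print(squared.size)
}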