I have some code that resizes an image so I can get a scaled chunk of its center. I use this to take a UIImage and return a small, square representation of it, similar to what you see in the album view of the Photos app. (I know I could use a UIImageView and adjust the crop mode to achieve the same result, but these images are sometimes displayed in UIWebViews.)

I've started to notice occasional crashes in this code, and I'm a bit stumped. I have two different theories and I'm not sure which one is right.

Theory 1) I do the cropping by drawing into an off-screen image context of my target size. Since I want the center portion of the image, I set the CGRect argument passed to drawInRect: to something larger than the bounds of the image context. I was hoping that was kosher, but am I instead trying to scribble over other memory I shouldn't be touching?

Theory 2) I'm doing all of this on a background thread. I know parts of UIKit are restricted to the main thread. I was assuming / hoping that drawing into an off-screen context wasn't one of them. Am I wrong?

(Oh, how I miss NSImage's drawInRect:fromRect:operation:fraction: method.)
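
For reference, a minimal Swift sketch of the kind of off-screen-context center crop described in Theory 1 (an illustrative reconstruction, not the asker's actual code; the function name and signature are made up):

import UIKit

// Draw the source image into an off-screen context of the target size, using a
// draw rect larger than the context bounds so only the centered square remains.
func centerSquareThumbnail(of source: UIImage, side: CGFloat) -> UIImage? {
    let inputSize = source.size
    // Scale so the smaller dimension exactly fills the square.
    let scale = max(side / inputSize.width, side / inputSize.height)
    let scaledSize = CGSize(width: inputSize.width * scale,
                            height: inputSize.height * scale)
    // Center the (possibly larger) draw rect; the overflow is simply clipped.
    let drawRect = CGRect(x: (side - scaledSize.width) / 2.0,
                          y: (side - scaledSize.height) / 2.0,
                          width: scaledSize.width,
                          height: scaledSize.height)

    UIGraphicsBeginImageContextWithOptions(CGSize(width: side, height: side), true, 0)
    defer { UIGraphicsEndImageContext() }
    source.draw(in: drawRect)
    return UIGraphicsGetImageFromCurrentImageContext()
}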


Current answer

You can create a UIImage category and use it wherever you need it. It's based on HitScan's answer and the comments below it.

@implementation UIImage (Crop)

- (UIImage *)crop:(CGRect)rect {

    // Convert the crop rect from points to pixels so it matches the
    // coordinate space of the backing CGImage.
    rect = CGRectMake(rect.origin.x * self.scale,
                      rect.origin.y * self.scale,
                      rect.size.width * self.scale,
                      rect.size.height * self.scale);

    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef
                                          scale:self.scale
                                    orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}

@end

You can use it like this:

UIImage *imageToCrop = <yourImageToCrop>;
CGRect cropRect = <areaYouWantToCrop>;   

//for example
//CGRectMake(0, 40, 320, 100);

UIImage *croppedImage = [imageToCrop crop:cropRect];

Other answers

The best solution for cropping a UIImage in Swift, in terms of precision and pixel scaling:

private func squareCropImageToSideLength(let sourceImage: UIImage,
    let sideLength: CGFloat) -> UIImage {
        // input size comes from image
        let inputSize: CGSize = sourceImage.size

        // round up side length to avoid fractional output size
        let sideLength: CGFloat = ceil(sideLength)

        // output size has sideLength for both dimensions
        let outputSize: CGSize = CGSizeMake(sideLength, sideLength)

        // calculate scale so that smaller dimension fits sideLength
        let scale: CGFloat = max(sideLength / inputSize.width,
            sideLength / inputSize.height)

        // scaling the image with this scale results in this output size
        let scaledInputSize: CGSize = CGSizeMake(inputSize.width * scale,
            inputSize.height * scale)

        // determine point in center of "canvas"
        let center: CGPoint = CGPointMake(outputSize.width/2.0,
            outputSize.height/2.0)

        // calculate drawing rect relative to output Size
        let outputRect: CGRect = CGRectMake(center.x - scaledInputSize.width/2.0,
            center.y - scaledInputSize.height/2.0,
            scaledInputSize.width,
            scaledInputSize.height)

        // begin a new bitmap context, scale 0 takes display scale
        UIGraphicsBeginImageContextWithOptions(outputSize, true, 0)

        // optional: set the interpolation quality.
        // For this you need to grab the underlying CGContext
        let ctx: CGContextRef = UIGraphicsGetCurrentContext()
        CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh)

        // draw the source image into the calculated rect
        sourceImage.drawInRect(outputRect)

        // create new image from bitmap context
        let outImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()

        // clean up
        UIGraphicsEndImageContext()

        // pass back new image
        return outImage
}

You call the function like this:

let image: UIImage = UIImage(named: "Image.jpg")!
let squareImage: UIImage = self.squareCropImageToSideLength(image, sideLength: 320)
self.myUIImageView.image = squareImage

Note: the original source inspiration was written in Objective-C and can be found on the Cocoanetics blog.

To crop retina images while keeping the same scale and orientation, use the following method in a UIImage category (iOS 4.0 and later):

- (UIImage *)crop:(CGRect)rect {
    if (self.scale > 1.0f) {
        rect = CGRectMake(rect.origin.x * self.scale,
                          rect.origin.y * self.scale,
                          rect.size.width * self.scale,
                          rect.size.height * self.scale);
    }

    CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}

A Swift version of wolf's answer that worked for me:

public extension UIImage {
    func croppedImage(inRect rect: CGRect) -> UIImage {
        let rad: (Double) -> CGFloat = { deg in
            return CGFloat(deg / 180.0 * .pi)
        }
        var rectTransform: CGAffineTransform
        switch imageOrientation {
        case .left:
            let rotation = CGAffineTransform(rotationAngle: rad(90))
            rectTransform = rotation.translatedBy(x: 0, y: -size.height)
        case .right:
            let rotation = CGAffineTransform(rotationAngle: rad(-90))
            rectTransform = rotation.translatedBy(x: -size.width, y: 0)
        case .down:
            let rotation = CGAffineTransform(rotationAngle: rad(-180))
            rectTransform = rotation.translatedBy(x: -size.width, y: -size.height)
        default:
            rectTransform = .identity
        }
        rectTransform = rectTransform.scaledBy(x: scale, y: scale)
        let transformedRect = rect.applying(rectTransform)
        let imageRef = cgImage!.cropping(to: transformedRect)!
        let result = UIImage(cgImage: imageRef, scale: scale, orientation: imageOrientation)
        return result
    }
}
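
A possible call site (the image name and rect are only placeholders):

let photo = UIImage(named: "Image.jpg")!
// Crop a 100-point-tall strip starting 40 points from the top; the method maps
// the rect through scale and orientation before cropping the underlying CGImage.
let strip = photo.croppedImage(inRect: CGRect(x: 0, y: 40, width: 320, height: 100))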

It may look a bit odd, but it works well and takes image orientation into account:

var image:UIImage = ...

let img = CIImage(image: image)!.imageByCroppingToRect(rect)
image = UIImage(CIImage: img, scale: 1, orientation: image.imageOrientation)
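
In current Swift, roughly the same idea could be written as a small helper (a sketch; the function name is made up, and note that the rect is interpreted in the CIImage's bottom-left-origin coordinate space):

import UIKit
import CoreImage

// Crop via Core Image and rewrap the result as a UIImage, preserving scale
// and orientation of the original.
func ciCrop(_ image: UIImage, to rect: CGRect) -> UIImage? {
    guard let ci = CIImage(image: image) else { return nil }
    let cropped = ci.cropped(to: rect)
    return UIImage(ciImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}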

Here's an updated Swift 3 version based on Noodles' answer:

extension UIImage {

    func cropping(to rect: CGRect) -> UIImage? {

        if let cgCrop = cgImage?.cropping(to: rect) {
            return UIImage(cgImage: cgCrop)
        }
        else if let ciCrop = ciImage?.cropping(to: rect) {
            return UIImage(ciImage: ciCrop)
        }

        return nil
    }
}