I attempted to detect the dominant color of an image using CIAreaMaximum, but I always get white (rgb: 255, 255, 255) as the result, no matter which picture I input. Do I misunderstand what CIAreaMaximum does? Any help would be appreciated.
import SwiftUI

struct ContentView: View {
    let img = UIImage(named: "leaf") ?? UIImage()

    var body: some View {
        VStack {
            Image(uiImage: img)
                .resizable()
                .scaledToFit()
            Text("max")
            Color(uiColor: img.maxColor ?? .systemPink)
                .frame(height: 50)
                .border(.red)
        }
        .padding()
    }
}
extension UIImage {
    /// The per-channel maximum color of the image, computed with CIAreaMaximum.
    var maxColor: UIColor? {
        guard let inputImage = CIImage(image: self) else { return nil }
        // Run the filter over the full image extent.
        let extentVector = CIVector(x: inputImage.extent.origin.x,
                                    y: inputImage.extent.origin.y,
                                    z: inputImage.extent.size.width,
                                    w: inputImage.extent.size.height)
        guard let filter = CIFilter(name: "CIAreaMaximum", parameters: [
            kCIInputImageKey: inputImage,
            kCIInputExtentKey: extentVector
        ]) else { return nil }
        guard let outputImage = filter.outputImage else { return nil }

        // Render the 1x1 result into a 4-byte RGBA buffer.
        var bitmap = [UInt8](repeating: 0, count: 4)
        let context = CIContext(options: [.workingColorSpace: kCFNull as Any])
        context.render(outputImage, toBitmap: &bitmap, rowBytes: 4,
                       bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                       format: .RGBA8, colorSpace: nil)
        print("rgba", bitmap[0], bitmap[1], bitmap[2], bitmap[3])
        return UIColor(red: CGFloat(bitmap[0]) / 255,
                       green: CGFloat(bitmap[1]) / 255,
                       blue: CGFloat(bitmap[2]) / 255,
                       alpha: CGFloat(bitmap[3]) / 255)
    }
}
The CIAreaMaximum filter calculates the maximum value of each color channel separately. That means if your image contains, for example, a pure red, a pure green, and a pure blue pixel, the per-channel maxima are (255, 255, 255), and the result is a white pixel.
If you want to find the most common color in the image instead, you can use the CIKMeans filter, as @HangarRash pointed out. Just set its count parameter to 1 to get a single color.
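A minimal sketch of that approach could look like the following. The parameter keys (inputCount, inputPasses, inputPerceptual) are the documented keys of CIKMeans; the use of settingAlphaOne is an assumption to ensure the cluster color is readable with full alpha:

```swift
import CoreImage

// Sketch: find the single most dominant color of `inputImage` using CIKMeans.
func dominantColor(of inputImage: CIImage, context: CIContext) -> CIColor? {
    let extent = CIVector(cgRect: inputImage.extent)
    guard let filter = CIFilter(name: "CIKMeans", parameters: [
        kCIInputImageKey: inputImage,
        kCIInputExtentKey: extent,
        "inputCount": 1,      // we only want one cluster (the major color)
        "inputPasses": 5,     // more passes refine the cluster centers
        "inputPerceptual": 1  // cluster in a perceptual color space
    ]) else { return nil }
    guard var output = filter.outputImage else { return nil }
    // Force alpha to 1 before reading the pixel (assumption: the cluster
    // color's alpha is not guaranteed to be 1).
    output = output.settingAlphaOne(in: output.extent)

    // The output is a 1x1 image; render it into a 4-byte RGBA buffer.
    var bitmap = [UInt8](repeating: 0, count: 4)
    context.render(output, toBitmap: &bitmap, rowBytes: 4,
                   bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                   format: .RGBA8, colorSpace: nil)
    return CIColor(red: CGFloat(bitmap[0]) / 255,
                   green: CGFloat(bitmap[1]) / 255,
                   blue: CGFloat(bitmap[2]) / 255,
                   alpha: CGFloat(bitmap[3]) / 255)
}
```

You could call this from the maxColor property above (or a new dominantColor property) instead of running CIAreaMaximum.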
The outputImage of CIKMeans will be of size count x 1, with each pixel representing one major color of your input. So if you want the two most dominant colors, set count to 2, increase your bitmap array size from 4 to 8, and adjust rowBytes and bounds in the render call accordingly.
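As a sketch, reading an arbitrary number of cluster colors from the CIKMeans output could look like this (the helper name readColors is hypothetical):

```swift
import CoreImage
import UIKit

// Sketch: read `count` major colors from a CIKMeans output image,
// which is `count` pixels wide and 1 pixel tall.
func readColors(from outputImage: CIImage, count: Int, context: CIContext) -> [UIColor] {
    var bitmap = [UInt8](repeating: 0, count: 4 * count)  // 4 bytes (RGBA) per pixel
    context.render(outputImage, toBitmap: &bitmap,
                   rowBytes: 4 * count,  // one row of `count` pixels
                   bounds: CGRect(x: 0, y: 0, width: count, height: 1),
                   format: .RGBA8, colorSpace: nil)
    // Convert each 4-byte RGBA group into a UIColor.
    return stride(from: 0, to: bitmap.count, by: 4).map { i in
        UIColor(red: CGFloat(bitmap[i]) / 255,
                green: CGFloat(bitmap[i + 1]) / 255,
                blue: CGFloat(bitmap[i + 2]) / 255,
                alpha: CGFloat(bitmap[i + 3]) / 255)
    }
}
```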
Alternatively, you can use our small CoreImageExtensions package to read the colors like this:

let majorColors = context.readFloat32PixelValue(from: outputImage, in: outputImage.extent)
let majorUIColors = majorColors.map { UIColor(red: $0.r, green: $0.g, blue: $0.b, alpha: $0.a) }
I can also recommend downloading the free app Filter Magic from the Mac App Store and using it to experiment with the various filters and their parameters.