How to get the position of dominant colors in CGImage?

My app needs to find the dominant palette of an image and the position, within the image, of each of the k most dominant colors. I followed the very useful sample project from the Accelerate documentation

https://developer.apple.com/documentation/accelerate/bnns/calculating_the_dominant_colors_in_an_image

and the algorithm works fine, although I can't wrap my head around how I should go about linking those colors to a point in the image. Since the algorithm works by filling channel storages first, I also tried filling an array of CGPoints called locationStorage and working with that:

//filling the array (match the row-major order of the channel
//storages, index = y * width + x, and use half-open ranges to
//avoid running one past the last row and column)
for y in 0..<height {
    for x in 0..<width {
        locationStorage.append(CGPoint(x: x, y: y))
    }
}
.
.
.
//working with the array
let randomIndex = Int.random(in: 0 ..< width * height)

centroids.append(Centroid(red: redStorage[randomIndex],
                          green: greenStorage[randomIndex],
                          blue: blueStorage[randomIndex],
                          position: locationStorage[randomIndex]))


struct Centroid {
    /// The red channel value.
    var red: Float

    /// The green channel value.
    var green: Float

    /// The blue channel value.
    var blue: Float

    /// The number of pixels assigned to this cluster center.
    var pixelCount: Int = 0

    /// The position of the pixel this centroid was seeded from.
    var position: CGPoint = .zero

    init(red: Float, green: Float, blue: Float, position: CGPoint) {
        self.red = red
        self.green = green
        self.blue = blue
        self.position = position
    }
}

but the positions I get aren't accurate.

I also tried brute-force comparing every pixel in the image against each dominant color, but I think that's too slow.

What do you think my approach should be? Let me know if you need additional info. Please be kind, I'm still learning Swift.
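For what it's worth: if the channel storages are filled row-major as in the Apple sample (index = y * width + x), the flat index alone already encodes the pixel's position, so a separate location array may be unnecessary. A minimal sketch (the function name is mine, not from the sample):

```swift
import Foundation

// Recover a pixel's coordinates from its flat, row-major index.
// Assumes index = y * width + x, matching the channel storages.
func position(ofIndex index: Int, width: Int) -> CGPoint {
    CGPoint(x: index % width, y: index / width)
}

let width = 640
let randomIndex = 2 * width + 5   // the pixel at (x: 5, y: 2)
let p = position(ofIndex: randomIndex, width: width)
```

This also keeps memory flat: each Centroid stores one Int instead of needing a parallel array of width * height CGPoints.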

Replies

How do you define "dominant color" precisely? A single specific RGB value would not make a lot of sense.

Once you have defined this dominant color (probably as ranges of R, G, B values: an RGB cube), you have to test whether each pixel falls within this cube.

An idea for a more basic solution:

  • split each color axis (0 to 255) into segments of width 4:
  • 0-3; 4-7; 8-11 …
  • You will have 64 segments per axis.
  • Then you create an array to count how many pixels fall in each of the 64 * 64 * 64 = 262144 cells (if you split by 8 instead, you can reduce this to 32 * 32 * 32 = 32768)
  • var theCube: [Int] = Array(repeating: 0, count: 64 * 64 * 64)
  • a function to compute the position of a color in the array:
func colorIndex(for color: UIColor) -> Int {
    // we return 64*64*r/4 + 64*g/4 + b/4, with r, g, b between 0 and 255
    var index = 0
    if let components = color.cgColor.components {
        if components.count < 3 { // We are grayscale, r, g, b will be equal
            let grayValue = Int(255*components[0] / 4) // We bring grayValue between 0 and 63
            index = (64*64 + 64 + 1) * grayValue
        } else { // components are between 0 and 1, convert to 0...255 first
            index = 64*64*Int(255*components[0] / 4)
            index = index + 64 * Int(255*components[1] / 4)
            index = index + Int(255*components[2] / 4)
        }
    }
    return index
}
  • test:
print(colorIndex(for: UIColor.red))     // 258048
print(colorIndex(for: UIColor.green))   // 4032
print(colorIndex(for: UIColor.blue))    // 63
print(colorIndex(for: UIColor.white))   // 262143 last item in cube
print(colorIndex(for: UIColor.black))   // 0 first item in cube
  • you then test each pixel: compute its colorIndex and increment theCube[index] accordingly
  • the max value in the array marks the dominant color: its index is dominantColorIndex.

Once you have found this dominant color, search for the points whose color falls in that cell, by computing each pixel's colorIndex and checking whether it equals dominantColorIndex.
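The counting and look-up steps above can be sketched in pure Swift, using (r, g, b) byte values directly instead of UIColor so the example is self-contained (the pixel representation and function names here are mine, not from the post above):

```swift
// Histogram over a 64 x 64 x 64 RGB cube (4-wide segments per axis).
func colorIndex(r: Int, g: Int, b: Int) -> Int {
    64 * 64 * (r / 4) + 64 * (g / 4) + (b / 4)
}

// pixels: a flat, row-major array of (r, g, b) values in 0...255.
func dominantColorIndex(of pixels: [(r: Int, g: Int, b: Int)]) -> Int {
    var theCube = [Int](repeating: 0, count: 64 * 64 * 64)
    for p in pixels {
        theCube[colorIndex(r: p.r, g: p.g, b: p.b)] += 1
    }
    // The fullest cell is the dominant color's cell.
    return theCube.indices.max(by: { theCube[$0] < theCube[$1] })!
}

// Flat indices (hence positions) of the pixels in the dominant cell.
func dominantPixelIndices(of pixels: [(r: Int, g: Int, b: Int)]) -> [Int] {
    let dominant = dominantColorIndex(of: pixels)
    return pixels.indices.filter {
        colorIndex(r: pixels[$0].r, g: pixels[$0].g, b: pixels[$0].b) == dominant
    }
}
```

Each returned flat index converts back to a point as (index % width, index / width), assuming row-major pixel order.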

Hope that helps.

  • I have implemented a demo; it is reasonably efficient even though not optimized.

    For a small image of 600 * 400, it takes 1 or 2 seconds to process; for a large one (3000 * 2000), it is about 30 s (on an iPhone 15 class device).


As an alternative, you could use the Core Image kMeans filter; the code is quite a bit simpler.

https://developer.apple.com/documentation/coreimage/cifilter/3547110-kmeansfilter

The example code there shows how to run it and generate an image that has been mapped to the palette colors.

Once you've done that, you can run through each pixel in the mapped image and compare it with the palette colors.
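That comparison step can be sketched as a nearest-palette lookup. This is plain Swift with a hypothetical (r, g, b) pixel representation; since the kMeans-mapped image's pixels should match a palette entry exactly, squared-distance matching is simply a robust way to make the comparison:

```swift
// Index of the palette color closest to a pixel, by squared RGB distance.
func nearestPaletteIndex(of pixel: (r: Int, g: Int, b: Int),
                         in palette: [(r: Int, g: Int, b: Int)]) -> Int {
    func squaredDistance(_ a: (r: Int, g: Int, b: Int),
                         _ b: (r: Int, g: Int, b: Int)) -> Int {
        let dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b
        return dr * dr + dg * dg + db * db
    }
    return palette.indices.min {
        squaredDistance(pixel, palette[$0]) < squaredDistance(pixel, palette[$1])
    }!
}
```

Walking the mapped image with this function gives you, for every pixel position, which palette color it belongs to.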