Storing LAB colors as integers


  • XSL

    When using RGB values, .NET has built-in ways to convert a color from an integer, and it's straightforward to convert a color to an int. Is there a way of storing LAB colors as integers instead? Unlike RGB, LAB color values can be negative. I don't want to have to store the colors in RGB and convert them at runtime if I can avoid it, as I have no need for RGB values.


  • mmgp

    So the transformation being done is RGB -> XYZ (using the old 2 degree standard observer and CIE Standard Illuminant D65) -> CIELAB. The code, for reference, for performing that is given below (R, G, B are assumed to be in [0, 1]).

    Considering these transformations starting from 8 bits per channel in RGB, the range for L* is [0, 100], for a* (-85.92, 97.96], and for b* (-107.54, 94.20]. These values are close approximations. In theory, a* and b* are unbounded, but you will find some places that quote limits of ±128, ±110, etc. My suggestion is then to simply add 128 to each value, multiply it by 100, and round to integer (that should be precise enough for a color). Each component can then be represented by a 16-bit unsigned integer, so any L*a*b* triplet packs into a single 64-bit integer. After unpacking, you would divide each value by 100 and subtract 128. If you can keep three signed short integers instead, things get much simpler.
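
    The offset-and-scale packing described above can be sketched like this (a minimal Python sketch; the helper names `pack_lab`/`unpack_lab` are mine, not from the question):

    ```python
    def pack_lab(l, a, b):
        """Pack an L*a*b* triplet into one 64-bit integer.

        Each component is shifted by +128 (so a*/b* become non-negative),
        scaled by 100, and rounded -- giving values well below 2**16.
        """
        parts = [int(round((c + 128) * 100)) for c in (l, a, b)]
        packed = 0
        for p in parts:
            assert 0 <= p < 1 << 16  # each component must fit in 16 bits
            packed = (packed << 16) | p
        return packed

    def unpack_lab(packed):
        """Invert pack_lab, recovering each component to 0.01 precision."""
        parts = []
        for _ in range(3):
            parts.append((packed & 0xFFFF) / 100.0 - 128)
            packed >>= 16
        b, a, l = parts  # components come back out in reverse order
        return l, a, b
    ```

    For example, `unpack_lab(pack_lab(53.23, -20.15, 40.76))` recovers the original triplet to within the 0.01 quantization step.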

    def rgb_xyz(r, g, b):  # 2 deg. observer, D65
        triple = [r, g, b]
        # Inverse sRGB companding, scaled to [0, 100]
        v, d = 0.04045, 12.92
        for i, c in enumerate(triple):
            triple[i] = 100 * (c / d if c <= v else ((c + 0.055) / 1.055) ** 2.4)
        r, g, b = triple
        x = 0.4124 * r + 0.3576 * g + 0.1805 * b
        y = 0.2126 * r + 0.7152 * g + 0.0722 * b
        z = 0.0193 * r + 0.1192 * g + 0.9505 * b
        return x, y, z

    def xyz_lab(x, y, z):  # D65 reference white
        t0, m, c0 = 0.008856, 7.787, 16 / 116.0
        triple = [x / 95.047, y / 100.0, z / 108.883]
        for i, c in enumerate(triple):
            triple[i] = c ** (1 / 3.0) if c > t0 else m * c + c0
        l = 116 * triple[1] - 16
        a = 500 * (triple[0] - triple[1])
        b = 200 * (triple[1] - triple[2])
        return l, a, b
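
    As a quick sanity check, sRGB white should map to L* ≈ 100 with a* and b* near 0. The sketch below is self-contained (it restates the same standard sRGB -> XYZ -> CIELAB formulas in a single function rather than calling the code above):

    ```python
    def srgb_to_lab(r, g, b):
        # Inverse sRGB companding, scaled to [0, 100]
        lin = [100 * (c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4)
               for c in (r, g, b)]
        # sRGB -> XYZ (2 degree observer, D65)
        x = 0.4124 * lin[0] + 0.3576 * lin[1] + 0.1805 * lin[2]
        y = 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]
        z = 0.0193 * lin[0] + 0.1192 * lin[1] + 0.9505 * lin[2]
        # XYZ -> CIELAB (D65 reference white)
        f = [t ** (1 / 3.0) if t > 0.008856 else 7.787 * t + 16 / 116.0
             for t in (x / 95.047, y / 100.0, z / 108.883)]
        return 116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])

    l, a, b = srgb_to_lab(1.0, 1.0, 1.0)
    # l is 100 and a, b are within ~0.02 of 0 (rounding in the matrix/white point)
    ```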