# Camera basics – Why are the color spaces we have access to incomplete?

## First Question

If all colors are combinations of red, green and blue, and my monitor’s pixels use all three, why is its color space limited to so small a portion of the actual complete color space? What colors are we not seeing and why?

The answer to this question is (relatively) simple. I’m going to reference the sRGB color space (depicted below) since it’s the most common color space for monitors, but this applies to all physically-realizable color spaces.

Imagine that all the visible colors are contained within the thick black horseshoe in the above diagram. The pure red, green, and blue colors that are displayed by your monitor are depicted by the respectively-colored dots (and white is depicted by the gray dot in the center).

Every color that your monitor can display must be a mixture of these three primaries (red, green, and blue), and any mixture of two or more colors appears in between those colors in the diagram. Therefore, all colors that are mixtures of red, green, and blue must fall within the shaded triangle, the “sRGB gamut.” Importantly, this means that:

**Not all colors are mixtures of red, green, and blue!**

All of the colors inside the black horseshoe curve but outside the sRGB gamut cannot be displayed on an sRGB monitor. This includes pretty much all colors of laser light, the colors in a prism or a rainbow, and many highly-saturated blue-green colors (like the 2013 color of the year).

Note that because the sides of the horseshoe are curved, no matter what three colors you choose, the triangle that those colors form will never include the entire horseshoe (as long as you choose real colors, but we’ll come back to that later).

In order to understand why this is, let’s talk about the CIE color spaces, the most basic of which is the XYZ color space.

Basically, we can find a way to assign a set of three numbers to any color such that two colors appear the same if and only if they get assigned the same three numbers. The way that these numbers are assigned is called a color space.

The XYZ color space assigns these three numbers (X, Y, and Z, unsurprisingly) by weighting the spectrum of the color with three functions of the wavelength. These functions (x-bar, y-bar, and z-bar) are shown below.

So far this is a little bit abstract, so I’ll give an example. Here is the spectrum of “standard daylight,” more specifically the CIE Standard Illuminant D65:

(Note that the y-axes of these diagrams are in arbitrary units. Since we’re dealing with the color of light and not brightness, the scale doesn’t matter as long as we scale all components the same way.)

The name D65 comes from the fact that this spectrum is close to that of an ideal blackbody radiator at a temperature of 6500 kelvin. This is a little hotter than the surface of the Sun (5780 kelvin) due to atmospheric absorption and scattering.
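To get a feel for the 6500 kelvin figure, here is a small numerical check using Planck's law directly. This is only an illustration (with rounded physical constants), not the CIE's actual derivation of the D-series illuminants:

```python
import numpy as np

# Physical constants (SI units, rounded)
h = 6.626e-34  # Planck constant, J*s
c = 2.998e8    # speed of light, m/s
k = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance; the absolute scale is irrelevant here."""
    return (2.0 * h * c**2 / wavelength_m**5
            / np.expm1(h * c / (wavelength_m * k * temp_k)))

# Sample the (extended) visible range and find where a 6500 K blackbody peaks
wavelengths = np.arange(300e-9, 831e-9, 1e-9)
peak_nm = wavelengths[np.argmax(planck(wavelengths, 6500.0))] * 1e9
```

The peak lands near 446 nm, consistent with Wien's displacement law; the real D65 table deviates from a pure blackbody precisely because of the atmospheric effects mentioned above.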

We compute the X, Y, and Z values of this color by multiplying its spectrum with the three color matching functions (x-bar, y-bar, and z-bar) and taking the area under the resulting three curves:

Typically the XYZ values are scaled so that white has a Y of 1, giving us:

```
X(D65) = 0.9505
Y(D65) = 1.0000
Z(D65) = 1.0888
```
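That "multiply and take the area" recipe is short in code. In this sketch the Gaussian curves are rough stand-ins I made up for the real tabulated CIE functions (the true x-bar curve has a secondary blue lobe, only crudely approximated here), and a flat spectrum stands in for D65:

```python
import numpy as np

wl = np.arange(380.0, 781.0, 1.0)  # wavelengths in nm, 1 nm steps

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Rough Gaussian stand-ins for the tabulated CIE color matching functions
xbar = 1.06 * gauss(599.8, 38.0) + 0.36 * gauss(442.0, 16.0)
ybar = 1.01 * gauss(556.0, 46.0)
zbar = 1.78 * gauss(449.8, 20.0)

spectrum = np.ones_like(wl)  # flat spectrum as a stand-in for D65

# Weight the spectrum by each matching function and take the area under
# the resulting curve (a simple Riemann sum; the step size is 1 nm)
X = np.sum(spectrum * xbar)
Y = np.sum(spectrum * ybar)
Z = np.sum(spectrum * zbar)

# Scale so that this white has Y = 1, as with the values above
X, Y, Z = X / Y, 1.0, Z / Y
```

Swapping in the real CIE tables and the real D65 spectrum is the only change needed to reproduce the exact values quoted above.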

We often transform this to the xyY color space for convenience, where:

```
x = X / (X + Y + Z)
y = Y / (X + Y + Z)

x(D65) = 0.3127
y(D65) = 0.3290
```
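The transform is a direct transcription of these formulas. The third coordinate, Y, carries the brightness and passes through unchanged:

```python
def xyz_to_xyy(X, Y, Z):
    """Convert XYZ to xyY: chromaticity (x, y) plus luminance Y."""
    s = X + Y + Z
    return X / s, Y / s, Y  # x, y, and the untouched luminance

x, y, _ = xyz_to_xyy(0.9505, 1.0000, 1.0888)  # the D65 values from above
```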

The two values x and y depend only on the color of the light, and not on the brightness, and they fully describe the color. I said before that three numbers are necessary to describe the color of light, but that’s only true when the brightness is included in “color.” Without brightness (one number) you only need two. The XYZ color space was designed so that Y represents the brightness of a color, which is why it is included in the xyY color space.

We can compute the x and y values of different wavelengths of monochromatic light and plot them on a diagram:

That is where the horseshoe diagram comes from! The ticks mark the wavelengths of light along the edge. Note that the bottom edge has no ticks: colors like magenta can’t be made from a single wavelength of light (there can’t be a magenta laser).

Pretty much all other color spaces, sRGB included, are defined in terms of the CIE color spaces. Usually they pick a red, green, and blue primary and a white point (described in the XYZ or xyY color space), which is enough to completely specify a color space.
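To make this concrete, here is a sketch of how three primaries plus a white point pin down an entire color space: it derives the matrix taking linear sRGB values to XYZ from nothing but the published sRGB chromaticities. The chromaticity numbers are the standard sRGB ones; the derivation itself works for any choice of primaries:

```python
import numpy as np

# Chromaticities (x, y) of the sRGB primaries and its D65 white point
primaries = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]  # R, G, B
white = (0.3127, 0.3290)

def xy_to_xyz(x, y):
    """XYZ of the color with chromaticity (x, y) and luminance Y = 1."""
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

# Columns: the XYZ directions of the three primaries (scales still unknown)
P = np.column_stack([xy_to_xyz(x, y) for x, y in primaries])
W = xy_to_xyz(*white)

# Choose the scale of each primary so that R = G = B = 1 is the white point
S = np.linalg.solve(P, W)
M = P * S  # M @ [R, G, B] gives [X, Y, Z] for *linear* RGB values
```

The resulting M matches the familiar published sRGB-to-XYZ matrix (top-left entry ≈ 0.4124, and so on), which is exactly what "the primaries and white point completely specify the color space" means in practice.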

Note that there are plenty of values of x and y that are outside the horseshoe. These don’t represent real colors. However, these “imaginary” colors can sometimes be useful. For example, the ProPhoto RGB color space uses “imaginary” green and blue primaries. This way it can represent more colors than a color space that uses three real colors for primaries. The downside is that you now have to be careful about “imaginary” colors that could be present in your files. The reason that larger color spaces like ProPhoto RGB and Adobe RGB aren’t often used outside of professional environments is that it isn’t worth being able to record colors that you can’t display!

In a similar vein, we can imagine negative amounts of color. Mathematically, you can solve for three RGB values that will represent any color, but one or more of the RGB values will be negative when you try to represent a color outside the gamut of your color space. It’s perfectly valid to use a negative R, G, or B value to represent a color, but most files only hold positive values, and physical displays can only show positive values (since you can’t emit “negative light”).
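As a concrete illustration, the sketch below converts an out-of-gamut color to linear sRGB using the published XYZ-to-linear-sRGB matrix; the chromaticity used for 520 nm laser green is approximate:

```python
import numpy as np

# Published matrix taking XYZ to linear (pre-gamma) sRGB values
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xy_to_xyz(x, y):
    """XYZ of the color with chromaticity (x, y) and luminance Y = 1."""
    return np.array([x / y, 1.0, (1.0 - x - y) / y])

# Approximate chromaticity of 520 nm laser light: a point on the edge of
# the horseshoe, far outside the sRGB triangle
laser_green = xy_to_xyz(0.074, 0.833)
r, g, b = XYZ_TO_SRGB @ laser_green
# r and b come out negative: a mathematically valid description of the
# color, but not something a display can emit
```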

## Second Question

Similarly, if a camera captures all three (red, green, and blue light), why can it not capture the entire visible color space?

There are actually two different issues going on here. The first is related to the issue of limited gamuts above. For example, I have my camera set to record in the sRGB color space. The camera may be physically capable of detecting colors outside the sRGB gamut, but it isn’t able to record them!

Again, cameras typically limit themselves to the “small” sRGB color space because they will most likely be edited and viewed on sRGB displays, and recording colors that you can’t display is not worth the hassle for the average user.

The second issue is a little trickier, and deals with a phenomenon called metamerism.

This is the same phenomenon that causes some colors to look different under different lighting conditions like daylight, incandescent light, and fluorescent light. (For example, my camera bag usually looks black indoors, but has a slight brownish tint outdoors.)

This is caused by the fact that we reduced a continuous spectrum into only three numbers. Now it is still true that three numbers are sufficient to perceptually identify a color. However, getting those three numbers right is difficult. To see why this is, let’s look at an example. I’ll show the same D65 spectrum as before, but let’s also look at a metamer of it.

The two spectra look quite different, don’t they? Let’s repeat our steps from the first section to calculate the X, Y, and Z values of the metamer:

They happen to be exactly the same! This means that a light with the “metamer” spectrum will look identical to a light with a D65 spectrum. Since the spectrum of a light is continuous, there are an infinite number of metamers for every color.

Now let’s look at how a camera sees this pair of perceptually identical colors. Here are the same color matching functions from before, along with three new functions (Rcam, Gcam, and Bcam) that represent the sensitivity of an imaginary camera to different wavelengths of light.

To compute what raw RGB values the camera assigns to these two spectra, we use the same procedure as for calculating the XYZ values: multiply the spectrum with the sensitivity curves, and take the area under each curve. (Note that I also scaled the areas so the maximum value would be less than 255.)

The two RGB values are different! Even after transformation to sRGB values, Adobe RGB values, or even xyY values, the two will remain different. Therefore these two colors will be recorded and subsequently displayed differently even though they appeared identical.
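The whole failure mode can be demonstrated in a few lines. The Gaussian curves below are toy stand-ins for both the human color matching functions and the camera sensitivities (real curves are tabulated, not Gaussian); the metamer is built by adding a spectral component that the eye's curves cannot see:

```python
import numpy as np

wl = np.arange(380.0, 781.0, 5.0)  # wavelengths in nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Toy stand-ins: rows are the eye's three color matching functions...
human = np.stack([gauss(600, 40), gauss(555, 45), gauss(450, 20)])
# ...and the camera's R, G, B sensitivities: similar, but not an exact
# (or linearly combined) match, just like real camera filters
camera = np.stack([gauss(610, 35), gauss(540, 40), gauss(465, 25)])

base = np.ones_like(wl)  # a flat reference spectrum

# A spectral perturbation the eye cannot detect: the camera's red curve
# minus its projection onto the span of the human curves. By construction
# human @ delta is (numerically) zero, so adding it changes the physical
# spectrum without changing the XYZ values.
coef, *_ = np.linalg.lstsq(human.T, camera[0], rcond=None)
delta = camera[0] - human.T @ coef
metamer = base + 0.4 * delta / np.abs(delta).max()  # stays nonnegative

xyz_pair = human @ base, human @ metamer    # identical: true metamers
rgb_pair = camera @ base, camera @ metamer  # different: the camera can tell
```

The same construction with the real CIE tables is how metamer pairs like the one plotted above are generated.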

This wouldn’t be a problem if we could make filters that exactly mimicked the CIE color matching functions (or a linearly independent combination of them); and although in practice we can get close, it’s nearly impossible to match them exactly.

Furthermore, two people may have different color matching functions! Although the variation is not huge, it can be enough to make some colors appear different to different people. This means that even if we do everything right according to the CIE spec, the colors still won't look exactly right to some people.

In summary, color reproduction is simple in theory, but practical limitations mean that imperfect colors are the norm. However, “imperfect” is usually “good enough,” and you probably don’t have to worry about it.