How does a colour CCD camera interpolate Pixels?

The other day, some fundamentalist from the “Filter Wheel Clan” left a comment on my article “Mono vs Colour CCD Camera”. This person claimed that colour cameras are less sensitive than equivalent monochrome ones by as much as a third! Because I once compared the performance of the Atik 11000CM camera with that of its monochrome brother, the Atik 11000M, on the same night, on the same scope and on the same galaxy, I know for a fact that he’s wrong. His logic was that both cameras have the same number of pixels of the same size, but a colour camera pixel only absorbs a third of the light falling onto it because of the Bayer matrix (a green, red or blue filter in front of each pixel). Well, I have to admit that it’s pretty logical. Take white light, for instance, and assume it’s made of one third green, one third red and one third blue light; each green, red and blue pixel only absorbs its corresponding wavelength, which is a third of the total, whereas a monochrome camera’s pixels, having no filters, absorb the whole amount of white light. Let’s look at a Bayer matrix to illustrate this:
[Image: a Bayer matrix, showing the green, red and blue filters laid over the pixel grid]
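To put some rough numbers on that reasoning, here is a minimal sketch in Python. The photon counts and the ideal, loss-free filters are illustrative assumptions, not measurements from either Atik camera: it just builds an RGGB Bayer map and compares what a filterless mono pixel would record against what each filtered photosite records.

```python
# A minimal sketch of the "one third of the light" argument above.
# Assumption: white light delivers 300 photons to every photosite,
# split evenly as 100 red + 100 green + 100 blue, through ideal filters.
import numpy as np

def bayer_pattern(rows, cols):
    """Return an RGGB Bayer colour-filter map: 'R', 'G' or 'B' per photosite."""
    cfa = np.empty((rows, cols), dtype='<U1')
    cfa[0::2, 0::2] = 'R'   # even row, even column
    cfa[0::2, 1::2] = 'G'   # even row, odd column
    cfa[1::2, 0::2] = 'G'   # odd row, even column
    cfa[1::2, 1::2] = 'B'   # odd row, odd column
    return cfa

# Incoming white light per photosite, split by colour (made-up numbers).
incident = {'R': 100, 'G': 100, 'B': 100}   # photons

cfa = bayer_pattern(4, 4)

# A mono sensor (no filters) would record all three bands.
mono_signal = sum(incident.values())              # 300 photons per pixel

# Each colour photosite only records its own band.
colour_signal = np.vectorize(incident.get)(cfa)   # 100 photons per photosite

print(cfa)
print("mono pixel records  :", mono_signal)
print("colour pixel records:", colour_signal.mean(), "(about one third)")
```

The point of the sketch is only the ratio: with ideal filters, each colour photosite sees roughly a third of the white-light signal a bare mono pixel would see.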

Now, what happens if we are photographing a red H-alpha emission nebula, whose wavelength falls spot on in the passband of the red pixels? 80% of the light will be recorded by the red pixels of the colour CCD, the 20% loss being due to the transmission losses of the Bayer matrix’s red filters, but nothing will be registered by the surrounding blue and green pixels. So, in essence, we have 80% of the sensitivity of a monochrome camera, but the resolution has been divided by 4! So how does a colour camera compensate for these shortfalls? As mentioned at the beginning of this article, it has been compared to a monochrome camera and it obviously does… Well, the colour-conversion software uses a pretty smart algorithm. To illustrate this, let’s look at a raw one-shot colour camera picture with some hot pixels:

[Image: raw, undebayered frame from a one-shot colour camera, showing a few hot pixels]

This example is very useful because it simulates a bunch of photons hitting a single pixel of a Bayer matrix group (made of two green, one red and one blue pixel), and it lets us see what the processing does to reconstitute a colour image. On this picture, you can easily see the matrix, with the blue and red pixels being dark while the two green pixels on the diagonal are lighter due to light pollution. So now let’s look at how the processed colour image interprets the hot red pixel on the right, by the star:
[Image: the same frame after colour processing (debayering)]

Hey, what happened? Well, the adjacent blue and green pixels have been “filled” with red colour! So we have indeed lost contrast… What happens is that the software looks at the adjacent pixels, and if they are dark, it assumes the light is monochromatic and fills those pixels with higher levels of red. Also, the hot red pixel is now white! This is called “synthetic luminance”: the software interpolates light levels by averaging pixels and blocks of pixels. What’s also interesting to note is that the hot pixels on the left, not being as intense, have kept their original hue, and the adjacent pixels have only been slightly tinted… So the conclusion is that we have not completely lost the original resolution, but the contrast of the image has been degraded compared to a monochrome one… As for sensitivity, because the software “adds” adjacent pixels (red, blue and green) to determine the final pixel level, this goes some way toward restoring sensitivity compared to a bare mono sensor, but at the cost of lower resolution.
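To make that interpolation idea concrete, here is a toy bilinear demosaic in Python. It is not the actual algorithm used by any particular capture or debayering software, just the classic neighbourhood-averaging scheme, and the 6x6 frame, the background level of 10 and the hot value of 1000 are made-up numbers. It shows the same two effects described above: the lone bright red photosite bleeds into its neighbours (lost contrast), and the per-pixel R+G+B sum recovers much of the signal at the expense of resolution.

```python
# A toy bilinear demosaic: spread a lone bright red photosite into its
# neighbours, as in the hot-pixel example above. Illustrative only.
import numpy as np

def bayer_pattern(rows, cols):
    """RGGB colour-filter map: 'R', 'G' or 'B' for each photosite."""
    cfa = np.empty((rows, cols), dtype='<U1')
    cfa[0::2, 0::2] = 'R'
    cfa[0::2, 1::2] = 'G'
    cfa[1::2, 0::2] = 'G'
    cfa[1::2, 1::2] = 'B'
    return cfa

def bilinear_demosaic(raw):
    """For each output pixel and each channel, average the raw photosites of
    that channel inside a 3x3 neighbourhood (classic bilinear interpolation)."""
    rows, cols = raw.shape
    cfa = bayer_pattern(rows, cols)
    rgb = np.zeros((rows, cols, 3))
    for y in range(rows):
        for x in range(cols):
            ys = slice(max(y - 1, 0), min(y + 2, rows))
            xs = slice(max(x - 1, 0), min(x + 2, cols))
            for c, name in enumerate('RGB'):
                mask = cfa[ys, xs] == name
                rgb[y, x, c] = raw[ys, xs][mask].mean()
    return rgb

# A dark 6x6 raw frame with one "hot" red photosite at (2, 2).
raw = np.full((6, 6), 10.0)          # faint sky background
raw[2, 2] = 1000.0                   # lone bright red-filtered photosite

rgb = bilinear_demosaic(raw)
print("red channel around the hot pixel:\n", rgb[1:4, 1:4, 0].round(1))
print("interpolated R+G+B at (2, 2):", rgb[2, 2].sum().round(1))
```

Real debayering algorithms are smarter about edges and colour gradients than this, but the basic trade-off is the same: the missing colours at each photosite are estimated from the neighbours, which smears fine detail while pooling the signal.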
