You will find on this page:

  1. Atik 11000-CM colour CCD: a general presentation.
  2. Colour CCD acquisition and processing technique: how to process raw CCD colour images.
  3. CCD in detail: a detailed look at CCD chips and their technology.

Atik 11000-CM Colour CCD

Being accustomed to the ease of taking pictures with a DSLR camera, I was not quite ready to start messing about with filter wheels and to need several nights to acquire a single object… So, the obvious choice for me was a colour 24 mm x 36 mm full-frame sensor. I went on the forums before taking the plunge and almost everybody was crying wolf, telling me I was making a big mistake because colour sensors are not nearly as sensitive as monochrome ones. While that may be true, when you have a 14″ aperture, you hardly notice it. I strongly believe that colour cameras are going to take the lead in CCD imaging as sensors become more sensitive, making monochrome sensors obsolete, especially as these are more expensive too (filters and a filter wheel on top of the cost of the CCD). Being able to acquire luminance and colour channels in one go really is fantastic and makes everything easier. And astro-imaging should be a pleasure, not a hassle. At the moment, people are so used to monochrome cameras that it will take time to change minds. The one drawback I find with a colour camera is that, should you decide to bin the pixels, you lose colour information.

Atik offers, with the Atik 11000-CM, a full-frame sensor at a very reasonable price, the drawback being of course that you need an additional camera for guiding with an off-axis guider (unlike SBIG). For guiding, I got an Atik 16-IC. My off-axis guider is an old Celestron one I used with the Canon on the C11 telescope. It works reasonably well. A word of warning though: the field delivered by the C14 EdgeHD is absolutely flat, as Celestron claim, but the sensor has to be about 140 mm back from the black baffle nut at the rear of the C14. I had initially tried to use my Orion 120 mm aperture, 1000 mm focal length refractor as a guide scope for the Celestron, mounting it in parallel using Losmandy's "DSBS" side-by-side plate.
Unfortunately, I could not guide properly for more than 3 minutes, so I went back to the old off-axis guider solution. Comments on the pictures delivered by this combination are welcome.

Image processing with colour CCD cameras is of course different from that with monochrome cameras. I personally use MaximDL to acquire and process images. I am still experimenting, so any advice about colour image processing is welcome. With MaximDL software, acquisition really is a piece of cake, since it manages the main imaging sensor as well as the guiding sensor. I usually take 10-minute exposures at -20 deg C sensor temperature using Maxim's "Autosave" function. Focusing in my case is done manually, by looking at a faint star's FWHM (full width at half maximum) and adjusting the focus knob to make it as small as possible. Celestron telescope tubes are made of aluminium, which has a strong expansion coefficient. So, if the temperature changes by a few degrees while imaging, the focus will have moved and has to be checked every so often… If I know from experience that the temperature won't drift during the exposures, then I'll just go to bed, setting my alarm clock to get up at the end of a sequence. But I think that, even more important than perfect focus, perfect guiding is required for deep-sky imaging, because focus is usually better than turbulence or guiding in terms of arc seconds of resolution on the sky. With a focal length of about 4 metres, which is that of the C14, guiding is not always easy to achieve, especially if "seeing" is bad.

Colour CCD acquisition and processing technique

Single exposure length depends on the object being photographed. For example, a planetary nebula will require a shorter, 1-minute exposure so that it does not get "burnt", while a faint galaxy with a lower surface brightness might require 10 minutes, maybe even longer, should the background allow it. It is also necessary to acquire darks, offsets and flats in order to calibrate the individual exposures. The sensor temperature has to be the same as that of the actual deep-sky exposures. I usually set the sensor temperature at -20 deg Celsius, as noise at that temperature is quite low. Darks and offsets are made with the cap on, so that the sensor is in the dark. Darks have an exposure length equal to that of the exposures to be calibrated, and offsets have a very short exposure length, such as 0.002 seconds for example. Making flats with a large telescope such as the C14 is not easy. Usually, a "light box" made of LEDs on a white background, positioned at the business end of the telescope, is used to acquire flats (see image below right). But making such a box for a C14 is not very practical, so I use the sky at sunset, before the stars come out, to make flats. It gives me a fairly uniform, not too bright light source. If the source is too bright, the sensor pixels tend to saturate. I then have to adjust the flat's exposure length in order to get around 6000 ADU (Analog to Digital Units); I usually use a 0.01-second exposure. It is extremely important to cover the telescope as soon as the "start" button has been pressed, because otherwise the CCD matrix carries on being exposed while the flat is being downloaded to the computer. Fortunately, downloading takes around 30 seconds with USB 2.0, so there is plenty of time to put the cap on the telescope. Also, keep in mind that the sky is blue, which means the flats acquired have to be converted to colour first, then to black & white, using Maxim's Convert to Mono function.
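Adjusting the flat exposure to land near the 6000 ADU target is just a linearity calculation. Here is a small sketch (the function name and values are mine, and it assumes the sensor responds linearly well below saturation):

```python
# Estimate the flat-field exposure needed to reach a target mean ADU,
# assuming a linear sensor response (true well below saturation).
def next_flat_exposure(current_exposure_s, measured_adu, target_adu=6000):
    """Scale the exposure so the next flat lands near target_adu."""
    return current_exposure_s * target_adu / measured_adu

# A 0.01 s test flat that measures 12000 ADU is twice too bright,
# so the next attempt should use half the exposure:
print(next_flat_exposure(0.01, 12000))
```

In practice the sunset sky also dims quickly, so the measured ADU level drifts between flats and a couple of iterations may be needed.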
You need to make a number of darks, offsets and flats equal to the number of exposures. Once these are made, the calibration wizard in MaximDL lets you select the relevant files. The actual deep-sky exposures are calibrated using the "Calibrate All" function, then they are converted to colour (parameters: red = 80%, green = 100%, blue = 100%). It is not possible to align the deep-sky exposures before converting them to colour, because that would mix the Bayer matrix's blue, green and red pixels with one another, and the colour information would be lost.
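The calibration step itself boils down to subtracting the dark (which, being of equal length, already contains the offset level) and dividing by a normalised, offset-subtracted flat. A minimal sketch with made-up uniform frames, following one common calibration scheme rather than necessarily what MaximDL does internally:

```python
import numpy as np

def calibrate(light, dark, flat, offset):
    """Dark-subtract the light frame, then flat-field it."""
    flat_corrected = flat - offset                       # remove bias from the flat
    flat_norm = flat_corrected / flat_corrected.mean()   # unity-mean flat
    return (light - dark) / flat_norm

# Hypothetical uniform frames (real ones vary pixel to pixel):
light  = np.full((4, 4), 1000.0)   # deep-sky exposure
dark   = np.full((4, 4), 100.0)    # same exposure length, cap on
flat   = np.full((4, 4), 6000.0)   # twilight flat, ~6000 ADU
offset = np.full((4, 4), 200.0)    # bias level

print(calibrate(light, dark, flat, offset)[0, 0])  # 900.0
```

With uniform frames the flat divides out to 1, so the result is simply the dark-subtracted signal.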


Raw exposure left, processed right.

Exposures are then aligned and stacked. For stacking, "median" works well, especially if there is a satellite track on one of the pictures, or if there are some "hot" pixels left on some of the exposures despite calibration. It has to be noted that cosmic rays leave saturated pixels where they hit the CCD chip, and that is not corrected by calibration. Also, while it is very tempting to use a "dark" library and dispense with making darks for every single shot, the CCD chip changes with time, accumulating more and more defective pixels. "Average" stacking mode is great but requires defect-free exposures. Then, in order to correct "burnt" galaxy centres, one can use the DDP filter, or a logarithmic "stretch" correction. Finally, light curves can be adjusted in Photoshop, The Gimp, or Paint Shop Pro. Warning! A single "raw" exposure weighs 20.5 megabytes, and a converted colour shot 62.5 megabytes, so a computer with loads of memory, if not an especially fast one, is required.
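The difference between "median" and "average" stacking is easy to see on toy numbers: a median rejects a single outlier (satellite trail, cosmic-ray hit) outright, while an average smears it into the result. A small sketch with made-up pixel values:

```python
import numpy as np

# Three exposures of the same three pixels; the middle pixel of the
# second exposure is saturated by a cosmic-ray hit (16-bit maximum).
frames = np.array([
    [100.0,   100.0, 100.0],
    [100.0, 65535.0, 100.0],
    [100.0,   100.0, 100.0],
])

median_stack = np.median(frames, axis=0)  # outlier rejected entirely
mean_stack = np.mean(frames, axis=0)      # outlier pollutes the mean (~21912)

print(median_stack)
print(mean_stack)
```

This is why median stacking is forgiving of the occasional defective frame, whereas average stacking needs every exposure to be clean.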

MaximDL during acquisition

CCD in detail

The aim of this article is to explain what a CCD is, how it works, and the different types available.

So, what does the term CCD stand for? It stands for Charge Coupled Device. Why? Because a CCD works by moving electrical charges inside itself to reconstruct an image projected on its surface. We often refer to a CCD as a matrix, because CCDs are made of an array of tiny pixels. Each pixel converts the light photons it receives into an electrical charge which is stored locally. A pixel acts as a type of light bucket, so to speak. The picture to the right illustrates what an individual pixel looks like at chip level. The advantage of CCD technology is that it allows, in theory, the recording of very low levels of light, since all it takes is to leave the light bucket (the pixel) exposed longer. However, we'll see that there are limiting factors and that things are not as straightforward (are they ever?).

Let's imagine a CCD matrix has been exposed for some time to a galaxy's image at a telescope's prime focus. The matrix pixels contain an analogue amount of charge (voltage) representing the galaxy image in electrical form. In order to restore that image on a computer, each pixel has to be digitized, meaning it has to be converted (read) to a numerical value. Pixel numerical values are measured in ADU, which stands for "Analog to Digital Unit", conveniently enough. Now, the problem resides in reading the charge of several million pixels, as is often the case. To do so, an electronic component called an Analog to Digital Converter (ADC) is used. The charge of a single pixel at a time has to be "presented" to the ADC, which converts its analogue content to a numerical value, via a "serial register": an entire CCD row is transferred to the serial register, and each pixel is then transferred one by one to the ADC in a serial manner. This is called "progressive scanning". The range of the values read depends on the "resolution" of the ADC, which, being a digital scale, is measured in bits.
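The row-by-row readout just described can be sketched as a toy model in Python. The full-well capacity, charge values and simple linear scaling here are my own assumptions (a real camera applies a gain in electrons per ADU), but the structure mirrors the process: rows shift into the serial register, and pixels are digitized one at a time:

```python
import numpy as np

FULL_WELL_ELECTRONS = 60000.0   # assumed pixel capacity (hypothetical)
ADC_MAX = 2**16 - 1             # 65535 ADU for a 16-bit converter

def read_ccd(charge_matrix):
    """Simulate progressive-scan readout of a grid of pixel charges."""
    image_adu = []
    for row in charge_matrix:            # one row shifts into the serial register
        serial_register = list(row)
        adu_row = []
        for charge in serial_register:   # each pixel is presented serially to the ADC
            clipped = min(charge, FULL_WELL_ELECTRONS)   # a full bucket saturates
            adu_row.append(int(clipped / FULL_WELL_ELECTRONS * ADC_MAX))
        image_adu.append(adu_row)
    return np.array(image_adu)

# 2x2 grid of charges in electrons; the last pixel has overflowed its well:
charges = np.array([[0.0, 30000.0], [60000.0, 90000.0]])
print(read_ccd(charges))
```

The half-full pixel reads about mid-scale, and anything at or above the full-well level is pinned at 65535 ADU, which is exactly the saturated look of a "burnt" star core.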
Going into digital electronics is out of scope for this article, but for reference, an 8-bit ADC has 256 (2 to the power of 8) discrete steps, while a 16-bit ADC has 65536 (2 to the power of 16) discrete steps. As a concrete example, an 8-bit CCD only has 256 shades of grey between pure black and pure white (values 0 to 255), while a 16-bit ADC has 65536 shades of grey (values 0 to 65535). The CCD reading process described above is that of a "full-frame" CCD and is illustrated in the picture on the left. Now, an obvious drawback of the progressive scanning technique is "charge smearing", caused by light falling on the sensor while the accumulated charge is still being transferred to the readout register. In order to avoid this, a mechanical shutter has to be implemented for this particular CCD type, which is why other types of CCD have been designed. So, the 3 main types of CCD sensors in existence today are:

  1. Full-frame CCDs
  2. Frame-transfer CCDs
  3. Interline CCDs

The 3 types of CCD all work on the same principle of converting photons to electrons, but the pixel arrangement is different, as illustrated below.

Full-frame CCD

We have already described the full-frame CCD to illustrate the reading process. The advantage of this type of CCD is that it is typically the most sensitive to light; the disadvantages are having to implement a mechanical shutter system, which may have lifetime issues, and the fact that these CCDs cannot start recording a new image while the previous one is being read.

Frame-transfer CCD

In an attempt to address the drawbacks of the full-frame CCD chip, the frame-transfer CCD uses a two-part sensor in which one half of the parallel array is used as a storage region and is protected from light by a light-tight mask. The unmasked array works exactly as a full-frame sensor would, the difference being that it uses a different reading technique. To read, the exposed area is transferred to the masked one, row by row, which is much faster than reading out a full-frame sensor, since each row is not yet being serially shifted to the ADC. Once the exposed area has been transferred to the masked one, a new exposure can start while the image stored in the masked area is read out to the ADC at leisure. The advantage of this architecture is that it has the same sensitivity as a full-frame sensor, while being less sensitive to smearing because the transfer is faster. However, if the light source is strong enough, smearing will still happen.

Interline transfer CCD

Instead of using an entire area dedicated to charge transfer, as in the case of a frame-transfer CCD, an interline transfer CCD uses transfer channels adjacent to the pixel columns. This architecture allows the transfer of charge from the sensitive area to the masked area to be so fast that smearing is non-existent. The disadvantage of this configuration is that the sensitive area is reduced, but that can be compensated for by the use of microlenses (see left).

CCD parameters

As can be seen from the above, each CCD type has its own advantages and disadvantages. Apart from these types, there are parameters which are common to all CCDs and define their characteristics. Here is a non-exhaustive list and what they mean:

Quantum efficiency:

Quantum efficiency is measured in % and indicates how good a sensor is at converting light into an electrical signal. For example, a camera with a quantum efficiency of 1 (100%) produces an electron for each photon it receives. Unfortunately, most cameras have an efficiency of around 0.5, or 50%. Also, a CCD's efficiency varies with wavelength: for example, a CCD tends not to have the same efficiency in the blue, green and red regions of the visible spectrum.
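In other words, quantum efficiency is just the photon-to-electron conversion ratio. A tiny illustration (the photon counts are made-up numbers, and real QE varies with wavelength as noted above):

```python
# Expected number of photoelectrons collected for a given photon count
# and quantum efficiency (QE), assuming QE is constant at this wavelength.
def electrons_collected(photons, quantum_efficiency):
    return photons * quantum_efficiency

print(electrons_collected(10000, 0.5))   # typical ~50% QE: half the photons counted
print(electrons_collected(10000, 1.0))   # a perfect (100% QE) sensor
```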

Dark current:

Dark current describes thermally generated electrons, which build up a charge in the pixels of the CCD chip. Because they are thermally generated, they are a noise that adds itself to the signal from light conversion. Dark current decreases with temperature, so a good way to reduce it is to cool the CCD chip. The graph below illustrates dark current versus temperature for the ST10-XME camera.
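A commonly quoted rule of thumb (an assumption here, not a figure taken from the ST10-XME data sheet) is that dark current roughly halves for every ~6 deg C of cooling. A quick sketch of that exponential behaviour:

```python
# Scale a reference dark-current rate to another temperature, assuming
# it halves every `halving_c` degrees of cooling (rule-of-thumb model).
def dark_current(rate_at_ref, temp_c, ref_temp_c=25.0, halving_c=6.0):
    """Dark current (e-/pixel/s) at temp_c, scaled from the rate at ref_temp_c."""
    return rate_at_ref * 2 ** ((temp_c - ref_temp_c) / halving_c)

print(dark_current(1.0, 25.0))                 # unchanged at the reference temperature
print(round(1.0 / dark_current(1.0, -20.0)))   # cooling to -20 deg C: ~180x reduction
```

This is why running the sensor at -20 deg C, as mentioned earlier, makes the thermal noise almost negligible compared with room temperature.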

Read noise:

Read noise is electronic noise generated by the electronics involved in reading the CCD, such as the amplifiers and ADCs.

Color vs B&W CCD cameras

Search the internet for the debate on whether black & white or colour cameras are best, and you will find people tearing each other to pieces! Black & white camera supporters will tell you that colour cameras are useless and that the images produced are dim. Colour camera supporters will say that colour cameras do a perfectly good job for them… Then again, one has to remember that CCD astronomy began with monochrome cameras, and if someone wants to take a colour picture with one, the LRGB (Luminance, Red, Green, Blue) technique has to be implemented through the use of filters. Colour cameras are very recent on the market, and in astronomy, as in religion, new ideas take a long time to take root, and some people just do not like change!

Let's take as an example the Atik 11000-M versus the Atik 11000-CM. Both use the exact same Kodak interline transfer KAI-11002 CCD chip (ABA monochrome, CBA colour). The only difference between the two is that the CBA colour version has a red, green and blue Bayer matrix (see picture on the right), meaning that the microlenses are coloured. Apart from that, the chips are identical, so noise will be the same. The only thing that can change is the quantum efficiency, since the colour filters restrict the photons reaching the CCD matrix. Let's look at the quantum efficiency curves given by Kodak below (the black line is the efficiency of the monochrome sensor; the coloured lines show the efficiency of the colour chip for the blue, green and red channels):

Looking at that graph, the conclusion is pretty clear: it is true that the colour chip has a slightly lower efficiency in the blue and green channels than the monochrome version. But it is "slightly" less sensitive, not "a lot" less sensitive as the monochrome supporters claim. The only question that remains is whether the loss of sensitivity is worth the time gain and the ease of operation of not having to use filters…

To conclude this article, below is a drawing of the Atik color matrix architecture.
