Are dark frames overrated?

In almost every book and on many forums you are told to take calibration frames: bias, flats, darks and dark flats. There are different approaches, especially regarding the darks. On the other hand there are people who do not bother taking darks for their DSLR images, and the results are excellent. Others turn on the in-camera dark subtraction feature, which means that over the night the camera spends (some say wastes) time taking darks, using the telescope as a cooler rather than as an optical device. I have also read that darks must match the temperature of the lights to within a single degree Celsius, and that for this reason a cooled, temperature-controlled camera is the only way to go.

Don't blame the camera for all the noise

It seems to be a widely unknown fact that light itself carries noise. Light arrives as photons, small packets of energy that come in at random times, and the number of photons collected in a given interval follows a Poisson distribution. The simple rule is: the more light, the more noise. Wait! Why are the dark parts of an image so noisy if more light means more noise? The good news is that the noise level increases far more slowly than the intensity level: the noise is the square root of the number of photons. In the stretched image it is the relation between signal and noise that counts, and the math is simple: if the noise is the square root of the signal, the signal-to-noise ratio must also be the square root of the signal. Dark areas therefore have a low signal-to-noise ratio (SNR), bright areas a high one. The lesson to learn is this: even a perfect camera, completely free of noise of its own, will take noisy images because of light's quantum nature.
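To put numbers on this, here is a small Python sketch; the photon counts are made-up examples, not measurements from any real camera.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate the photon shot noise for a faint and a bright pixel.
for mean_photons in (100, 10_000):
    samples = rng.poisson(mean_photons, size=100_000)  # many exposures of one pixel
    noise = samples.std()              # shot noise, close to sqrt(mean_photons)
    snr = mean_photons / noise         # signal-to-noise ratio, also ~sqrt(mean_photons)
    print(f"{mean_photons:6d} photons: noise ~ {noise:6.1f}, SNR ~ {snr:5.1f}")

# Roughly:   100 photons: noise ~  10, SNR ~  10
#          10000 photons: noise ~ 100, SNR ~ 100
```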
If you use the same camera and the same telescope under a light-polluted urban sky and under a dark rural sky, shooting the same target with the same integration time, the images will have very different noise levels. The reason is that while the mean of the sky background can be subtracted from the image, the noise that comes with the sky background cannot. The only way to reduce the background's noise is to increase the overall integration time. Because the background noise grows with the square root of the background level, while the signal-to-noise ratio grows with the square root of the integration time, moving from one site to another with twice the light pollution means you need twice the overall integration time to reach the same signal-to-noise ratio.
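A minimal sketch of that relation, with made-up fluxes (electrons per second) standing in for the target and the sky background:

```python
import math

def snr(object_flux, sky_flux, t):
    """Background-limited SNR: the signal grows with t, the shot noise with sqrt(t)."""
    return (object_flux * t) / math.sqrt((object_flux + sky_flux) * t)

object_flux = 0.5    # electrons per second from the target (made up)
dark_site   = 5.0    # sky background at the darker site (made up)
bright_site = 10.0   # twice the light pollution

t = 4 * 3600.0                                # four hours at the darker site
target_snr = snr(object_flux, dark_site, t)

t2 = t
while snr(object_flux, bright_site, t2) < target_snr:
    t2 += 60.0                                # add a minute until we catch up

print(f"darker site  : SNR {target_snr:.1f} after {t/3600:.1f} h")
print(f"brighter site: same SNR after about {t2/3600:.1f} h")  # roughly twice as long
```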

What to expect from darks?

Darks cannot remove the noise that comes in with the light itself, nor can any of the other calibration frames. Darks cannot remove the read noise either, nor can any other calibration frame. There seems to be a belief that darks remove the noise caused by dark current. They don't! So what are darks good for, if not for removing noise? They do remove noise, but not the noise caused by the dark current itself: they remove the fixed pattern noise that is caused by the pixels' different sensitivity to heat. At any given sensor temperature there is some dark current; increasing the temperature by 4 to 6 degrees Celsius doubles its level, which comes with about 40% more noise (the noise is the square root of the level, and the square root of 2 is about 1.4). Looking closely at a dark frame reveals that some pixels have less and others have more dark current at the same temperature. (Pixel is not quite the correct term here. Pixel is short for picture element, a part of the image; on the sensor there are photosites, each similar to a phototransistor with a capacitor. It is common, though, to use the word pixel for the photosites as well.) This is one sort of fixed pattern noise. Fixed pattern noise in general is caused by the different offsets and different gain factors of the individual pixels; a different sensitivity to temperature adds to it. It is quite likely that this sort of noise is the smallest contributor to the noise in the image.
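As an illustration of that doubling rule, here is a small Python sketch; the base dark current and the doubling interval are assumed example values, not measured figures for these cameras.

```python
import math

def dark_current(temp_c, base_rate=0.1, base_temp=12.0, doubling_step=5.0):
    """Dark current in electrons per second, doubling every few degrees Celsius."""
    return base_rate * 2.0 ** ((temp_c - base_temp) / doubling_step)

exposure = 240.0  # seconds, matching the 4 minute subs used later in the article
for temp in (12.0, 17.0):
    level = dark_current(temp) * exposure  # accumulated dark current in electrons
    noise = math.sqrt(level)               # its shot noise
    print(f"{temp:4.1f} C: level {level:5.1f} e-, noise {noise:4.2f} e-")
# Five degrees more double the level, but the noise only rises by sqrt(2), about 40%.
```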
There is one more benefit to darks. The temperature distribution over the sensor is not homogeneous; there might be one corner that happens to run a little warmer. In that case the dark current level is higher there, adding a bright border to the image. There may also be amp glow that appears along one side of the image.

So, should I use darks then?

After what was said before you won't be surprised that the answer is: it depends. If you have a dark sky, if you image at low ambient temperatures, if your sensor has a low read noise, and if you are trying to squeeze out the faintest structures, there might be a benefit in using darks. There is also a discussion about whether darks help to keep down the banding issue that is common to many Canon DSLRs; I don't have an answer to that. In general you have to do a sort of noise management. If you happen to know the sky background intensity you can calculate the amount of noise created by it. If you also know the sensor's read noise and the amount of dark current, it may be worth stacking a lot of single darks into a master dark. By doing so the random noise of the dark current averages out in the master dark; what remains is the fixed pattern caused by the non-uniform sensitivity of the pixels to temperature. If that pattern is in the same order of magnitude as all the other noise sources, use darks!
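Here is a rough sketch of such a noise budget in Python; all the rates and the fixed-pattern fraction are made-up placeholders that you would replace with your own measurements.

```python
import math

exposure     = 240.0   # seconds per sub
sky_rate     = 2.0     # sky background in electrons per second per pixel (assumed)
dark_rate    = 0.05    # mean dark current in electrons per second per pixel (assumed)
read_noise   = 3.0     # read noise in electrons per sub (assumed)
fpn_fraction = 0.10    # pixel-to-pixel spread of the dark current (assumed)

sky_noise     = math.sqrt(sky_rate * exposure)       # shot noise of the sky background
dark_noise    = math.sqrt(dark_rate * exposure)      # shot noise of the dark current
fixed_pattern = fpn_fraction * dark_rate * exposure  # what a master dark can take out

print(f"sky background noise     : {sky_noise:5.2f} e-")
print(f"read noise               : {read_noise:5.2f} e-")
print(f"dark current noise       : {dark_noise:5.2f} e-")
print(f"fixed pattern (the darks): {fixed_pattern:5.2f} e-")
# Darks only address the last line. If it is tiny compared to the other
# contributions, a master dark will not make a visible difference.
```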

A real life test

My sky is Bortle 4, according to an estimate from a light pollution map. I did not measure it, so this may be wrong; I can see the Milky Way with the naked eye, but only near the zenith. For the test I used a 65 mm f/6.5 flat-field quadruplet refractor on a Skywatcher EQ6-R. The mount was guided by a Lacerta MGEN-II through a ZWO off-axis guider. I used my (at the time of writing) 7-year-old Canon EOS 600D (T3i in the US) and my brand new Canon EOS 800D (T7i in the US). Of course the conditions vary during the night, which is why I swapped cameras every 16 minutes. The ambient temperature was 2 degrees Celsius, measured by a fridge thermometer at the tripod's leg. Both cameras reported a sensor temperature of 12 degrees Celsius, give or take a few degrees. The object imaged was M45, the Pleiades or Seven Sisters. (More about the comparison: Update from EOS600D to EOS800D.)
Forgetting the comparison for a while, the question was whether using darks would improve the image, and here we have a test with two cameras: an older one with a high downstream noise level and a new one that is close to ISO invariant. The two images below are animated GIFs that toggle between a version calibrated with darks and one without. Otherwise the images have been calibrated identically, using the same master bias and the same flat. Processing was minimal: no cosmetic correction, no denoising, only color calibration and a simple histogram stretch.
EOS600D (T3i) 100% crop (Alcyone)
If you think it does not toggle, take your time. It toggles about every second, and the difference may take a while to spot. Here is the same thing for the other camera:
EOS800D (T7i) 100% crop (Alcyone)
Do you see it? One might conclude that for a bright object like this, under my sky conditions, there is no point in using darks. Wait! What about the non-uniform temperature distribution and the banding? I do not see much banding in my cameras, neither in the old one nor in the new one. The reason may well be that the background noise is way too high for the stretch to bring the banding out. If you have better conditions, or a copy of the camera that performs worse, you may see some banding. Here is the full image scaled down to 1/5th of the original resolution, again toggling every second between the versions with and without darks.
EOS600D (T3i) slightly cropped
There is a slight difference in the background level, but it is difficult to tell whether it is really as strong as it seems or just happens to be enhanced by the stretch. The question is whether one of the images shows an uneven background that is not part of the sky but caused by the sensor. Banding is one of these problems; I cannot see anything like it. Here is the same thing for the other camera in the test:
EOS800D (T7i) slightly cropped
Here you can see the right border change in intensity. The effect is present along all borders of the sensor, but the other three happen to be cropped away in this example. In an extreme stretch of the master dark (midtone value 0.000157!) the effect becomes visible. (This is the first static image in the article.)
stretched master dark, 17 x 4 minutes
Cropping just a bit of the border solves the problem. Now the banding is visible as well. But remember, this is an extreme stretch. The Pleiades image above was stretched with a midtone value of 0.0023, which is a far gentler stretch. (The midtone value defines which intensity, counted from the black point, is mapped to medium gray, i.e. 50% of white. A midtone value of 0.5 is no stretch at all; higher values darken the image.) Even the pixels that appear white here are not much brighter than the others when the stretch is milder. Here is what it looks like when the same midtone value as in the Pleiades image is applied:

master dark, 17 x 4 minutes, stretched less
The few bright pixels will probably not ruin the image. If you are hunting supernovae they might be a problem, but if you really do that, you probably dither, and the brighter pixels will be sorted out by the pixel rejection algorithm while stacking the calibrated lights.
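For reference, the midtone stretch described above corresponds to the usual midtone transfer function (the form used, for example, by PixInsight); I am assuming the midtone values quoted in this article refer to that function.

```python
def mtf(x, m):
    """Midtone transfer function: maps intensity m to 0.5.
    m = 0.5 leaves the image unchanged; smaller values stretch harder."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# A faint pixel at 0.1% of full scale under the two stretches mentioned above:
x = 0.001
print(mtf(x, 0.000157))  # extreme stretch of the master dark -> about 0.86
print(mtf(x, 0.0023))    # stretch used for the Pleiades image -> about 0.30
```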

Conclusion

There are two reasons why you may want to use darks: an overall intensity correction for hot or cold borders on the one hand, and a correction of the non-uniform pixel sensitivity to heat, which reduces noise at a small scale, on the other. In both cases the answer is found by comparing an image with and without darks, once at large scale and once at small scale. As far as I am concerned: I have spent weeks shooting darks in the fridge and in the freezer, building up a dark library for different temperatures for my 600D (T3i). After this test I will not do the same for my new 800D (T7i), at least not right now. I think I will repeat the experiment in summer, when the sensor temperature reaches 30 degrees Celsius, and add my findings to this article then.
Thanks for reading!
