The following is a guest-post by Stephan Schuh.

The secret of the two missing colours

Did you know that two thirds of your digital image information is not recorded but calculated by algorithms? And this happens right at the beginning, during the first step of image processing.

Classic 3-chip camera design with a prism to split the image into the three colour primaries.

Debayering is one of the first steps in the digital processing of a raw image. Back in the old days, most (broadcast) video cameras with CCD image sensors had three sensors, one for each colour: red, green, blue (R, G, B). A prism inside the camera split the image into these colour components, so all three sensors were exposed to the same image at the same time. For every pixel, there was a precise recording of all three primary colours. Celluloid film works the same way: it has three layers of light-sensitive grain, one for each colour.

With the introduction of larger-diameter CMOS chips, the three colour pixels were positioned next to each other instead of on top of each other (because building such large prisms with decent quality proved to be tricky). The major advantage is that you can lose the prism and use standard cine lenses instead of broadcast ones. But each pixel now carries the information of one single colour only. That doesn't sound like a big problem, but it is not easy to get a proper image this way. Trouble starts with the number of colours: how do you fit three colours symmetrically into a chessboard-style chip structure? Bryce E. Bayer of the Kodak Company, who patented the Bayer pattern in 1976, tried to mimic the physiology of the human eye, which is not equally sensitive to each colour, by using twice as many green pixels as red or blue ones. The Bayer pattern consists of a line of red and green pixels, followed by a line of blue and green pixels. Consequently, only every second line has information about red or blue.

Bayer pattern
Bayer pattern colour filter array.
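The 2×2 tile described above simply repeats across the whole sensor. As a small illustration, here is a NumPy sketch of building such a channel map; the function name and the RGGB ordering are one common convention for this exercise, not any specific camera's layout:

```python
import numpy as np

def bayer_mask(h, w, pattern="RGGB"):
    """Return an (h, w) array of channel indices (0=R, 1=G, 2=B)
    produced by repeating the 2x2 Bayer tile across the sensor."""
    channel = {"R": 0, "G": 1, "B": 2}
    tile = np.array([[channel[pattern[0]], channel[pattern[1]]],
                     [channel[pattern[2]], channel[pattern[3]]]])
    # Tile generously, then crop to the requested size.
    return np.tile(tile, (h // 2 + 1, w // 2 + 1))[:h, :w]

mask = bayer_mask(4, 4)
# Half of all photosites are green, a quarter red, a quarter blue --
# and each photosite records exactly one of the three channels.
print(mask)
```

Counting the entries of such a mask makes the opening claim concrete: each photosite records one channel out of three, so two thirds of the final RGB values have to be calculated.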

In order to create R, G and B values for every pixel location, a Bayer interpolation algorithm is used. This algorithm determines the colour of a pixel by averaging the values of the closest surrounding pixels. One consequence of this interpolation is that the effective resolution decreases, as the colour information is an approximation derived from several neighbouring samples. Furthermore, a series of unintended results can arise from these calculations. A frequent problem is so-called zipper artefacts. These are a typical result of bilinear debayering, the simplest way of creating values for each pixel location: calculating the average of all neighbouring pixels. Visually this gives edges and lines a zipper or chessboard look.
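Bilinear debayering as just described can be written as three small averaging convolutions over the sparse colour planes. The following is a minimal NumPy sketch assuming an RGGB mosaic, not a production implementation (real pipelines handle borders, clipping and sensor specifics far more carefully):

```python
import numpy as np

def conv3(x, k):
    """3x3 convolution with reflected borders, NumPy only."""
    p = np.pad(x, 1, mode="reflect")
    out = np.zeros(x.shape, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def bilinear_debayer(raw):
    """Bilinear demosaic of an RGGB mosaic (2-D float array)."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask

    # Averaging kernels: each missing site is filled from the
    # average of its recorded neighbours of that colour.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    return np.dstack([conv3(raw * r_mask, k_rb),
                      conv3(raw * g_mask, k_g),
                      conv3(raw * b_mask, k_rb)])
```

Because every missing value is an unweighted average of its neighbours, sharp edges get smeared across pixel pairs, which is exactly where the zipper and chessboard patterns come from.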

Debayering example
Debayering example 1.

Other unintended outcomes are false colour edges, or additional moiré patterns in areas of fine detail. This is already problematic in still images, but in a movie all these artefacts flicker and therefore become even more prominent.

Debayering example
Debayering example 2.

Various debayering algorithms have been developed to address these issues. In fact, most cameras and post-production software packages have their own mix of proprietary debayering algorithms, and some of them analyse and process each frame several times. Debayering algorithms in post-production in particular can be based on several stages of complex calculations, since, unlike cameras, they don't have to perform these renderings in real time.

debayering algorithms
Example of different debayering algorithms and their outputs.

Line detection and reconstruction, which is part of several debayering algorithms, is a method to sharpen edges and lines. However, it can make images look artificially sharp. It can also destroy the original camera noise and create a strange grain structure, resulting in an artificial look, particularly on human skin. Peculiar artefacts can emerge at the edges of fluorescent lights, or at other high-contrast edges such as the branches of a tree against a bright sky.
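To illustrate the general idea behind edge-aware interpolation (this is a textbook-style sketch, not SHOODAK's or any vendor's actual method): instead of blindly averaging, compare the horizontal and vertical gradients around a missing sample and interpolate along the direction that changes least, so that values are never averaged across an edge. A hedged sketch for a single missing green sample at an interior red or blue site:

```python
import numpy as np

def edge_directed_green(p, y, x):
    """Estimate the green value at a red/blue site (y, x) of mosaic p
    by interpolating along the direction with the smaller gradient.
    Illustrative only: real algorithms add gradient-based corrections
    and handle image borders."""
    dh = abs(p[y, x - 1] - p[y, x + 1])  # horizontal change in green
    dv = abs(p[y - 1, x] - p[y + 1, x])  # vertical change in green
    if dh < dv:
        # Strong vertical edge: interpolate along it, horizontally.
        return (p[y, x - 1] + p[y, x + 1]) / 2.0
    if dv < dh:
        return (p[y - 1, x] + p[y + 1, x]) / 2.0
    # No preferred direction: fall back to the plain average.
    return (p[y, x - 1] + p[y, x + 1] + p[y - 1, x] + p[y + 1, x]) / 4.0
```

The aggressive sharpening described above comes from pushing this kind of direction decision much harder, reconstructing whole lines rather than single samples.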

As mentioned, most manufacturers offer debayering designed to match their cameras' components and internal image processing in order to address these issues. Most aliasing and moiré artefacts nowadays are in fact caused by a missing low-pass filter rather than by inadequate debayering. However, artificial sharpness is still an issue, and it is responsible for the unnatural look some cameras produce.

Debayering is only one of the first steps in digital processing, often followed by downscaling, colour space conversion, gamma correction, the application of LUTs, compression and so forth. All these processing steps have their own issues and come with corresponding algorithms trying to solve them (again potentially leading to a series of side effects). And finally, every TV has its own set of processing algorithms that mess with the image, such as additional sharpening, noise reduction or 100/200 Hz motion interpolation.
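As a rough illustration of how such a chain composes (the function names here are hypothetical, and real pipelines are far more involved), each stage is just a transform applied to the previous stage's output, so every stage inherits, and can amplify, the artefacts of the ones before it:

```python
import numpy as np

def gamma_encode(linear, gamma=2.2):
    """Simple power-law gamma encoding of linear-light values in [0, 1].
    Real transfer functions (e.g. rec. 709) also have a linear toe."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

def apply_pipeline(img, steps):
    """Run an image through a list of processing stages in order.
    Each stage may introduce its own side effects, as described above."""
    for step in steps:
        img = step(img)
    return img
```

For example, `apply_pipeline(raw_rgb, [downscale, to_rec709, gamma_encode, apply_lut])` would express the order mentioned in the text, with every name after `gamma_encode` standing in for a stage this post does not define.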

The seemingly endless number of image processing and correction algorithms illustrates how cameras as black boxes, together with proprietary software, take control out of the hands holding the camera. Taking back that control is more than overdue.

To address the debayering issues described here, my team and I are working on SHOODAK, a debayering method based on a rather simple algorithm. It aims to make the image look as natural as possible and to avoid most of the common side effects, such as artificial over-sharpening. Like all professional algorithms, SHOODAK also needs to suppress the artefacts that the Bayer pattern itself creates. Inevitably, this results in the image being softer and containing more noise, especially at edges.

Consequently, SHOODAK will not be a good choice for producing glossy, high-resolution photographs. However, it will help create a rough look or a period style, and it will definitely improve portrait shots. SHOODAK isn't trying to mimic chemical film or to create artificial grain, yet we perceive its reproduction of image sharpness to be closer to that achieved by 35mm or 16mm film.

Below is a comparison of images debayered with DaVinci Resolve and with SHOODAK. No additional sharpening was applied to either picture. The differences in colour grading and contrast should be ignored when comparing them, as SHOODAK still lacks a good LUT matching the one from Resolve. The comparison is shown here to illustrate the differences in image structure, sharpness, and line and edge reproduction.

Debayered with DaVinci Resolve
Debayered with DaVinci Resolve (Setting: Soft, no Anti-Aliasing).
Debayered with SHOODAK
Debayered with SHOODAK.

In time, the goal is to have SHOODAK ready to work with the AXIOM Beta. Until then, optional grain simulation should be implemented, and SHOODAK could be integrated into different post-production software bundles. It will only work with CinemaDNG files. To optimise the workflow with SHOODAK, it will be necessary to use a low-pass filter: since SHOODAK is not based on complex algorithms, there is no software compensation for aliasing, moiré and line jitter. The first release is going to be a beta version, so extensive testing is advised and feedback greatly appreciated.

SHOODAK could improve cinematographic image processing and, as we hope, make shooting even more fun.



Want to participate?

Want to provide feedback or suggest improvements? Want to try your own approach and need help?

Get in touch via IRC or email.