Over the last few months, A1ex from the Magic Lantern team has been leading the effort to capture tens of thousands of still images with our AXIOM Beta prototype in order to measure and analyze a wide range of image sensor characteristics. To get the most out of the image, one really has to understand what is going on inside the silicon and how all the different aspects influence each other. We have learned a lot and have already been able to draw a number of conclusions and derive methods for improving image quality, but all in all, we are still in the middle of getting to grips with the image sensor. And since both the AXIOM Beta and AXIOM Gamma initially utilize the CMV12000 image sensor, any results and workflows obtained apply to both cameras. This article gives a brief overview of some of the most important topics we are currently working on, with links to the corresponding wiki pages containing more complete in-depth information.
This covers the static offset of each pixel, which we have so far referred to as Fixed Pattern Noise (FPN). The traditional method of compensation is subtracting an averaged dark frame (an image captured with the lens cap on). This approach is simple enough to accomplish - the problem is that the offset changes with analog gain (ISO), exposure time and other sensor settings such as offset, black sun protection, PLR configuration and so on. Obviously, we don’t want to store hundreds of dark frames, one for each particular combination of settings. By identifying the dark current (the electrical signal the sensor produces even when no photons hit it), we are able to compute a dark frame applicable to any usual exposure time, while storing only one or two reference frames per gain. The first reference frame is called a bias frame (a zero-length exposure that contains only the static black offsets); the second reference frame, if used, is called a dark current frame.
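The two reference frames combine into a synthetic dark frame for any exposure time. A minimal numpy sketch of the idea follows; the function names and the simple linear dark-current model are illustrative assumptions, not the actual AXIOM calibration code:

```python
import numpy as np

def synthesize_dark_frame(bias_frame, dark_frame, exposure_ms, ref_exposure_ms):
    """Compute a dark frame for an arbitrary exposure time.

    bias_frame: averaged zero-length exposure (static black offsets only).
    dark_frame: averaged dark frame captured at ref_exposure_ms.
    Both are float arrays in DN, captured at the same gain.
    """
    # Per-pixel dark current in DN per millisecond, estimated from the
    # reference dark frame after removing the static bias.
    dark_current = (dark_frame - bias_frame) / ref_exposure_ms
    # Dark signal grows (approximately) linearly with exposure time.
    return bias_frame + dark_current * exposure_ms

def correct_raw(raw, bias_frame, dark_frame, exposure_ms, ref_exposure_ms):
    """Subtract the synthesized dark frame from a raw capture."""
    return raw - synthesize_dark_frame(bias_frame, dark_frame,
                                       exposure_ms, ref_exposure_ms)
```

With this model, two stored frames per gain replace a whole library of per-exposure dark frames.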
In general, most image sensors are considered “linear”, meaning that receiving twice the number of photons in a sensel (a pixel on the sensor) will generate a digital value that is twice as big. In a test setup we captured a color chart starting at 1 ms exposure time, then took one hundred images, increasing the exposure by 1 ms each time, until we arrived at 100 ms. To remove any noise components, the entire sequence was repeated 100 times and the images taken with identical settings were then averaged together.
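The averaging and the linearity check can be sketched like this (a toy simulation standing in for the real capture data; the array shapes and the perfectly linear signal are assumptions for illustration):

```python
import numpy as np

# Stand-in for the captured data: stack[r, e] is the 4x4 image from
# repetition r at exposure (e + 1) ms.
rng = np.random.default_rng(0)
exposures_ms = np.arange(1, 101)                 # 1 ms .. 100 ms
true_signal = 40.0 * exposures_ms                # perfectly linear sensor
stack = true_signal[None, :, None, None] + rng.normal(0, 5, (100, 100, 4, 4))

# Averaging the 100 repetitions suppresses temporal noise by sqrt(100) = 10x.
mean_frames = stack.mean(axis=0)                 # shape (100, 4, 4)

# Fit value = gain * exposure + offset for one pixel; small residuals
# confirm the linearity assumption over this exposure range.
v = mean_frames[:, 0, 0]
gain, offset = np.polyfit(exposures_ms, v, 1)
residual = v - (gain * exposures_ms + offset)
```

On real data, a large residual (or a gain that drifts between rows) is exactly the kind of nonlinearity this test is designed to expose.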
After correcting the black offsets, most of what we previously called Fixed Pattern Noise (FPN) is gone. An important key in solving this riddle was noticing that there are two parallel readout circuits, one for odd rows and one for even rows, each with slightly different electrical characteristics (noise levels, offset, gain). What’s left is a dynamic row noise that changes from frame to frame. We cannot fix this one with calibration frames, because it’s random (uncorrelated), but we can use the black reference columns at the edge of the sensor to measure and compensate it. The key was not simply subtracting the row averages measured in the black columns, but applying optimal averaging of random variables from Kalman filter theory, which says we should subtract only a fraction of the black column variations in order to minimize the standard deviation of the corrected row noise. Currently all these compensations are done in post. The first image below uses a simpler technique that, along with black calibration, can be implemented in the FPGA for real-time in-camera correction.
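The “subtract only a fraction” idea can be sketched as follows; the function signature and the pre-calibrated variance inputs are assumptions for illustration, not the actual pipeline:

```python
import numpy as np

def correct_row_noise(frame, black_cols, var_row, var_black):
    """Subtract the optimal fraction of the black-column row averages.

    frame:      active image area, shape (rows, cols)
    black_cols: black reference columns for the same rows, shape (rows, n_black)
    var_row:    variance of the dynamic row noise (calibrated beforehand)
    var_black:  variance of the noise in the black-column row mean
    """
    # Row-wise estimate of the dynamic row offset, measured in the black columns.
    row_est = black_cols.mean(axis=1, keepdims=True)
    row_est = row_est - row_est.mean()      # keep the global black level intact
    # Optimal gain from estimation theory: subtracting only this fraction of the
    # (noisy) measurement minimizes the variance of the corrected row noise.
    k = var_row / (var_row + var_black)
    return frame - k * row_est
```

Because the black-column measurement is itself noisy, subtracting it in full would inject that measurement noise into every row; the fractional gain trades off the two noise sources.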
A huge advantage of the Piecewise-Linear-Response (PLR) HDR mode is that it can be fine-tuned to capture as many additional stops of dynamic range in the highlights as needed - at the cost of motion artifacts, light sensitivity and highlight noise if you go to extreme settings. Currently we are trying to identify the response curves and find a mathematical model for them, so that separate calibration files are not required for every particular combination of settings.
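To make the shape of such a response concrete, here is a minimal sketch of a two-knee piecewise-linear curve and its inverse for linearization. The knee positions and slopes are made-up illustrative numbers, not calibrated CMV12000 values:

```python
import numpy as np

def plr_response(x, knees, slopes):
    """Piecewise-linear response: unity slope up to the first knee, then
    progressively shallower segments that compress the highlights.
    Needs len(slopes) == len(knees) + 1 and a monotonic knee list."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    lo = 0.0
    for hi, slope in zip(list(knees) + [np.inf], slopes):
        y = y + (np.clip(x, lo, hi) - lo) * slope   # this segment's contribution
        lo = hi
    return y

def plr_linearize(y, knees, slopes, x_max=65535.0):
    """Invert the response (slopes must be > 0) to recover linear values."""
    x_pts = np.concatenate(([0.0], knees, [x_max]))
    y_pts = plr_response(x_pts, knees, slopes)      # response at the breakpoints
    return np.interp(y, y_pts, x_pts)               # piecewise-linear inverse
```

Once the knee positions and slopes are known for a given setting, the same inversion recovers linear sensor values without a per-setting calibration file - which is exactly why a mathematical model of the curves is worth having.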
An interesting behavior of this image sensor is that, even with PLR HDR turned off, highlights do not clip harshly to white. As a proof of concept, we were able to recover about 2 additional stops of highlights from a regular overexposed image. The result is noisy and the colors aren’t accurate yet, but the data is there, and with proper calibration, we hope to make it usable.