We have now resolved the issues we were facing with HDMI video recorders. It turns out that monitors are capable of synchronising to whatever frequency you supply them with, making them very tolerant and usable straight out of the box - even with non-standard signals. Recorders, on the other hand, are a lot less flexible when it comes to the frequency of the supplied input signal: they will stubbornly refuse to do anything unless it is exactly right. Unfortunately, the documentation we've found for this is often contradictory - some sources define the frequency as standard, whilst others list a different specification. As a result of this confusion, we ended up cycling through multiple frequencies, adding a few Hz at a time, until an image appeared on the HDMI recording device.
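The brute-force search described above can be sketched as a simple sweep. This is a hypothetical illustration, not our actual tooling: `recorder_has_signal` simulates a picky recorder (the real check was done by eye), and the 148.5 MHz target is simply the nominal 1080p60 pixel clock used as an example.

```python
def recorder_has_signal(freq_hz, accepted_hz=148_500_000, tolerance_hz=5):
    """Simulate an HDMI recorder that only locks onto a near-exact frequency."""
    return abs(freq_hz - accepted_hz) <= tolerance_hz

def find_working_frequency(start_hz, stop_hz, step_hz=10):
    """Sweep the pixel clock a few Hz at a time until an image appears."""
    freq = start_hz
    while freq <= stop_hz:
        # here the real hardware would reprogram its clock generator to `freq`
        if recorder_has_signal(freq):
            return freq
        freq += step_hz
    return None  # no frequency in the range made the recorder lock

print(find_working_frequency(148_499_900, 148_500_100))
```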
Please note that this footage contains the first essentially unprocessed raw image samples (debayered, so no longer in the original Bayer pattern) ever recorded with the Axiom Alpha prototype. Whilst this is a major milestone, it represents only our first step through the door and into the beginning of the actual tweaking. Also keep in mind that this is TEST footage, not captured with the intent to showcase the capabilities of the camera, but rather to prove that it is working at all. While we think you can already see some potential in the image quality, the video is simply NOT meant to be beautiful yet. As it stands, the video signal output from the Axiom Alpha still carries some flaws. Let's take a look at them in detail:
Due to the limitations of the HDMI encoder chip implemented on the Zedboard, we are currently outputting a rather exotic colour mode: RGB 4:2:2. This results in 1-pixel colour shifts in some situations, visible as red/blue tints around vertical lines. We are currently investigating alternative modes that eliminate this problem. Please also keep in mind that YouTube re-compresses every video that is uploaded, so even when viewing at 1080p you will still see noticeable compression artifacts, irrespective of how crisp the original upload might be. Since we are still in the process of tweaking and fine-tuning everything, we don't mind these compression artifacts for now. You can rest assured that as soon as we have a greater selection of artistic footage ready to showcase, we will provide high quality online video playback / download of these clips.
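To see why sharing colour samples between horizontal pixel pairs produces 1-pixel tints at sharp vertical edges, here is a toy sketch - not the Zedboard encoder's actual pipeline, and the luma/colour-difference transform is a deliberately crude stand-in:

```python
def clamp(x):
    return max(0, min(255, x))

def rgb_to_ycc(p):
    r, g, b = p
    y = (r + 2 * g + b) // 4   # crude luma
    return y, b - y, r - y     # luma, blue-difference, red-difference

def ycc_to_rgb(y, cb, cr):
    r = clamp(y + cr)
    b = clamp(y + cb)
    g = clamp((4 * y - r - b) // 2)
    return r, g, b

def subsample_422(row):
    """Keep luma per pixel, but share one colour-difference pair per two pixels."""
    ycc = [rgb_to_ycc(p) for p in row]
    out = []
    for i in range(0, len(ycc), 2):
        (y0, cb0, cr0), (y1, cb1, cr1) = ycc[i], ycc[i + 1]
        cb, cr = (cb0 + cb1) // 2, (cr0 + cr1) // 2  # shared chroma per pair
        out += [ycc_to_rgb(y0, cb, cr), ycc_to_rgb(y1, cb, cr)]
    return out

row = [(128, 128, 128), (128, 128, 128), (128, 128, 128), (255, 0, 0)]
print(subsample_422(row))
# The third (grey) pixel picks up a red tint because it shares its chroma
# samples with the red neighbour; the first, all-grey pair is unaffected.
```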
We have fixed several bugs in the calibration software and can now measure/calibrate the fixed pattern noise directly inside the camera. The challenge here is that an accurate calibration requires a perfectly evenly lit image with all colour channels at equal brightness levels. Any slight deviation from this results in a sub-optimal calibration profile, meaning the FPN corrections can quickly end up creating more artifacts than they remove. Under optimal conditions we've witnessed a 100% elimination of the fixed pattern, however for this we had to create a special device with RGB LEDs controlled by the camera itself, attached to the camera in place of a lens. We are now considering producing a 3D printed version of this, which we could use/distribute with far greater ease in the future. As a final note on this topic, the correction of the pattern is already running inside the FPGA on 4K footage in real time.
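The calibration idea can be sketched in a few lines, assuming a simple additive fixed pattern: average several frames of a perfectly even scene to estimate per-pixel offsets, then subtract that profile from live frames. (The camera's actual FPGA pipeline is more involved; all names here are made up for illustration.)

```python
def build_fpn_profile(calibration_frames):
    """Per-pixel mean over evenly lit frames; deviation from the overall mean is the pattern."""
    n = len(calibration_frames)
    h, w = len(calibration_frames[0]), len(calibration_frames[0][0])
    mean = [[sum(f[y][x] for f in calibration_frames) / n for x in range(w)]
            for y in range(h)]
    flat = sum(sum(row) for row in mean) / (h * w)  # target level of an even scene
    return [[mean[y][x] - flat for x in range(w)] for y in range(h)]

def correct_fpn(frame, profile):
    """Subtract the stored per-pixel offsets from a live frame."""
    return [[frame[y][x] - profile[y][x] for x in range(len(frame[0]))]
            for y in range(len(frame))]

# Two noiseless "flat field" frames whose middle column reads +10 too high:
flat_frames = [[[100, 110, 100], [100, 110, 100]]] * 2
profile = build_fpn_profile(flat_frames)
print(correct_fpn([[50, 60, 50], [50, 60, 50]], profile))
# The column pattern vanishes: every corrected pixel lands on the same value.
```

Note the caveat from the paragraph above: if the calibration frames are not truly even, the unevenness itself gets baked into the profile and subtracted from every subsequent frame, adding artifacts instead of removing them.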
For those of you who may not be aware, our image sensor features an HDR mode called "Piecewise Linear Response (PLR) Mode". This essentially means that you can add two knee-points to the response curve inside the image sensor itself, achieving a more logarithmically shaped response curve closer to that of film negative - with a substantially extended dynamic range in the highlights. Adding knee-points to reduce clipping is common in image processing; the essential difference here is that the curve is not applied to the image data after it has been captured but while you are capturing it. As we can save the applied knee-point positions / parameters together with the footage, we can fully reconstruct the linear colour data in post production. Here is the first test we have undertaken with this feature (there are many knobs for tweaking - and we've only just started turning them):
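To make the two-knee-point idea concrete, here is a sketch of such a piecewise linear curve and its exact inverse - the inversion is what lets post production recover linear data when the knee parameters are stored with the footage. The specific knee positions and slopes below are arbitrary examples, not the sensor's actual PLR parameters.

```python
BREAKS = (2000.0, 6000.0)     # input levels of the two knee-points (example values)
SLOPES = (1.0, 0.25, 0.0625)  # segment slopes: linear, then progressively flatter

def plr_forward(x):
    """Apply the two-knee piecewise linear response, compressing highlights."""
    y, prev = 0.0, 0.0
    for knee, slope in zip(BREAKS + (float('inf'),), SLOPES):
        if x <= knee:
            return y + slope * (x - prev)
        y += slope * (knee - prev)
        prev = knee

def plr_inverse(y):
    """Reconstruct the linear value from the stored knee parameters."""
    prev_x, prev_y = 0.0, 0.0
    for knee, slope in zip(BREAKS + (float('inf'),), SLOPES):
        knee_y = prev_y + slope * (knee - prev_x)
        if y <= knee_y:
            return prev_x + (y - prev_y) / slope
        prev_x, prev_y = knee, knee_y

# Below the first knee the response is untouched; far above it, a large
# input is compressed heavily yet remains exactly recoverable:
print(plr_forward(1500), plr_forward(10000), plr_inverse(plr_forward(10000)))
```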