Current Development Status (5th December 2016)
17 AXIOM Beta developer kits have been shipped and more are being manufactured. The summer has been a bit slower as people were on vacation or on bigger work assignments abroad, but now we are back and rolling again. Our main focus now is to increase the speed of our electronics production at our assembly facility/office Amescon by using more automated workflows (currently we build everything 100% manually, by hand, and one Beta has over 500 individual components to be placed). We are also building machines that help us build or test the AXIOM Beta hardware quicker, like the open hardware flying probe tester or a machine that cleans residue from the soldering process off finished PCBs.
Currently we are working on three approaches in parallel:

1. We are assembling a Liteplacer (DIY PnP machine kit).
2. We are improving the C#/WPF based software of the Amescon-created custom PnP machine - if you are interested in helping out here please get in touch with us.
3. We will receive a beta-testing PnP machine from SmallSMT - the VisionPlacer 3000d - around the start of 2017.

If you are skilled in working with fine-pitch electronics, please apply to join our hardware production team in Vienna.
The skeleton enclosure is the first mechanical design of the AXIOM Beta enclosure, intended especially for early adopters and developers: it does not actually enclose the hardware but rather holds it together. That way developers can easily access the hardware without disassembling it. The skeleton is milled from aluminum and coated black. Since the hardware is fully exposed, it is not suitable for outdoor operation.
It is not meant to be pretty but to provide a simple-to-manufacture shell around the skeleton and AXIOM Beta hardware that protects it from all sides. The scope is to create a design that anybody can source locally and produce cheaply with 3D printed parts. The Simple enclosure will be created after the skeleton design is completed.
Milled from several aluminum pieces and available with different coating options, the full enclosure should provide easy access to all connectors and interfaces while protecting the internal hardware with a solid metal shell. The full enclosure will have several modular parts held in place with metal screws. It will provide several 3/8” and 1/4” mount points in key places. Simple assembly and disassembly for access and repairability is of course also a goal.
This requires creating software that runs inside the camera to apply the corrections (DSNU + PRNU) in real time in the FPGA, as well as software/methods for performing the calibrations and verifying the results. Overcompensation can quickly make the image worse than before compensation, so this will require some tweaking and optimising over time.
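To illustrate the idea behind the correction, here is a minimal NumPy sketch of per-pixel fixed pattern noise compensation. The function name, the 12-bit clipping range, and the pre-computed dark/gain maps are illustrative assumptions, not the actual FPGA implementation:

```python
import numpy as np

def calibrate_frame(raw, dark, gain):
    """Apply per-pixel dark offset (DSNU) and gain (PRNU) correction.

    raw  -- sensor frame in digital numbers
    dark -- per-pixel dark offset map, e.g. averaged from dark frames
    gain -- per-pixel gain map, e.g. derived from averaged flat fields
    """
    corrected = (raw.astype(np.float32) - dark) * gain
    # clip to a hypothetical 12-bit output range
    return np.clip(corrected, 0, 4095)
```

In the camera the same subtract-and-multiply runs per pixel in the FPGA pipeline; the calibration step's job is producing good `dark` and `gain` maps in the first place.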
This requires software inside the camera's FPGA to run real-time matrix color conversion (e.g. white balancing, offsets, channel merging, color effects, color space conversion) and developing a matching color profiling method with defined lighting and pre-measured color charts as reference.
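A matrix color conversion is conceptually just a 3x3 multiply per pixel. The sketch below shows the math; the matrix values are made up for illustration (each row sums to 1 so neutral gray stays neutral), whereas real values would come from the profiling step described above:

```python
import numpy as np

# Hypothetical 3x3 color correction matrix; each row sums to 1 so that
# neutral gray input stays neutral. Real values would be derived from
# profiling a pre-measured color chart under defined lighting.
ccm = np.array([[ 1.5, -0.3, -0.2],
                [-0.2,  1.4, -0.2],
                [-0.1, -0.4,  1.5]])

def apply_ccm(rgb, matrix):
    """Apply a 3x3 matrix to every pixel of an H x W x 3 RGB image."""
    return rgb @ matrix.T
```

White balancing is the special case of a diagonal matrix (per-channel gains); the FPGA applies the combined matrix to every pixel at full video rate.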
Every image sensor has millions of pixels, and a tiny fraction of them is statistically bound to fall outside the expected response bounds or not work at all - that's normal, and the missing value simply gets replaced by the average of its neighboring pixels. This software takes care of this in real time and manages the positions/addresses of these dead pixels.
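A minimal sketch of the replacement step, assuming a plain monochrome frame and a known list of dead pixel coordinates (on real Bayer data you would average same-color neighbors two pixels away instead of direct neighbors):

```python
import numpy as np

def fix_dead_pixels(frame, dead_coords):
    """Replace each listed dead pixel with the mean of its valid neighbors."""
    out = frame.astype(np.float32).copy()
    h, w = frame.shape
    dead = set(dead_coords)
    for y, x in dead_coords:
        # collect up/down/left/right neighbors that exist and are not dead
        neighbors = [out[ny, nx]
                     for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                     if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in dead]
        if neighbors:
            out[y, x] = sum(neighbors) / len(neighbors)
    return out
```

The coordinate list corresponds to the managed positions/addresses mentioned above; it is produced once during calibration and then applied to every frame.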
Canon, Nikon, MFT and Sony lens communication and control are planned; the actual features and implementations depend on the availability of protocol information and documentation, and beyond that on the success of reverse engineering anything that is not documented.
The background behind this idea is that a raw image actually contains less data than a color image: with a Bayer pattern image sensor, not every pixel sees every color. The colors get reconstructed in the so-called debayering process, which for raw footage typically happens in post production. So in an RGB recording with 8 bits per channel we get 24 bits of space per pixel to park our data in. Since most recorders do chroma subsampling, e.g. 4:2:2, the effectively available space is reduced to 16 bits per pixel. Now the trick is to just store “monochrome” raw pixels in that space: two 12-bit raw pixels fit into one 24-bit RGB 4:4:4 pixel, which allows recording e.g. twice the resolution or twice the frame rate in a traditional 1080p datastream. If your recorder also supports the double frame rate (e.g. 1080p60 if you aim for 1080p30) you actually get 4 times the bandwidth. 4K (or actually UHD) has four times as many pixels as HD, so voilà - that is the experimental 4K raw storage mode.
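The packing trick can be sketched in a few lines; the function names are hypothetical, but the bit layout shown (raw sample A in the top 12 bits, sample B in the bottom 12, split across the R/G/B bytes) is one straightforward way to do it:

```python
def pack_pixels(a, b):
    """Pack two 12-bit raw samples into one 24-bit RGB 4:4:4 pixel."""
    assert 0 <= a < 4096 and 0 <= b < 4096
    word = (a << 12) | b  # 24 bits total
    return (word >> 16) & 0xFF, (word >> 8) & 0xFF, word & 0xFF

def unpack_pixels(r, g, b):
    """Recover the two 12-bit raw samples from an RGB pixel."""
    word = (r << 16) | (g << 8) | b
    return (word >> 12) & 0xFFF, word & 0xFFF
```

Because the round trip is lossless, a recorder that stores the RGB stream untouched preserves every raw sample bit-exactly.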
This of course means that the recorded video is not viewable out of the box anymore. It's not actually an image sequence you see when playing back the recording; it's a visualization of a datastream. With the right interpretation (which any raw format needs anyway), all the original raw data can be utilized as raw footage. Initially this could be accomplished through a simple file conversion (ffmpeg, custom plugins, etc.), and eventually - much sooner than later with community support - it could be widely adopted by NLEs and raw image/video processing software.
Currently the Sensor Interface Board is named “Dummy” because it just forwards 32 of the 64 LVDS lanes from the sensor to the MicroZed, effectively limiting the sensor to 150 FPS at 4K/10 bit. The next generation of this board will feature an FPGA that interfaces all 64 LVDS data lanes and can also be utilized to preprocess the data - in the future this FPGA should act as a bridge between any future image sensor and the rest of the AXIOM Beta hardware.
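A back-of-the-envelope check of why halving the lanes halves the frame rate; the pixel count and per-lane data rate below are assumptions taken from the CMV12000 datasheet ballpark (4096x3072 pixels, up to 600 Mbit/s per LVDS lane), not measured values:

```python
def max_fps(lanes, lane_mbps=600, width=4096, height=3072, bits=10):
    """Approximate frame rate limit from raw LVDS link bandwidth."""
    frame_bits = width * height * bits
    return lanes * lane_mbps * 1_000_000 / frame_bits

# All 64 lanes give roughly 300 FPS at 10 bit; the 32 forwarded lanes
# halve that to roughly 150 FPS.
```

The model ignores protocol and blanking overhead, so real figures land slightly below these ceilings.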
Currently the Power board generates a fixed set of voltages matching the requirements of the AXIOM Beta's current components. New image sensors, shields or plugin modules could require different voltages though, so the next generation of the Power board will be able to generate voltages defined via software, effectively paving the way for implementing any future components.
Since the camera is running Linux, you can use a simple Wi-Fi dongle to access it. This allows low-level access via SSH/FTP/SCP/etc. as well as operating high-level graphical user interfaces via HTTP from any mobile device's browser.
The 1080p60 4:4:4 HDMI output module is finished; the AXIOM Beta can accommodate up to two of these plugin modules and supply them with independent video streams.
Triple PMOD debug inputs/outputs for connecting a wide range of external PMOD devices - mainly intended for development and testing when General Purpose Input/Output (GPIO) is required.
Three independent DisplayPort links act as a direct connection between the AXIOM Beta and the AXIOM Bridge to capture 4K raw data. They should also work directly with DisplayPort-to-HDMI converter cables.
Three independent HDMI links to either supply different devices with different streams or increase the overall FPS with multiple recorders.
This plugin module will allow recording 4K video on an external recorder via a standard 2160p30 signal. It will of course also work to supply 4K screens with a signal.