Current Development Status (18th July 2017)
25 AXIOM Beta developer kits have been built and more are being manufactured. Our main focus is now to speed up electronics production with more automated workflows using pick-and-place machines at our assembly facility/office, Amescon (currently we build everything 100% by hand, and one Beta has over 500 individual components to place). Our pick-and-place machine has successfully populated the very first AXIOM Beta Mainboard PCB! The current development focus is a USB 3.0 plugin module that will allow us to stream uncompressed 4K raw video to a connected PC (hardware design is complete; software/FPGA logic development is in progress). We are participating in Google Summer of Code for the first time this year, and three projects were approved. We need help with the mechanical CAD design of the AXIOM Beta enclosure. Team Talk 13 has been shot, and the first episode will be released in August. We have also started holding public team meetings on IRC every 2-3 weeks.
The skeleton enclosure is the first mechanical design for the AXIOM Beta and is intended especially for early adopters and developers: it does not actually enclose the hardware but rather holds it together. That way developers can easily access the hardware without disassembling it. The skeleton is milled from aluminum and coated black. Since the hardware is fully exposed, it is not suitable for outdoor operation.
It’s not meant to be pretty but to provide a simple-to-manufacture shell around the skeleton and the AXIOM Beta hardware that protects it from all sides. The goal is a design that anybody can source locally and that is cheap to produce with 3D-printed parts. The Simple enclosure will be created after the skeleton design is completed.
Milled from several aluminum pieces and available with different coating options, the full enclosure should provide easy access to all connectors and interfaces while protecting the internal hardware with a solid metal shell. It will consist of several modular parts held in place with metal screws and will provide 3/8” and 1/4” mount points in key places. Simple assembly and disassembly for access and repairability is, of course, also a goal.
This requires creating software that runs inside the camera to apply the corrections (DSNU + PRNU) in real time in the FPGA, as well as software and methods for performing the calibrations and verifying the results. Overcompensation can quickly make the image worse than no compensation at all, so this will require some tweaking and optimising over time.
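The per-pixel correction itself is conceptually simple. Below is a minimal sketch in NumPy, run on a host computer rather than in the FPGA, with hypothetical calibration frames; it is meant only to illustrate the DSNU (offset) and PRNU (gain) steps, not the actual AXIOM Beta implementation:

```python
import numpy as np

def calibrate(dark_frames, flat_frames):
    """Derive per-pixel correction maps from stacks of calibration frames.

    dark_frames: exposures taken with the sensor capped (captures DSNU offsets).
    flat_frames: exposures of a uniform light source (captures PRNU gain).
    """
    dsnu = np.mean(dark_frames, axis=0)         # fixed-pattern offset per pixel
    flat = np.mean(flat_frames, axis=0) - dsnu  # offset-corrected flat field
    prnu_gain = np.mean(flat) / flat            # normalize each pixel's response
    return dsnu, prnu_gain

def correct(raw, dsnu, prnu_gain):
    # Subtract the per-pixel offset, then scale each pixel to the mean response.
    return (raw - dsnu) * prnu_gain
```

Averaging many calibration frames suppresses temporal noise so that only the fixed pattern remains; the in-camera version would apply the same subtract-and-multiply per pixel in FPGA logic.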
This requires software inside the camera’s FPGA to run real-time matrix color conversion (e.g. white balancing, offsets, channel merging, color effects, color space conversion), plus development of a matching color profiling method that uses defined lighting and pre-measured color charts as references.
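A matrix color conversion boils down to a per-channel gain followed by a 3×3 matrix multiply per pixel. The sketch below uses NumPy and purely illustrative matrix and white-balance values (real coefficients would come from profiling against a measured color chart); it is not the camera’s actual pipeline:

```python
import numpy as np

# Illustrative 3x3 correction matrix; each row sums to 1.0 so neutral
# gray stays neutral. Real values come from color-chart profiling.
COLOR_MATRIX = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.0, -0.5,  1.5],
])

WB_GAINS = np.array([2.0, 1.0, 1.6])  # hypothetical per-channel WB gains

def apply_color_pipeline(rgb, matrix=COLOR_MATRIX, wb=WB_GAINS):
    """Apply white balance, then a 3x3 matrix, to an (H, W, 3) float image."""
    balanced = rgb * wb  # per-channel gain (white balance)
    # Multiply each pixel's RGB vector by the matrix: out = M @ rgb
    return np.einsum('ij,hwj->hwi', matrix, balanced)
```

In the FPGA the same arithmetic runs as fixed-point multiply-accumulates on the live pixel stream.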
Every image sensor has millions of pixels, and a tiny fraction of them is statistically outside the expected response bounds or does not work at all. That’s normal, and the missing value simply gets replaced by the average of neighboring pixels. This software takes care of this in real time and manages the positions/addresses of these dead pixels.
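The replacement step can be sketched in a few lines. This NumPy version averages the same-color Bayer neighbors (two pixels away in each direction) for each listed dead pixel; it is a simplified host-side illustration, not the real-time in-camera code:

```python
import numpy as np

def fix_dead_pixels(raw, dead_coords):
    """Replace each dead pixel with the mean of its same-color Bayer
    neighbors (two pixels away, so the neighbor sees the same color)."""
    out = raw.astype(np.float64).copy()
    h, w = raw.shape
    for y, x in dead_coords:
        neighbors = [raw[ny, nx]
                     for ny, nx in ((y - 2, x), (y + 2, x), (y, x - 2), (y, x + 2))
                     if 0 <= ny < h and 0 <= nx < w]
        out[y, x] = np.mean(neighbors)
    return out
```

The list of dead-pixel coordinates would be determined once during calibration and then stored, so the camera only has to patch known addresses on each frame.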
Canon, Nikon, MFT and Sony lens communication and control are planned. The actual features and implementations depend on the availability of protocol information and documentation, and otherwise on the success of reverse engineering anything that is not documented.
The idea behind this is that a raw image actually contains less data than a color image, because with a Bayer-pattern image sensor not every pixel sees every color. The colors are reconstructed in the so-called debayering process, which for raw footage typically happens in post production. So in an RGB recording with 8 bits per channel we get 24 bits of space per pixel to park our data in. Since most recorders do chroma subsampling (e.g. 4:2:2), the effectively available space is reduced to 16 bits per pixel. Now the trick is to simply store “monochrome” raw pixels in that space: two 12-bit raw pixels fit into one 24-bit RGB 4:4:4 pixel, which would allow recording e.g. twice the resolution or twice the frame rate in a traditional 1080p data stream. If your recorder also supports double the frame rate (e.g. 1080p60 when you aim for 1080p30), you actually get four times the bandwidth. 4K (or actually UHD) has four times as many pixels as HD, so voilà: that is the experimental 4K raw storage mode.
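The packing arithmetic can be shown in a few lines. The bit layout below (first sample in the high 12 bits, second in the low 12 bits, then split into R/G/B bytes) is illustrative; the actual AXIOM mapping may differ:

```python
def pack_pixels(p0, p1):
    """Pack two 12-bit raw samples into one 24-bit RGB 4:4:4 pixel."""
    assert 0 <= p0 < 4096 and 0 <= p1 < 4096
    word = (p0 << 12) | p1        # 24 bits total
    r = (word >> 16) & 0xFF       # high byte
    g = (word >> 8) & 0xFF        # middle byte
    b = word & 0xFF               # low byte
    return r, g, b

def unpack_pixels(r, g, b):
    """Recover the two 12-bit raw samples from one RGB pixel."""
    word = (r << 16) | (g << 8) | b
    return (word >> 12) & 0xFFF, word & 0xFFF
```

Because the mapping is a pure bit rearrangement, the round trip is lossless, which is exactly why the “video” stream can later be reinterpreted as raw sensor data.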
This of course means that the recorded video is no longer viewable out of the box. It’s not actually an image sequence you see when playing back the recording; it’s a visualization of a datastream. With the right interpretation (which any raw format needs anyway), all the original raw data can be utilized as raw footage. Initially this could be accomplished through a simple file conversion (ffmpeg, custom plugins, etc.) and eventually (much sooner with community support) be widely adopted by NLEs and raw image/video processing software.
Currently the Sensor Interface Board is named “Dummy” because it just forwards 32 of the 64 LVDS lanes from the sensor to the MicroZed, effectively limiting the sensor to 150 FPS at 4K/10 bit. The next generation of this board will feature an FPGA to interface all 64 LVDS data lanes, which can also be utilized to preprocess the data. In the future this FPGA should act as a bridge between any future image sensor and the rest of the AXIOM Beta hardware.
Currently the Power board generates a fixed set of voltages matching the requirements of the AXIOM Beta’s current components. New image sensors, shields or plugin modules could require different voltages, though, so the next generation of the Power board will be able to generate voltages defined via software, effectively paving the way for implementing any future components.
Since the camera runs Linux, you can use a simple WiFi dongle to access it. This allows low-level access via SSH/FTP/SCP/etc. as well as operating high-level graphical user interfaces over HTTP from any mobile device’s browser.
Single PMOD debug inputs/outputs for connecting a wide range of external PMOD devices - mainly intended for development and testing when General Purpose Input/Output (GPIO) is required.
The 1080p60 4:4:4 HDMI output module is finished. The AXIOM Beta can accommodate up to two of these plugin modules and supply them with independent video streams.
Triple PMOD debug inputs/outputs for connecting a wide range of external PMOD devices - mainly intended for development and testing when General Purpose Input/Output (GPIO) is required.
Three independent DisplayPort links act as versatile video output ports. Adapters, e.g. to HDMI, are also supported directly.
Allows recording 4K/UHD video on an external recorder with a standard 2160p signal. It will, of course, also work for supplying 4K/UHD screens with a signal.
Offers 3.2 Gbit/s of throughput, which corresponds to 400 MByte/s: enough to record uncompressed 4096×2160 12-bit raw video at 25 FPS to a connected computer.
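A quick back-of-the-envelope check confirms that this data rate fits the stated budget:

```python
# Uncompressed raw bandwidth for 4096x2160 @ 12 bit, 25 FPS:
bits_per_frame = 4096 * 2160 * 12          # 106,168,320 bits per frame
bits_per_second = bits_per_frame * 25      # ~2.65 Gbit/s
mbytes_per_second = bits_per_second / 8 / 1e6  # ~332 MB/s

# ~2.65 Gbit/s (~332 MB/s) stays comfortably within the module's
# 3.2 Gbit/s (400 MB/s) budget.
```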
An industry-standard Serial Digital Interface (SDI) connection plugin module will provide nominal data transfer rates of 3G/6G (3G-SDI/6G-SDI).
2x10 GPIO banks as LED indicators plus two power LEDs. 4 LVDS pairs routed to external connectors JP1/JP2 (plus one GND).