AXIOM Team Talk Volume 14.1 - Moving Facilities, Updates and GSoC Projects



Moving Offices

For the last two years the AXIOM headquarters has shared the modest Amescon facilities in the 15th District of Vienna, Austria. This is where most of the electronics manufacturing for the AXIOM Betas has taken place. This spring we began the process of moving to a new, bigger and better shared office called Factory Hub Vienna. This was quite a challenge, as the pick and place machine was built and assembled inside the old office, so it wasn’t clear whether the machine would fit through the doors… it didn’t. In the first Team Talk video for some time we give the community a tour of our new building. Factory Hub Vienna hosts a diverse range of hardware projects and their teams, but it’s also home to a fully fledged industrial electronics manufacturing facility.


TELE (Factory Hub Vienna) electronics manufacturing floor

Various Updates

As you may be aware, GoPro acquired CineForm and then open sourced it. We think this is a big deal, and a positive one, in contrast to Apple’s recent announcement that a new version of ProRes will encode raw video. In truth, of course, ProRes is far removed from anything that could be described as open or accessible. We are currently evaluating the prospect of adopting (Cinema) DNG and Magic Lantern Video (MLV) as our preferred raw format/containers in AXIOM devices.


Google Summer of Code (GSoC)

The program, in which students from all around the world are paid by Google to work on open source projects over the summer, has started again, and this time apertus° was awarded six student slots. The projects are:

- evaluating and extending the Magic Lantern raw video container format (MLV)
- debayering methods for OpenCine
- in-camera real-time focus peaking
- in-camera waveform/vectorscope/histogram displays
- FPGA code to emulate the image sensor hardware
- developing and implementing a bidirectional packet protocol for FPGA communication on the two extension shields of the AXIOM Beta hardware


The Open Source world meeting at the annual Google Summer of Code Mentor Summit at Google Headquarters, Mountain View, California.


Still to Come

We have a plethora of things to share at this point but this episode is already much longer than the easily digestible bits we normally try to serve. The next episodes have been filmed already but need to be edited. The topics that will be covered in Team Talks 14.2, and onwards, revolve around the progress that’s been made with the camera’s hardware development, e.g. the SDI module, the Center-Solder-On Module (adding an IMU to the AXIOM Beta), enclosure manufacturing and design updates, injection molding parts in-house, the release of unseen AXIOM footage and much more.


On the AXIOM Beta Compact Roadmap

Everyone working on the camera sympathises with those among you who've been waiting a long time for the AXIOM Beta Compact to ship, so we trust that this Team Talk will give the community better insight into the scale of some of the steps that have been climbed so far this year. The project is well situated now and development is in a healthy state, with the Beta Compact's essential features making good progress. But we're a small team, so the speed of development is at the mercy of the number of hands on deck, so to speak. Contributions from anyone with programming experience would make a real difference.



If you're interested in getting involved with the project and are looking for a good starting point, then picking a GSoC task not selected by any student would be one good way to help things along. Naturally, the mentors listed in the task descriptions will be happy to guide you throughout the process. There are many ways in which someone could participate, however, and it wouldn't necessarily need to be through working on complex algorithms, so let us know what you'd like to do to help and we can take it from there.

In the meantime we'd like to once again thank everyone for their continued patience and support.



7 Comments

1 week ago
Kurt Fillmore

Glad you're settled into your new "digs". Looks like the right environment for a growing company. Next, we want to hear about the camera! LOL. Thanks guys.

6 days ago
Sebastian

Yeah, we are aware that this episode was a bit (not entirely) off topic and will focus on directly related camera stuff with the next episode :)

4 days ago
Anonymous

just a few more random thoughts/questions

- is the sensor/sensor-interface-board still rotatable? wasn't genlock also planned? would stereoscopic (with two rotated cameras) still be possible? or are the dimensions incompatible? I have considered (but can't afford) getting a second voucher from one of those backers who opted out... would you even need a "full" second camera?

- does the compact enclosure attach to the extended enclosure or is it a matter of one or the other?

- what is the progress of OC's avisynth integration? which order will that work in: does avisynth serve frames to OC or does OC serve frames to avisynth? http://avisynth.nl/index.php/High_bit-depth_Support_with_Avisynth

- what format is the metadata from the IMU CSO? is stabilization something that's being worked on in OC? or will the metadata need to be converted to work with other software?

- how is the Active Canon EF Mount progressing? there's lots of talk on the progress of the remote control, but the EF Mount stretch goal suggests it will be controlled using the remote control so surely these are being worked on together?

- was there any further news about the Semantic Image Segmentation with DeepLab in TensorFlow?

- I noticed the photos of chroma pong running on AXIOM beta at the various Faires and was curious about the image tracking mechanics... mostly because of the next point:

- has anyone considered working on something like this:
https://users.soe.ucsc.edu/~milanfar/publications/journal/Chapter1.pdf
rather than bilinear/bicubic/etc interpolated demosaics, tracking the sub-pixel shifts could exploit temporal redundancy to provide more accurate demosaics, or, it could be used for temporal denoising, deblurring, upscaling, desqueezing anamorphic, etc

thanks for your time

4 days ago
Sebastian

Will reply to some points now and to the rest soon:

- what is the progress of OC's avisynth integration? which order will that work in: does avisynth serve frames to OC or does OC serve frames to avisynth? http://avisynth.nl/index.php/High_bit-depth_Support_with_Avisynth

We have opted out of avisynth for now, as it is not working as expected; also, the multi-platform part is more or less broken in its successor.
But we have tested FUSE, which allows you to write your own file system, and one of the GSoC students has succeeded in creating AVI files from synthetic data so far.
At a later point OC would provide the data and a small frame server (using FUSE) would expose the AVI file, which video players/editing software can then load.
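For readers curious what such a frame server would actually have to emit: this is not the student's code, just a minimal illustrative sketch (written by us as a plain in-memory writer rather than a FUSE filesystem) of the AVI/RIFF container structure, packing synthetic 8-bit grayscale frames. All function names and parameters here are our own inventions.

```python
import struct

def _chunk(fourcc, payload):
    # A RIFF chunk: fourcc + little-endian size + payload, padded to even length.
    data = fourcc + struct.pack("<I", len(payload)) + payload
    return data + (b"\x00" if len(payload) % 2 else b"")

def _list(fourcc, payload):
    # A RIFF LIST: 'LIST' + size + list-type fourcc + payload.
    return b"LIST" + struct.pack("<I", 4 + len(payload)) + fourcc + payload

def write_avi(frames, width, height, fps=25):
    """Pack raw 8-bit grayscale frames (width must be a multiple of 4,
    so BMP row padding is a no-op) into a minimal uncompressed AVI."""
    frame_size = width * height
    # 'avih' main AVI header: 14 little-endian dwords.
    avih = struct.pack("<14I",
        1000000 // fps,      # dwMicroSecPerFrame
        frame_size * fps,    # dwMaxBytesPerSec
        0, 0,                # dwPaddingGranularity, dwFlags
        len(frames),         # dwTotalFrames
        0, 1,                # dwInitialFrames, dwStreams
        frame_size,          # dwSuggestedBufferSize
        width, height,
        0, 0, 0, 0)          # dwReserved[4]
    # 'strh' video stream header (56 bytes).
    strh = b"vids" + b"DIB " + struct.pack("<IHHIIIIIIIIhhhh",
        0, 0, 0, 0,          # dwFlags, wPriority, wLanguage, dwInitialFrames
        1, fps,              # dwScale, dwRate -> fps frames per second
        0, len(frames),      # dwStart, dwLength
        frame_size, 0, 0,    # dwSuggestedBufferSize, dwQuality, dwSampleSize
        0, 0, width, height) # rcFrame
    # 'strf' stream format: BITMAPINFOHEADER + 256-entry grayscale palette.
    bmih = struct.pack("<IiiHHIIiiII",
        40, width, height, 1, 8,  # biSize, biWidth, biHeight, biPlanes, biBitCount
        0, frame_size,            # biCompression (BI_RGB), biSizeImage
        0, 0, 256, 0)             # pels/meter x2, biClrUsed, biClrImportant
    palette = b"".join(bytes((i, i, i, 0)) for i in range(256))
    hdrl = _list(b"hdrl", _chunk(b"avih", avih) +
                 _list(b"strl", _chunk(b"strh", strh) + _chunk(b"strf", bmih + palette)))
    movi = _list(b"movi", b"".join(_chunk(b"00db", f) for f in frames))
    riff_payload = b"AVI " + hdrl + movi
    return b"RIFF" + struct.pack("<I", len(riff_payload)) + riff_payload

# Synthetic data standing in for what OC would provide: three 16x16 gradient frames.
frames = [bytes((x + y + t) % 256 for y in range(16) for x in range(16))
          for t in range(3)]
avi = write_avi(frames, 16, 16)
```

In the real setup the same byte stream would be generated lazily by the FUSE layer on `read()` calls, so the "file" never has to exist on disk.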

- was there any further news about the Semantic Image Segmentation with DeepLab in TensorFlow?

Unfortunately nobody picked up the task yet: https://lab.apertus.org/T987

- does the compact enclosure attach to the extended enclosure or is it a matter of one or the other?

We first tried to make the compact enclosure a part of the extended enclosure, so it was meant as an addition. But after seeing the list of small changes the compact enclosure would need here and there to fit the extended enclosure, we figured it would work better if the extended enclosure replaced the compact enclosure parts. Some parts, like lens mounts, filters, etc., are the same though.

- what format is the metadata from the IMU CSO? is stabilization something thats being worked on in OC? or will the metadata need to be converted to work with other software?

There is no format yet; the data will just be read in software and can then be written in whatever format/form is desirable. If you know of a particular format/standard that would make sense, please let us know. Image stabilization is not currently on the roadmap for OC.

4 days ago
Sebastian

- is the sensor/sensor-interface-board still rotatable?
It never was, but we have thought about offering a 90° rotation option; no concrete plans yet though.

- wasn't genlock also planned?
Trigger, timecode, genlock and sync are all pretty "low tech" signals and easy to implement; we will definitely revisit those once the 6G-SDI and USB 3.0 plugin modules are done.

- would stereoscopic (with two rotated cameras) still be possible? or are the dimensions incompatible?
It's certainly possible, but to achieve the lowest possible inter-ocular distance the 90° sensor rotation option mentioned above would indeed be handy.

- would you even need a "full" second camera?
Yes, with the current image sensor, definitely. Maybe in the future we will develop a smaller image sensor board (2/3" still seems to be the sweet spot for stereo 3D), possibly integrating the 90° rotation from the start, and allow multiple sensor front ends to be connected to the rest of the current hardware. If the image sensor is Full HD, the data rate would definitely suffice.

- how is the Active Canon EF Mount progressing? there's lots of talk on the progress of the remote control, but the EF Mount stretch goal suggests it will be controlled using the remote control so surely these are being worked on together?
We want to get the AXIOM Remote operational before we embark on the lens control journey, as there are so many different lenses and we expect them to require slight variations of the protocol, and we are a bit scared of opening that can of worms; it sounds like it could become a compatibility nightmare. But it's definitely on the roadmap.
