For the last two years the AXIOM headquarters has shared the modest Amescon facilities in the 15th District of Vienna, Austria. This is where most of the electronics manufacturing for the AXIOM Betas has taken place. This spring we began the process of moving to a new, bigger and better shared office called Factory Hub Vienna. This was quite a challenge, as the pick and place machine was built and assembled inside the old office, so it wasn't clear whether or not the machine would fit through the doors… it didn't. In the first Team Talk video for some time we give the community a tour of our new building. Factory Hub Vienna hosts a diverse range of hardware projects and their teams, but it's also home to a fully fledged industrial electronics manufacturing facility.
As you may be aware, GoPro acquired CineForm and then open-sourced it. We think this is a big deal - and a positive one - in contrast to Apple's recent announcement that a new version of ProRes will encode raw video. In truth, of course, ProRes is far removed from anything that could be described as open or accessible. We are currently evaluating the prospect of adopting (Cinema)DNG and Magic Lantern Video (MLV) as our preferred raw formats/containers in AXIOM devices.
Google Summer of Code (GSoC)
The program, in which students from all around the world work on open source projects over the summer and get paid by Google, has started again, and this time apertus° was awarded six student slots. The projects range from evaluating and extending the Magic Lantern raw video container format (MLV), to OpenCine debayering methods, in-camera real-time focus peaking, in-camera waveform/vectorscope/histogram displays, FPGA code to emulate the image sensor hardware, and the development and implementation of a bidirectional packet protocol for FPGA communication on the two extension shields of the AXIOM Beta hardware.
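To give a flavour of one of these projects: focus peaking essentially highlights pixels with strong local gradients. Below is a minimal sketch of the idea in Python/NumPy - purely illustrative, not the actual in-camera implementation, which targets the camera's own hardware and software stack.

# Minimal focus-peaking sketch (illustrative only, not the AXIOM implementation).
# Assumes a grayscale frame and an RGB frame as float NumPy arrays in [0, 1].
import numpy as np

def focus_peaking(gray, rgb, threshold=0.1, color=(1.0, 0.0, 0.0)):
    """Overlay `color` on pixels whose local gradient magnitude exceeds `threshold`."""
    gy, gx = np.gradient(gray)        # simple first-order edge detector
    magnitude = np.hypot(gx, gy)      # gradient strength per pixel
    mask = magnitude > threshold      # in-focus edges produce strong gradients
    out = rgb.copy()
    out[mask] = color                 # paint the peaking colour over sharp edges
    return out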
The open source world meets at the annual Google Summer of Code Mentor Summit at the Google headquarters in Mountain View, California.
Still to Come
We have a plethora of things to share at this point, but this episode is already much longer than the easily digestible bits we normally try to serve. The next episodes have already been filmed but still need to be edited. The topics covered in Team Talks 14.2 and onwards revolve around the progress that's been made with the camera's hardware development, e.g. the SDI module, the Center-Solder-On module (adding an IMU to the AXIOM Beta), enclosure manufacturing and design updates, injection molding parts in-house, the release of unseen AXIOM footage and much more.
On the AXIOM Beta Compact Roadmap
Everyone working on the camera sympathises with those among you who've been waiting a long time for the AXIOM Beta Compact to ship, so we trust that this Team Talk will give the community better insight into the scale of some of the steps that have been climbed so far this year. The project is well positioned now: development is in a healthy state and the Beta Compact's essential features are making good progress. But we're a small team, so the speed of development is at the mercy of the number of hands on deck, so to speak. Contributions from anyone with programming experience would make a real difference.
If you're interested in getting involved with the project and are looking for a good starting point, then picking up a GSoC task that wasn't selected by any student would be one good way to help things along. Naturally, the mentors listed in the task descriptions will be happy to guide you throughout the process. There are many ways to participate, however, and it wouldn't necessarily need to be through working on complex algorithms, so let us know what you'd like to do to help and we can take it from there.
In the meantime we'd like to once again thank everyone for their continued patience and support.
15 Comments
Glad you're settled into your new "digs". Looks like the right environment for a growing company. Next, we want to hear about the camera! LOL. Thanks guys.
Yeah, we are aware that this episode was a bit (not entirely) off topic, and we will focus on directly camera-related stuff in the next episode :)
just a few more random thoughts/questions
- is the sensor/sensor-interface-board still rotatable? wasn't genlock also planned? would stereoscopic (with two rotated cameras) still be possible? or are the dimensions incompatible? I have considered (but can't afford) getting a second voucher from one of those backers who opted out... would you even need a "full" second camera?
- does the compact enclosure attach to the extended enclosure or is it a matter of one or the other?
- what is the progress of OC's avisynth integration? which order will that work in: does avisynth serve frames to OC or does OC serve frames to avisynth? http://avisynth.nl/index.php/High_bit-depth_Support_with_Avisynth
- what format is the metadata from the IMU CSO? is stabilization something thats being worked on in OC? or will the metadata need to be converted to work with other software?
- how is the Active Canon EF Mount progressing? there's lots of talk on the progress of the remote control, but the EF Mount stretch goal suggests it will be controlled using the remote control so surely these are being worked on together?
- was there any further news about the Semantic Image Segmentation with DeepLab in TensorFlow?
- I noticed the photos of chroma pong running on AXIOM beta at the various Faires and was curious about the image tracking mechanics... mostly because of the next point:
- has anyone considered working on something like this:
https://users.soe.ucsc.edu/~milanfar/publications/journal/Chapter1.pdf
rather than bilinear/bicubic/etc interpolated demosaics, tracking the sub-pixel shifts could exploit temporal redundancy to provide more accurate demosaics, or, it could be used for temporal denoising, deblurring, upscaling, desqueezing anamorphic, etc
thanks for your time
Will reply to some points now and to some points soon:
- what is the progress of OC's avisynth integration? which order will that work in: does avisynth serve frames to OC or does OC serve frames to avisynth? http://avisynth.nl/index.php/High_bit-depth_Support_with_Avisynth
We have opted out of AviSynth for now, as it is not working as expected; also, the multi-platform part is more or less broken in its successor.
But we have tested FUSE, which allows one to write one's own file system, and one of the GSoC students has succeeded in creating AVI files from synthetic data for now.
At a later point OC would provide the data and a small frame server (using FUSE) would provide the AVI file, which video players/editing software can load.
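For the curious, the frame-server idea can be sketched in a few lines with the fusepy bindings: a read-only filesystem exposing a single synthetic clip, which any player can open like a normal file. The file name, mountpoint and placeholder data below are illustrative assumptions, not OpenCine's actual code.

import errno
import stat
from fuse import FUSE, FuseOSError, Operations

class FrameServer(Operations):
    # Read-only FS exposing one file; in the real design OC would supply frames on demand.
    def __init__(self, avi_bytes):
        self.data = avi_bytes

    def getattr(self, path, fh=None):
        if path == '/':
            return dict(st_mode=stat.S_IFDIR | 0o755, st_nlink=2)
        if path == '/clip.avi':
            return dict(st_mode=stat.S_IFREG | 0o444, st_nlink=1, st_size=len(self.data))
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return ['.', '..', 'clip.avi']

    def read(self, path, size, offset, fh):
        return self.data[offset:offset + size]

if __name__ == '__main__':
    # A video player can then open /tmp/ocmount/clip.avi like a regular AVI.
    FUSE(FrameServer(b'RIFF placeholder bytes'), '/tmp/ocmount', foreground=True, ro=True)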
- was there any further news about the Semantic Image Segmentation with DeepLab in TensorFlow?
Unfortunately nobody has picked up the task yet: https://lab.apertus.org/T987
- does the compact enclosure attach to the extended enclosure or is it a matter of one or the other?
We first tried to make the compact enclosure a part of the extended enclosure, so it was meant as an addition. But after seeing the list of things on the compact enclosure that would need small changes here and there to make it fit the extended enclosure, we figured it would work better if the extended enclosure replaced the compact enclosure parts. Some parts, like lens mounts, filters, etc., are the same though.
- what format is the metadata from the IMU CSO? is stabilization something thats being worked on in OC? or will the metadata need to be converted to work with other software?
There is no format yet; the data will just be read in software and can then be written in whatever format/form is desired. If you know of a particular format/standard that would make sense, please let us know. Image stabilization is not currently on the roadmap for OC.
- is the sensor/sensor-interface-board still rotatable?
It never was, but we have thought about offering that 90° rotation option; no concrete plans yet though.
- wasn't genlock also planned?
Trigger, timecode, genlock and sync are all pretty "low tech" signals - easy to implement - and we will definitely revisit them once the 6G-SDI and USB3 plugin modules are done.
- would stereoscopic (with two rotated cameras) still be possible? or are the dimensions incompatible?
Surely it's possible, but to get as low an inter-ocular distance as possible, the 90° sensor rotation option above would indeed be handy.
- would you even need a "full" second camera?
Yes, with the current image sensor definitely. Maybe in the future we will develop a smaller image sensor board (2/3" still seems to be the sweet spot for stereo 3D), possibly already integrating the 90° rotation, and allow multiple sensor front ends to be connected to the rest of the current hardware. If the image sensor is Full HD, it would definitely suffice data-rate wise.
- how is the Active Canon EF Mount progressing? there's lots of talk on the progress of the remote control, but the EF Mount stretch goal suggests it will be controlled using the remote control so surely these are being worked on together?
We want to get the AXIOM Remote operational before we embark on the lens control journey. There are so many different lenses, and we expect them to require slight variations of the protocol, so we are a bit scared of opening that can of worms - it sounds like it could become a compatibility nightmare... But it's definitely on the roadmap.
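For context, the EF mount speaks an SPI-like serial protocol, so on the Linux side sending a lens command could look roughly like the sketch below. The opcode and argument bytes are hypothetical placeholders - real commands come from reverse-engineered notes and vary between lenses, which is exactly the compatibility concern above.

import spidev  # standard Linux SPI userspace bindings

spi = spidev.SpiDev()
spi.open(0, 0)            # bus 0, chip select 0 (board-specific assumption)
spi.max_speed_hz = 80000  # EF lenses use a slow clock; the exact rate here is an assumption
spi.mode = 3

def lens_command(opcode, args=()):
    # Send one command byte plus arguments; the lens shifts its response back.
    return spi.xfer2([opcode, *args])

# Hypothetical "step the aperture" command - 0xAA and 0x02 are made-up values:
response = lens_command(0xAA, [0x02])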
- has anyone considered working on something like this: https://users.soe.ucsc.edu/~milanfar/publications/journal/Chapter1.pdf
Very interesting, I don't think we were made aware of that one yet, thanks!
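The kind of shift tracking that chapter builds on is easy to prototype. The sketch below estimates a global integer-pixel translation between two frames via phase correlation - a standard technique chosen here for illustration, not necessarily the chapter's method, and not anything implemented in OpenCine; sub-pixel accuracy would come from interpolating around the correlation peak.

import numpy as np

def estimate_shift(frame_a, frame_b):
    # Phase correlation: the normalized cross-power spectrum's inverse FFT
    # peaks at the translation that best aligns frame_b with frame_a.
    F = np.fft.fft2(frame_a)
    G = np.fft.fft2(frame_b)
    cross_power = F * np.conj(G)
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase information only
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Fold indices past the midpoint back to negative shifts.
    if dy > frame_a.shape[0] // 2:
        dy -= frame_a.shape[0]
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return dy, dx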
thanks for the info Sebastian, much appreciated
"We have opted out of
"We have opted out of avisynth for now, as it is not working as expected, also the multi-platform part is more or less broken in the successor.
But we have tested FUSE"
FUSE is an interesting idea actually - I've used mp3fs (and others) on an mpd music server to dynamically create MP3s from FLAC without taking up extra HDD space. It's really quite powerful. It does mean I will need another machine (or at least a Linux VM with FUSE+Samba) though. AviSynth, on the other hand, has MVTools which (I imagine) could have gone a long way towards the motion tracking in the "chapter1" link/article/idea above.
"If you know of a particular format/standard that would make sense please let us know. Image stabilization is not currently on the roadmap for OC."
not really, maybe something like: https://developers.google.com/streetview/publish/camm-spec
though I'm sure anyone could have just googled that too...
thanks again
Thanks for the link - could be a good starting point indeed. I read up a bit on XMP sidecar files for metadata, which Adobe seems to support heavily in Photoshop, Premiere, etc. But there seems to be no standard for movement/orientation data yet - especially not for more than one value = one file/image...
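To make the sidecar idea concrete, here is a hedged sketch that dumps gyro samples into a simplified XMP-style sidecar. The imu namespace URI is made up for illustration (since, as noted, no standard seems to exist), and a real XMP packet would additionally wrap the payload in rdf:RDF/rdf:Description elements.

import xml.etree.ElementTree as ET

NS = 'http://example.org/axiom/imu/1.0/'  # hypothetical namespace, for illustration only
ET.register_namespace('imu', NS)

def write_sidecar(path, samples):
    # samples: list of (timestamp_s, gx, gy, gz) gyro readings.
    root = ET.Element('{adobe:ns:meta/}xmpmeta')
    seq = ET.SubElement(root, '{%s}GyroSamples' % NS)
    for t, gx, gy, gz in samples:
        s = ET.SubElement(seq, '{%s}Sample' % NS)
        s.set('t', repr(t))
        s.set('x', repr(gx))
        s.set('y', repr(gy))
        s.set('z', repr(gz))
    ET.ElementTree(root).write(path, xml_declaration=True, encoding='utf-8')

write_sidecar('clip.xmp', [(0.000, 0.01, -0.02, 0.00), (0.004, 0.02, -0.01, 0.01)])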
As AviSynth and its successor VapourSynth have several problems serving frames under Linux and Windows (maybe also macOS), we decided to try FUSE. Current tests provide a dummy AVI file, which can be loaded using e.g. VLC and played back like a normal one. We are also looking into the possibility of running FUSE under Windows; there are wrappers for that, like https://github.com/dokan-dev/dokany.
very cool
For those that want the main sensor turned 90 degrees, how hard would it be to lay out a new version of the sensor board? Different versions of the sensor board were planned for the ON Semi and the 2/3" CMOSIS sensors anyway. If changing the layout, fabricating, and testing the new board would be feasible, you could let customers choose which version they wanted.
It's quite a challenge unfortunately: the LVDS lanes from the image sensor to the MicroZed need to be length matched.
Adding the 90° option means that some lanes have a longer way to travel than others, which in turn needs to be compensated by making the lanes with the shorter path artificially longer with meanders - and all of this has to be done in a space that is already very tightly filled (see https://wiki.apertus.org/index.php/File:AXIOM_Beta_Sensor_THT_0.13_Botto...)
It's surely not impossible, but also not straightforward (with the meander lanes :P ).
Since we already have a long list of challenges on the to-do list, we first want to take care of those that will enable, for example, uncompressed 4K raw recording with the USB3 module.
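As a back-of-the-envelope illustration of why the meanders matter: trace length converts directly into timing skew between lanes. The propagation delay used below is an assumed FR-4 rule of thumb, not a value from the actual board stackup.

PS_PER_MM = 6.7  # assumed FR-4 stripline propagation delay, picoseconds per mm

def skew_ps(extra_length_mm):
    # Extra routed length on one lane becomes lane-to-lane skew that the
    # meanders on the other lanes must absorb.
    return extra_length_mm * PS_PER_MM

print(skew_ps(15.0), 'ps of skew for a lane routed 15 mm longer')  # 100.5 ps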
I envisioned that the board would be rotated (or maybe flipped and rotated) to work either way, not that it was going to be one board or the other. I guess it doesn't make financial sense to do that, unless there were lots of people buying 2 cameras for stereoscopic. For that matter, I can't see a stereoscopic enclosure being made either (2 cameras, 1 enclosure, aligned at correct angles etc). Maybe things to put on the wish-list I guess. The limited space on the sensor board strikes me as odd though, surely the board can be larger/enlarged, or else how will larger diameter sensors be possible in the future?
The image sensor board is roughly 57x57mm, so that already provides plenty of room for any future (larger) image sensors we might discover. There is also always the option to increase the size of the board. The only thing that needs to stay the same is the position of the high density connectors.