Submitted by Sebastian on Fri, 11/01/2013 - 16:54
After a seriously busy month spent programming VHDL for the FPGA (we hit a setback when we discovered a bug in the provided memory AXI slave interfaces, which required a workaround), creating Linux drivers and scripts, testing and establishing workflows, our Zedboard is now fully communicating with the CMV12000 image sensor. We knew this wasn't going to be a walk in the park, and to explain how complicated it is to get data from the sensor (which probably isn't obvious to the public), try to imagine a spaghetti network of 70 wires, working in pairs, all carrying signals at 300 MHz in both directions, delivering the clock and the data to and from the sensor. The data itself arrives as serial streams of 8, 10 or 12 bits per pixel. On top of that, you have to tune the delay for each line separately to get meaningful data, and then align the bit stream on a word boundary, again for each channel separately. Now, this is all tech-speak, but we're hoping you're getting an idea of just what's involved in building a functional digital cinema camera.
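For the technically curious, here is a rough idea of that word-alignment step expressed as a minimal C sketch rather than our actual VHDL. While the sensor transmits its known idle/training pattern, the bit stream of each LVDS channel is slipped one bit at a time until the pattern appears, and that offset is then kept for the channel. The read_channel_bit() helper and the training pattern value below are placeholders for illustration only, not real apertus° code.

    #include <stdint.h>

    #define PIXEL_BITS       12     /* sensor configured for 12-bit pixels    */
    #define TRAINING_PATTERN 0x055  /* placeholder idle pattern, set over SPI */

    /* Hypothetical helper: returns the next raw bit of one LVDS channel. */
    extern int read_channel_bit(int channel);

    /*
     * Slip the bit stream of one channel until PIXEL_BITS consecutive bits
     * match the training pattern; return how many bit slips are needed
     * before the pixel words become meaningful.
     */
    int align_channel(int channel)
    {
        uint16_t window = 0;
        for (int slip = 0; slip < 4 * PIXEL_BITS; slip++) {
            window = ((window << 1) | (read_channel_bit(channel) & 1))
                     & ((1u << PIXEL_BITS) - 1);
            if (slip >= PIXEL_BITS - 1 && window == TRAINING_PATTERN)
                return (slip + 1) % PIXEL_BITS;  /* word boundary found */
        }
        return -1;  /* no lock: the delay tuning for this channel needs work */
    }

In the real design this happens per LVDS lane inside the FPGA fabric, together with the per-line delay tuning mentioned above.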
The following animation visualizes the so-called "FPGA floorplan": a map showing all the logic gates that result from programming the FPGA. There is also an option to display how the logic gates are connected to each other, but once it's turned on it becomes hard to see anything at all.
We have pictures! And they are in 4K! But don't get too excited yet - like a newborn who has just come into contact with the outside world, they are very RAW and completely uncalibrated and uncompensated. Read on below to see our analysis of the image content.
What the newborn first saw when it opened its eyes was a booklet from its father - the DVD booklet of the latest film by Oscar Spierenburg (who founded the apertus° project more than 7 years ago) - you can support Oscar's current film crowdfunding campaign here. Oscar celebrated our first images in the Belgian way, opening a nice bottle of Cava. Let's take a look at the image and examine the various problem areas and artifacts:
So, we're rather excited by this milestone - everything we can see in the captured images was expected and can easily be compensated for and fixed. We have ordered an IR filter, started researching the best methods for sensor cleaning (which is quite a challenging task) and adapted our prototype setup so we can ditch the mirror. The following image is the result of a second, revised attempt, capturing our brand new "colour chart", comprised of only the finest and most colourful pieces of insulation tape in our workspace. Please keep in mind that this image is still a very early result: while some of the previously mentioned problems have been resolved, there are still many that need to be dealt with. We will be ordering an expensive high-grade CMOSIS CMV12000 sensor soon, so please consider donating some money to apertus° to make this easier on our limited budget. Of course, we would love to take the prototype out into the field, but first we need to get its basic operation running smoothly.
After conducting further image sensor tests (studying linearity, fixed pattern noise, hot/dead pixels, colour calibration, dynamic range, etc.) we will start implementing a simple real-time FPGA-based debayering algorithm and resample the image to half resolution, as required to supply the HDMI port with image data in the correct format. We also need to finish the camera enclosure - you can read more about this just below.
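To illustrate what a simple half-resolution debayer boils down to, here is a small C sketch - the real thing will live in the FPGA fabric and the RGGB layout below is an assumption, so treat it purely as an illustration. Each 2x2 Bayer block is collapsed into one RGB pixel, so a 4096x3072 raw frame becomes a 2048x1536 image suitable for output over HDMI.

    #include <stdint.h>

    /*
     * Collapse each 2x2 Bayer block (RGGB layout assumed) into one RGB pixel,
     * halving the resolution in both dimensions.
     */
    void debayer_half(const uint16_t *raw, uint16_t *rgb, int width, int height)
    {
        for (int y = 0; y < height; y += 2) {
            for (int x = 0; x < width; x += 2) {
                uint16_t r  = raw[y * width + x];
                uint16_t g1 = raw[y * width + x + 1];
                uint16_t g2 = raw[(y + 1) * width + x];
                uint16_t b  = raw[(y + 1) * width + x + 1];

                uint16_t *out = &rgb[((y / 2) * (width / 2) + (x / 2)) * 3];
                out[0] = r;
                out[1] = (uint16_t)(((uint32_t)g1 + g2) / 2);  /* average the two greens */
                out[2] = b;
            }
        }
    }

An FPGA version can do essentially the same arithmetic on the streaming pixel data, one 2x2 block at a time.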
In parallel to our work programming VHDL for the FPGA and the sensor testing, we are working on a solid enclosure for the prototype. At present, the prototype is basically a stack of PCBs sitting on our desk. The lens mount and lens are heavy, and as they are not yet held in place by anything, we can only take pictures of the ceiling with this crude setup. Our next step will involve creating an enclosure assembly from solid steel parts, connecting the lens mount and tripod mount together. A set of acrylic glass parts will then be cut with a laser cutter to form the actual enclosure around these parts.
8 Comments
Wow
Congratulations on getting this far....
Great work! I can't wait to see more.
Awesome
So freakin' awesome guys, keep up the incredible work.
milestone all the way
You're doing such great work, guys! This is a pioneering and revolutionary project.
Hi guys,
great achievement, very well done!
Would it be possible to know how to get involved in the development of this project? I have subscribed to the mailing list but it does not seem to be possible to reply to past posts or create new ones.
I am basically interested in knowing which PCB CAD software you guys are using. I would be interested in developing the PCB for the main FPGA (Zynq) board that will replace the current Zed dev board.
thanks
The newsletter is only for staying informed about the latest developments. Your help would be greatly appreciated! To join us check: https://www.apertus.org/join-the-team or get in touch with us here: https://www.apertus.org/contact
This looks like a really cool project! I'm currently working on a project using the Zedboard and I was wondering where you found the clear acrylic case?
Hi Alima.
We made it. AXIOM Belgium have posted files for building one that's applicable to the AXIOM Beta. See - https://wiki.apertus.org/index.php/AXIOM_Beta/Enclosures
Also, in case you're unaware, development has moved on a lot since AXIOM Alpha. See the introduction to its successor here - https://wiki.apertus.org/index.php/AXIOM_Project_Background