My name is Cláudio Gomes, and I am a Portuguese student currently taking a bachelor's degree in Informatics Engineering at the University of Coimbra, Portugal. I was accepted by apertus° for the Google Summer of Code 2018 program (GSoC).
The goal of the program is to bring more student developers into open-source software development. Students work with an open-source organisation on a three-month programming project during their break from school.
In this article, I will talk about my experience as a GSoC’er and about my task in general.
How it all started
I was casually reading the emails in my university inbox when I came across one from a teacher spreading the word about Google Summer of Code. That is how I found out about the program and became interested in it. I quickly checked the website and discovered the organisations list, a smorgasbord of projects I had never heard about before. The apertus° page intrigued me so much that, combined with my curiosity about the film-making area, the decision was almost immediate: I had to participate in GSoC.
I sent an email to the teacher, who helped me a lot with how I should approach apertus° and with checking my proposal for errors. He was a great support from the start, offering detailed suggestions for my proposal.
February marked the start of my experience as a GSoC'er. Before I started communicating with apertus°, I researched the organisation and its suggested tasks for the program. As I am more interested in the software side of things, with a preference for C++, OpenCine caught my attention. I started working on the C++ Challenge, whose main goal was to create four images from a raw image (one for each Bayer colour: Red, Green0, Green1 and Blue). My first contact with apertus° took place on February 16th, when I asked for suggestions on where and how to start my GSoC work.
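The core idea of that challenge can be sketched in a few lines. This is my own illustrative version, assuming an RGGB Bayer layout and hypothetical names, not the actual challenge code:

```cpp
// Split an RGGB Bayer mosaic into four quarter-size channel images,
// one per Bayer colour (R, G0, G1, B). Illustrative sketch only.
#include <cstdint>
#include <vector>

struct ChannelSplit {
    std::vector<uint16_t> r, g0, g1, b;
};

// Assumes an RGGB pattern: even rows hold R,G0 pairs; odd rows hold G1,B.
ChannelSplit SplitBayer(const std::vector<uint16_t>& raw, int width, int height)
{
    ChannelSplit out;
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            out.r.push_back(raw[y * width + x]);            // top-left: red
            out.g0.push_back(raw[y * width + x + 1]);       // top-right: green0
            out.g1.push_back(raw[(y + 1) * width + x]);     // bottom-left: green1
            out.b.push_back(raw[(y + 1) * width + x + 1]);  // bottom-right: blue
        }
    }
    return out;
}
```

Each 2x2 tile of the mosaic contributes exactly one pixel to each of the four output images, so every output is half the width and half the height of the raw frame.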
This was a very busy month. It was my first time working on a proposal, and I had never taken part in anything similar to Google Summer of Code. After finishing the C++ challenge, my wish was to take on Task T722 - OpenCine: Raw Image Debayering Methods. I started working on the proposal with a lot of research and planning, so as not to miss anything. When the submission deadline (March 27th) approached, I got a lot of help polishing my proposal from several people: the apertus° members, my teacher and, last but not least, my friends.
A month later, I was accepted! I was very delighted by this news but I knew that this was where the real work would begin.
I had to implement and accelerate several raw image debayering methods for the apertus° OpenCine suite. The project can be seen on the Google Summer of Code website here. Additionally, the GSoC timeline can be seen here.
Synopsis from the Task Proposal
When OpenCine was conceptualised, there were already several free and open-source raw image processing tools, but none of them were able to process moving images, e.g. movies. To solve this problem, the OpenCine concept is to provide users with a powerful raw image processing tool with many planned features. My goal was to implement different debayering algorithms and to parallelise the implementations with OpenCL or other frameworks (as described in T722 in the Task List).
The Community Bonding Period
During Google Summer of Code there is a period before coding starts when students can bond with their organisation and each other, and plan their task with their mentors. This period ran from April 23rd to May 13th.
In my case, since I had been with apertus° since February, I was already familiar with its members, so this was mainly a planning exercise to establish how I would approach the task. During this phase my mentor, Andrej, asked me to do several tasks, e.g. developing an image Downscaler class, improving the image PreProcessor class, and creating some unit tests. Those tasks played a valuable role in planning a practical workflow for OpenCine and GSoC.
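To give an idea of the kind of warm-up task involved, here is a minimal sketch of what a 2x image downscaler can look like. The function name and layout are my own assumptions for illustration; OpenCine's actual Downscaler class may differ:

```cpp
// Halve an image in each dimension by averaging every 2x2 block of
// pixels into one output pixel (a simple box filter). Illustrative only.
#include <cstdint>
#include <vector>

std::vector<uint16_t> DownscaleHalf(const std::vector<uint16_t>& img,
                                    int width, int height)
{
    const int outW = width / 2;
    const int outH = height / 2;
    std::vector<uint16_t> out(outW * outH);
    for (int y = 0; y < outH; ++y) {
        for (int x = 0; x < outW; ++x) {
            // Sum the 2x2 block in a wider type to avoid overflow,
            // then divide by four for the average.
            uint32_t sum = img[(2 * y) * width + 2 * x]
                         + img[(2 * y) * width + 2 * x + 1]
                         + img[(2 * y + 1) * width + 2 * x]
                         + img[(2 * y + 1) * width + 2 * x + 1];
            out[y * outW + x] = static_cast<uint16_t>(sum / 4);
        }
    }
    return out;
}
```

A task like this exercises exactly the indexing patterns that debayering needs later: mapping between full-resolution and reduced-resolution pixel grids.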
After the “introductory” tasks I started working on the next phase before the official coding period (May 14th to August 6th) began, as getting ahead of schedule seemed a good idea.
Research and Implementation Period
Thrilled by the community bonding period, I was eager to start working on the first subtask: “Creation of the base Debayer class”. However, my mentor said this wasn’t needed, as OpenCine already had the desired classes implemented, just without the demosaicing algorithm code.
In response I quickly advanced to the next subtask: “Research and Implementation of Bilinear Interpolation”. Originally it was meant to be “Research and Implementation of Linear Interpolation”, but my mentor suggested that I implement Nearest Neighbour Interpolation in the Bilinear Interpolation class as well.
My first days working on this subtask were frustrating because I was having difficulties with the algorithm, namely the “starting” pixels. With a lot of help from my mentor, I was finally able to implement a working idea. After this struggle, everything came together nicely and fast. The Bilinear Interpolation and Nearest Neighbour algorithms were finished and working correctly, passing all the unit tests.
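The idea behind bilinear demosaicing, and the border problem that caused my early frustration, can be shown with the green channel. This is a generic sketch under my own naming, not OpenCine's code; the border pixels are handled here by clamping the coordinates so that edge pixels reuse their nearest valid neighbours:

```cpp
// Bilinear green interpolation at a red (or blue) Bayer site: the four
// adjacent pixels are all green, so their average estimates green there.
// Clamped indexing handles the border ("starting") pixels. Sketch only.
#include <algorithm>
#include <cstdint>
#include <vector>

// Fetch a pixel with coordinates clamped into the image bounds.
static uint16_t At(const std::vector<uint16_t>& img, int width, int height,
                   int x, int y)
{
    x = std::max(0, std::min(x, width - 1));
    y = std::max(0, std::min(y, height - 1));
    return img[y * width + x];
}

// Average the four direct neighbours (left, right, up, down).
uint16_t GreenBilinear(const std::vector<uint16_t>& raw, int width, int height,
                       int x, int y)
{
    uint32_t sum = At(raw, width, height, x - 1, y)
                 + At(raw, width, height, x + 1, y)
                 + At(raw, width, height, x, y - 1)
                 + At(raw, width, height, x, y + 1);
    return static_cast<uint16_t>(sum / 4);
}
```

Without the clamping, the very first row and column read out of bounds, which is exactly the kind of off-by-one trap the "starting" pixels set.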
The next subtask was to implement Green Edge Directed Interpolation, or GEDI for short. The algorithm was easy to develop, and the fact that it shares a lot of similarities with Bilinear Interpolation made it very quick to finish. I completed it 19 days ahead of schedule.
Finally, the last algorithm to implement was the SHOODAK debayer. This was my favourite, as I personally contacted the creator of the algorithm, Stephan Schuh, and exchanged a lot of ideas with him. He was very nice and helped me with the algorithm, which takes an entirely different approach compared to Bilinear Interpolation and GEDI. From the discussion, I decided to implement SHOODAK2, an untested version of SHOODAK that solved many issues found in the first version. I even created an edge-sensing algorithm to see if I could further reduce the artefacts it causes. I felt very accomplished with this exchange of ideas and learned some valuable lessons about teamwork and contacting the community: I requested some sample images from Sebastian, and he kindly asked the community to send him samples with my specifications. The community was quick to supply me with images! SHOODAK2 took some time to implement but was very rewarding.
Comparing the different debayering algorithms side-by-side.
First Evaluation Phase
As soon as I finished implementing the demosaicing algorithms, the first evaluation phase began! I was anxious, as I really wanted to be approved, and eager to receive feedback from my mentor. I was approved! My mentor also gave elaborate feedback, which helped me shape the next tasks more efficiently. The feedback was positive, but alerted me to the fact that I hadn’t documented any of the code I had produced up to that point. This was particularly important advice, as open source is all about sharing and helping each other: documentation is fundamental and helps others understand and improve upon existing code. I quickly started writing the documentation of the code and designing some images for it.
With the four planned algorithms implemented, it was time to accelerate them. We started by accelerating the demosaicers with OpenMP. As expected, I spent some days learning it, documenting the algorithms and porting the documentation to the website. After that I started accelerating Bilinear Interpolation and GEDI. The first results were great, but I only finished the final accelerated version on my third try. The results were impressive, though: e.g. the GEDI demosaicer on my computer went from 67 ms to 5 ms. SHOODAK was improved too, but I postponed further work on it, as I decided to revamp SHOODAK in the future to make it faster than GEDI (speed should be one of SHOODAK’s advantages).
This part was tricky because we were working with threads. We had to be cautious in ascertaining that the threads ran on the correct “pieces” of memory. For example, after I committed the final version of the algorithms, everything worked nicely on my 12-thread desktop; however, my mentor tested it on his virtual desktop with 2 threads and it glitched up the entire image. Fortunately, this was simple to fix and did not impact general performance.
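The safe pattern here is to make each thread own a disjoint region of the output, so the result is identical whatever the thread count. A minimal sketch of that row-parallel OpenMP pattern, with illustrative names rather than OpenCine's actual code:

```cpp
// Row-parallel processing with OpenMP: each loop iteration owns one full
// output row, so no two threads ever write the same pixel and the result
// is independent of the thread count (2 threads or 12). Sketch only.
#include <cstdint>
#include <vector>

void HalveInParallel(const std::vector<uint16_t>& in,
                     std::vector<uint16_t>& out, int width, int height)
{
    // The work split is over y only; indexing derives purely from the
    // iteration variable, never from the thread id, so partitioning
    // cannot glitch the image.
    #pragma omp parallel for
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            out[y * width + x] = in[y * width + x] / 2;
}
```

Bugs like the 2-thread glitch typically come from computing per-thread offsets by hand; letting `parallel for` distribute whole rows sidesteps that entirely. (Compiled without `-fopenmp`, the pragma is ignored and the loop simply runs serially with the same result.)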
This marked the completion of every task in my proposal. For the last period, we decided to start working with OpenCL.
Second Evaluation Phase
After finishing the OpenMP acceleration, the second evaluation phase began. I was approved again, with feedback from my mentor saying that I was ahead of schedule but that the documentation images needed to be more polished. I addressed this and improved many of the existing images.
So I started studying OpenCL and learning how it works. This was a very collaborative period, as my mentor helped me a lot with what I was doing. I created a base OpenCL class for OpenCine, later revamped by my mentor with a C++ wrapper for OpenCL. After that, we developed a working kernel for testing purposes and then started creating OpenCL demosaicing processors (responsible for debayering images) using the BaseOCL class. I created the first processor, for Bilinear Interpolation; however, we found it a bit cumbersome and visually unpleasant. I improved it slightly, but it had a big issue I couldn’t overcome: it was slow. This was probably the most frustrating part of the experience. I simply couldn’t get any improvement in performance: what takes 4 ms with OpenMP (Bilinear) on my computer takes 41 ms with OpenCL, a very big difference, even though it is possible that my CPU simply eclipses my GPU in performance (this happens in the Blender renderer too).
Last Evaluation Phase
The last evaluation phase arrived, and it marked the end of this great experience. I learned so much: collaborating with others, planning projects, learning by myself, learning with others and, more importantly, gaining a sense of community.
I finished Google Summer of Code successfully! I am very proud of this accomplishment, and it ends with a strange feeling of "saudade", as people in my country say: a feeling of missing something that was important to you.
However, more importantly, this isn’t the end of my contribution to apertus°! I will keep developing code for OpenCine, and I am currently implementing a way to load any RAW file into the program.
Thank you for reading my article. In case you are curious about my task, you can check my work progress here and the code documentation here.
Want to participate?
Want to provide feedback or suggest improvements? Want to try your own approach and need help?
Get in touch via IRC or email.
Hi Cláudio, just after a quick browse over your article:
It is really great, and thank you very much for sharing your experience!
Very nice wrap-up about debayering, a key feature along the RAW processing pipeline!
I'm not skilled at all in GPU acceleration, but it's crazy that OpenCL is that difficult and/or bad to code; anyway, maybe this is why CUDA/Nvidia shows so much of an advantage!
Is apertus going to try CUDA acceleration too?