Internally and externally, the AXIOM camera project provides ample opportunity, and also a definite need, for visualization. Our decentralized teams iterate over rapidly evolving designs for circuit boards, sockets, handles, and enclosures, and, with the production of the AXIOM Beta, soon also packaging and possibly expansion components. All of this needs to be communicated between our team members, to our backers, and to the public. Having only limited resources to spend on artistic renditions of our hardware designs, we opted for an R&D effort to supply ourselves with an automated visualization system, currently codenamed "Project Elmyra". This article outlines our goals and thoughts on the issue, and provides a short teaser of the system architecture and components we will use. (If you're in a hurry and just want the executive summary, scroll down to the second-to-last paragraph, "The Plan".)
Graphics and visualization work is often understood as a manual process, applied on demand and one by one to discrete pieces of source material by a dedicated person who is usually unconnected to the actual work being carried out. We can, for instance, imagine this process as an engineer producing technical hardware designs and handing them to an artist so that she can transform them into something that communicates well with a non-technical audience, something understandable and beautiful. Put into practice, such a design workflow often poses a serious threat to productivity due to largely undefined routines and the ensuing chaos and emotional stress: How and when is material given to the designer? How much source material is needed to work out a design? When should it be ready, and how may the artist prioritize requests? How is feedback handled? How and when is the finished work returned to the requester? Addressing these issues is one core aspect of Project Elmyra.
Working out and enforcing a process in which (to stay with our previous example) engineers and artists can work together smoothly is one way to increase productivity and happiness. However, it does not change the fact that the engineer is, in most cases, still completely dependent on the artist to get a rendition of the item at hand that complies with the established style and quality of the project's communication. Automating away the process of requesting and receiving artwork, leaving the artist with the sole task of artistic decision and intervention, is the second core issue for Project Elmyra.
The third, and most fundamental, aspect of Project Elmyra lies in the past and future of artistic production itself. As with all things, computer-driven automation has not stopped short of the field of visual production. It has led us to a reality in which physical paintbrushes are transformed into digital brushes (in other words, the digitalization of our tools), but also, more importantly, it gives us the opportunity to transform more and more of the ways in which we "paint" into reproducible algorithms (that is, the digitalization of our artistic strategies). This is exactly what we will make use of in Project Elmyra: turning a previously one-shot artistic effort into a reproducible series of steps for our system to repeat and replay on our ever-evolving designs of open hardware cinema cameras and components.
So ... "What is it? And what does it do?", you might be wondering by now! Still being at an early stage of development, we expect some things to change; a lot is currently not implemented, and some parts we don't even know about yet. But here is what we can currently say: Our envisioned system automatically pulls in the latest hardware designs from our repositories on GitHub and, through a human-friendly web interface and a machine-friendly API, offers automated visualizations (for the most part, renderings) of specific parts or sets of parts in these repositories. More concretely: our hardware designers can push their latest changes to GitHub and then, in the browser, go straight to our system to obtain a visualization of their work in a predefined style (e.g. shaded, illustrated/blueprint, realistic), a specific resolution, a specific format, and so on. As all visualizations, and all their variants (different style, size, format, etc.), are available under well-defined and permanent URLs, it is possible to place visualizations in, for instance, a user manual so that they always reflect the current state of the hardware: the system seamlessly updates all visualizations behind the scenes as soon as changes occur. The visualization artists, meanwhile, take care to improve the default set of visualizations and to perform manual (but, very importantly, redoable) changes to specific parts available in the repositories. For a specific part, an artist might, for instance, define a better camera position than the default one, or a manual override of the rendering parameters for blueprint renderings, because the part might be overly complex or, vice versa, very simple. All of these changes are retained and reapplied to future revisions of a part that the engineers might push.
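To make the idea of permanent, variant-specific URLs a bit more tangible, here is a minimal sketch in Python of how such an address scheme could be built. The host name, path layout, and parameter names are purely illustrative assumptions, not Elmyra's actual API:

```python
# Hypothetical sketch of a permanent-URL scheme for visualizations.
# Every (part, style, size, format) combination maps to exactly one
# stable address, so a document can embed the URL once and always
# receive the most recent rendering.

def visualization_url(part: str, style: str = "shaded",
                      width: int = 1920, height: int = 1080,
                      fmt: str = "png") -> str:
    """Build the well-defined, permanent URL for one visualization variant."""
    return (f"https://elmyra.example.org/visualization/"
            f"{part}/{style}/{width}x{height}.{fmt}")

# A user manual could embed this URL directly; the server re-renders
# behind the scenes whenever the part changes in the repository.
print(visualization_url("beta-main-board", style="blueprint"))
```

The important property is that the URL encodes only *what* is requested, never a revision, so consumers automatically stay up to date without changing anything on their side.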
As near-future goals, we are also looking into automated animated visualizations and automatically provided interactive visualization widgets (all based on the same general architecture outlined above). Lastly, some technical details: our current implementation is based on Python and Flask, and for rendering it relies on a fantastic piece of free and open 3D software - you might have already guessed it: Blender!
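Since Blender can run entirely without a user interface, a rendering backend like ours can drive it as a batch process. The following Python sketch assembles such a headless invocation; the file names are hypothetical, and how Elmyra actually drives Blender is still in flux:

```python
# A minimal sketch of building a headless Blender command line for
# rendering one still frame of a hardware part. Blender's standard
# batch-mode flags are used; all paths here are illustrative only.

def blender_render_command(blend_file: str, output: str, frame: int = 1):
    """Assemble a headless Blender command line for a single still frame."""
    # Order matters: Blender processes its arguments sequentially, so
    # the output path must be set before the frame is rendered.
    return [
        "blender",
        "--background", blend_file,    # run without opening a UI
        "--render-output", output,     # destination pattern for the image
        "--render-frame", str(frame),  # render this frame, then exit
    ]
```

On a worker machine with Blender installed, the resulting list could then be handed to `subprocess.run(..., check=True)`, which is what makes the whole pipeline scriptable end to end.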
We hope we could spark some interest and shed some light on this development for you. If you have questions or ideas, or, most importantly, if you're potentially interested in using our software for your own project (yes, it will of course be released as free and open software!), don't hesitate to let us know - we're happy to hear from you!