Submitted by simonrepp on Tue, 07/07/2015 - 18:55
Internally and externally, the AXIOM camera project provides ample opportunity, and also a definite need, for visualization. Our decentralized teams are iterating over rapidly evolving designs for circuit boards, sockets, handles and enclosures, and soon, with the production of the AXIOM Beta, also packaging and possibly expansion components. All of this needs to be communicated between our team members, to our backers and to the public. Having only limited resources to spend on the artistic rendition of our hardware designs, we opted for an R&D effort to supply ourselves with an automated visualization system, currently codenamed "Project Elmyra". This article outlines our goals and thoughts on the issue and provides a short teaser of the system architecture and the components we will use. (If you're in a hurry and just want the executive summary, scroll down to the second-to-last paragraph, "The Plan".)
Graphics and visualization work is often understood as a manual process, applied on demand and one by one to discrete pieces of source material by a dedicated person who is usually unconnected to the actual work being carried out. We can, for instance, imagine this process as an engineer producing technical hardware designs and handing them to an artist so that she can transform them into something that communicates well with a non-technical audience, something that is understandable and beautiful. Put into practice, such a design workflow often poses a horrendous threat to productivity due to wildly undefined routines and the chaos and emotional stress that ensue: How/when is material given to the designer? How much source material is needed to work out a design? When should it be ready, and how may the artist prioritize requests? How is feedback handled? How/when is the finished work returned to the requester? ... Addressing these issues is one core aspect of Project Elmyra.
Working out and enforcing a process in which (to stay with our previous example) engineers and artists can work together smoothly is one way to increase productivity and happiness. However, it does not change the fact that in most cases the engineer is still completely dependent on the artist to get a rendition of the item at hand that complies with the established style and quality of the project's communication. Automating away the process of requesting and receiving artwork, and leaving the artist with the sole task of artistic decision and intervention, is the second core issue for Project Elmyra.
The third and most fundamental aspect of Project Elmyra lies in the past and future of artistic production itself. As with all things, computer-driven automation does not stop at the field of visual production. It has led us to a reality in which physical paintbrushes are transformed into digital brushes (in other words: the digitalization of our tools), but also, more importantly, it gives us the opportunity to turn more and more of the ways in which we "paint" into reproducible algorithms (that is: the digitalization of our artistic strategies). This is exactly what we will make use of in Project Elmyra: turning a previously one-shot artistic effort into a reproducible series of steps for our system to repeat and replay on our ever-evolving designs of open hardware cinema cameras and components.
So ... "What is it? And what does it do?", you might be wondering now! Still being at an early stage of development, we do expect some things will change, a lot is currently not implemented, and some parts we don't even know about yet, but here's what we currently can say about it: Our envisioned system automatically pulls in the latest hardware designs from our repositories on github, and through a human-friendly webinterface and a machine-friendly API offers automated visualizations (for the most part: renderings) of specific parts or sets of parts in these repositories. A little more concrete: Our hardware designers can push their latest changes onto github and then, in the browser, go straight to our system to obtain a visualization of their work, in a predefined style (e.g. shaded, illustrated/blueprint, realistic), specific resolution, specific format, etc. As all visualizations, and all their variants (different style, size, format, etc.) are available under well-defined and permanent URLs, it is possible to place visualizations in (for instance) a user manual, that always reflect the current state of the hardware, because the system seamlessly updates all visualizations behind the scenes as soon as changes occur. The visualization artists meanwhile take care to improve the default set of visualizations and to perform manual (but very importantly: redo-able) changes to specific parts that are available in the repositories. For a specific part an artist might, for instance, choose and define a better camera position (from the default one), or a manual override on the rendering parameters for blueprint renderings, because the part might be overly complex or vice versa very simple. All of these changes are retained and reapplied to future revisions of a part that might get pushed by the engineers. As near-future goals we are also looking into automated animated visualizations and automatically provided interactive visualization widgets (All based on the same general architecture outlined before). Lastly, some technical details: Our current implementation is based on Python and Flask, and for rendering it relies on a fantastic piece of free and open 3D software - you might have already guessed it: Blender!
Hopefully we could catch some interest and shed some light on this development for you. If you have questions or ideas, or, most importantly, if you're potentially interested in using our software for your own project (yes, it will of course be released as free and open software!), don't hesitate to let us know - we're happy to hear from you!
8 Comments
I am an avid Blender user and rely on it to do product visualizations for some of my clients. Very excited to hear that Apertus is interested in Blender as well! I would love to be kept in the loop on the development of this project.
Great, glad to hear it! I think you'll also get the upcoming news on progress since you already found this article. :) (And I'll make sure to keep the Blender community specifically up to date on this as well!)
Source, or it did not happen ;)
I will provide a repository with proper documentation as soon as it makes sense, but at this point it would just drain resources with very little gain. (Also, I'm not keen on the helpful suggestions and corrections - and I know they will come - for stuff that is in the middle of being worked out. Been there, done that ... ;))
Is there any documentation about the decisions already made concerning the software architecture?
(Software architecture plays a central role in successfully designing software and in its general maintainability. Reusability and the creation of knowledge about the software depend on it.)
Just to make clear which suggestions and corrections can be omitted ;-))
Well, in short: Blender is the core of this whole thing, and that will stay. Python is what we have to use to write the Blender-internal pipeline code, so the other modules are written in Python too (and I don't see any reason not to use it, or any pressing reason to use something else anyway). Flask was then kind of a no-brainer for the simple server application. The actual architecture - which modules there are, what each module is responsible for, etc. - is still in a process of constant revision and planning. I will cover that in a future article, once we have some firm decisions (with good reasoning) laid out! (R&D ... we're still at the R ;))
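To illustrate how the pieces could fit together (again, just a sketch with assumed names, not the actual code): the server side would hand rendering jobs to Blender running in background mode, and a Blender-internal pipeline script would then do the actual work with bpy. Here "render_part.py" is a hypothetical name for that script:

import subprocess

def render(scene_file, part_name, style, output_path):
    # Run Blender headless on the given scene and execute the pipeline script.
    subprocess.check_call([
        "blender",
        "--background", scene_file,    # open the scene without the UI
        "--python", "render_part.py",  # the Blender-internal pipeline code (bpy)
        "--",                          # arguments after this are passed to the script
        part_name, style, output_path,
    ])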
Thanks!
Flask (I did very little R ;)) supports unit testing - great!
Curious about the module layout...
Marvelous. Can you share any existing documentation that goes through the raw thought process, as in 4nd1's software architecture question? Here's an example of an analogous process of thought evolution on the bulldozer - https://docs.google.com/presentation/d/1sBipRXQeSxL3bGcT5ZUQ9ZlEyqAQtQ5S...