I feel that I owe the readers interested in Open Movie Editor an apology, as there hasn’t been much progress in Open Movie Editor development this year.
The reason for this is that I am currently transitioning into a new phase of my life, out of university and into a place where I have to earn a living. Therefore I am quite actively searching for a job, or a capitalistic or economical niche, that is in balance with my lifestyle, personality and values. I have come to realize that this project needs some additional skills that I have not yet learned, honed and practiced as extensively as necessary in school and university, so I am currently teaching myself everything that I need to know for this project. As this has become my number one focus for the time being, I had to, most unfortunately, reschedule OME into a lower priority class.
I have a pretty clear idea of the direction that I am taking, however, I cannot make any promises for now, as I have no idea of where I might ultimately end up.
As a side note, if you are aware of any kind of interesting jobs, like secret agent, porn star, wrestler, or anything that involves car chases, gun fights, travels to exotic locations, seducing beautiful women, etc., I might be interested.
In the Revista Espírito Livre Magazine.
I just returned from a two-week conference and festival marathon, attending the Linux Audio Conference among others.
At the Linux Audio Conference in Parma Open Movie Editor met Gmerlin.
I gave a talk about the “Current State of Linux Video Production Software and its Developer Community”. There was too little time at LAC to cover the whole topic of course, but detailed information is available in the paper, and for your convenience, a video and the slides of the talk are available as well.
For everyone interested in how Open Movie Editor is moving along: right now I am working on finishing my diploma thesis, which is due at the end of December, and I am focusing much of my energy on it. However, I will likely take my final exam early next year, after which I will have my mind free for working on Open Movie Editor again.
I have some interesting ideas, and I even submitted a proposal to a local business plan idea competition about how I plan to build an open source business around OME. I even won a prize, which you can read about at the following link: CAST Technology Award 2008. Some images are here: Pictures from the Businessplan Idea Competition.
That’s it for now,
I’ve just uploaded the new release of the Open Movie Editor.
This release adds some smaller improvements. It took so long because I was mostly working on custom features for a movie we were making:
McFinnen & Wallace
Now that the movie is finished, I removed that custom work and released this version.
I just made a quick mockup in Gimp that shows a possible user interface for implementing keyframes, or automations, in the Open Movie Editor.
If you are following the news in Free and Open Source Software, you might have noticed some buzz around the High Priority Free Software Projects list, published by the Free Software Foundation.
And if you look closely, you see that one entry in the list is Free software video editing software. So Open Movie Editor is definitely a trendsetter.
However, while the FSF list mentions some other Video Editing Projects, it is quite unspecific about what actually needs to be done, and how people might contribute significantly.
Therefore, I am publishing a list of things that might directly benefit free video editing applications. I have tried to make those items general enough that they are not only specific to Open Movie Editor, but likely beneficial to alternative and future video production software as well.
#1 A GPU/multi-core accelerated image/video processing and display library: As I have mentioned in my last blog post, editing High Definition video needs a lot of processing power, so we need to squeeze every last bit of available power out of the hardware. And there is a lot of power in GPUs, as well as in using many cores at once. Right now, quad-core CPUs are widely available and not really expensive, so it is entirely reasonable to expect that normal users might have two of them, running 8-core machines on their desktop. And the number of cores is likely to increase in the future, so be prepared.
I advocate uniform utilization of CPU cores, so in my humble opinion it would be best to split work into equal chunks to be processed on different cores. This is actually quite easy in image processing, because images can easily be split into several parts, which can then be processed individually and at last put back together.
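The split-process-reassemble approach described above can be sketched in a few lines. This is a toy illustration; the strip count, the `process_parallel` name, and the grayscale invert filter are made up for the example:

```python
from concurrent.futures import ThreadPoolExecutor

def invert(strip):
    # Toy per-pixel filter: invert an 8-bit grayscale strip.
    return [[255 - p for p in row] for row in strip]

def process_parallel(image, filt, workers=4):
    # Split the image into roughly equal horizontal strips,
    # filter each strip on its own thread, then reassemble
    # the strips in their original order.
    rows = len(image)
    chunk = (rows + workers - 1) // workers
    strips = [image[i:i + chunk] for i in range(0, rows, chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        done = list(pool.map(filt, strips))  # map preserves order
    return [row for strip in done for row in strip]
```

Note that a purely per-pixel filter like the invert above needs no communication between strips at all; filters with spatial extent (blur, scale) would additionally need some overlap at the strip boundaries.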
For GPUs, the pixel shaders already take care of parallelism, so this needs no special attention. What needs to be done is that YUV planes are sent to the GPU and translated to RGB colorspace there; then image processing, color grading and video filters are performed, and at last the image is either displayed on the screen or sent back to the CPU for storage. For inspiration I recommend looking at the code from the yuvtools, which contains very interesting high performance OpenGL based video playback routines.
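To give an idea of the per-pixel math involved in that YUV-to-RGB step, here is the standard BT.601 full-range conversion as a plain-Python sketch. On the GPU, a fragment shader would run exactly this arithmetic as a 3x3 matrix multiply per pixel; the function name here is made up:

```python
def yuv_to_rgb(y, u, v):
    # BT.601 full-range YUV -> RGB conversion for one pixel.
    # u and v are centered around 128 in 8-bit storage.
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, int(round(c))))
    return clamp(r), clamp(g), clamp(b)
```

Doing this once per pixel on the CPU is exactly the kind of uniform, branch-free work that is worth pushing to the GPU.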
Also the FreeFrame Project has a specification for OpenGL based Video Filters, and some examples of such plugins.
There is also some OpenGL video acceleration in gstreamer, although I am not quite sure how to extract and reuse that.
#2 A high performance, high quality intermediate codec: Check out the rants about intermediate codecs from Eugenia; there you should get a general idea of the problem. One thing that is essential for intermediate codecs is high performance, so that real time editing of High Definition footage is possible. Therefore an implementation that uniformly utilizes multi-core machines would be desirable, as explained above. There is plenty of inspiration available: there are potential candidates in ffmpeg, and there is Dirac Pro, with its Schrödinger implementation. File size is not that important, but speed and high quality are.
Additionally, while a solid codec on its own would be of immediate and significant use, ideally the codec should also be available for other platforms like Apple OSX Quicktime and Microsoft Windows DirectShow. This would enable collaboration with artists that are still limited to non-free software. Interoperability and openness for everyone is vital, especially when working with artists.
#3 Usability testing for DVDStyler: Delivering a video project is not complete until it is distributed to its audience, and more often than not, this means producing a DVD. As far as I know the landscape of Free and Open Source DVD mastering applications, DVDStyler is the most promising solution; it is simple yet flexible, and seems to work quite well. However, at times I find that the workflow and user interface could use some improvements. This is not meant as criticism of the DVDStyler project, quite the opposite: I think that it is the most promising DVD app, therefore I advocate its usage and its improvement.
Also, if you know of any kind of interesting projects that might solve one of those problems and that I did not mention here, be sure to either email me at richard.spindler AT gmail.com or leave me a note in the blog comments.
What needs to be done in the development of the Open Movie Editor before it can be called a 1.0?
Interlacing might be an annoyance, but it is a reality in current video technology, so a complete video editor needs to handle it. However, it is more complex than simply having a good deinterlacing filter available. First of all, it needs to be handled at all possible levels, and if possible automagically, yet predictably. Consumer camcorders almost always produce interlaced footage, even though average consumers might not understand the term, how it affects them, or how they need to handle it. Therefore, ideally, the video editing software should support them by doing the “right” thing depending on the available knowledge.
Currently there is not much interlacing handling in OME; almost nothing, I have to admit.
What needs to be done? Some filters, like color based filters, do not need to be aware of interlacing; they will always work. Filters that introduce distortion, like blur or scale, need to be aware of interlacing and take it into account. Therefore a mechanism is needed to detect interlacing, either by metadata tags of video files or by image analysis, preferably both.
Automatic deinterlacing needs to be introduced at the appropriate processing steps. Computer screens are not interlaced, so everything that is displayed on screen needs to be deinterlaced automagically, although an override for expert users should be available. For rendering, automagic deinterlacing should be enabled for formats that are not likely to support interlaced playback, for example web-based video formats, YouTube, etc. However, for delivery formats like DVDs, and other media that support interlacing, no automagic deinterlacing should be done. And of course subtle overriding possibilities for expert users and for debugging need to be present. Do the right thing per default! Make it difficult to screw up!
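As a concrete example of what the simplest automatic handling might look like, here is a minimal "bob"-style deinterlacer sketch. It works on grayscale frames stored as lists of rows; the function name, the field choice, and the plain line-averaging scheme are illustrative assumptions, not what OME will necessarily do:

```python
def deinterlace_bob(frame, keep_even=True):
    # Minimal "bob"-style deinterlacer (no motion compensation):
    # keep one field and rebuild the lines of the other field by
    # averaging the neighbouring kept lines.
    rows = len(frame)
    start = 0 if keep_even else 1
    out = [row[:] for row in frame]  # work on a copy
    for i in range(1 - start, rows, 2):  # lines to reconstruct
        above = frame[i - 1] if i - 1 >= 0 else frame[i + 1]
        below = frame[i + 1] if i + 1 < rows else frame[i - 1]
        out[i] = [(a + b) // 2 for a, b in zip(above, below)]
    return out
```

Real deinterlacers do much better than this (motion adaptation, edge-directed interpolation), but even a filter this crude is enough for automagic on-screen display, as long as expert users can switch it off.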
Handling High Definition media is another point, and it is mostly about performance: current computers are barely able to handle it, and software needs to be especially careful to utilize every last bit of available processing power, and not waste any.
There are two ways to handle High Definition media, and depending on the situation, one or the other, or a combination of both, is better suited. The first is multi-core processing, utilizing threads or a similar technology; the other is using the image processing and parallel computing power available in OpenGL 3D graphics hardware. Using 3D graphics hardware has advantages for certain operations, for example when it is necessary to display the video data on screen for editing. However, one severe disadvantage is that such tricks will only work on carefully selected high end or semi high end graphics hardware, so some kind of fallback mechanism is always necessary to not exclude users on less powerful platforms.
Multi-Core processing is useful of course for owners of machines with many-core processors, especially for rendering and as fallback, when 3d-hardware is not available.
Currently OME only uses a simple subset of OpenGL, which is not optimized for performance but for compatibility. OME does no multi-core processing.
The plan is to support simple OpenGL based YUV-to-RGB conversion during playback first, because this would enable simple HD editing, which is IMHO the most important and desired feature.
The second step would be to port the filter infrastructure to OpenGL and multi-core based methods, which would make it possible to also work with filters while keeping realtime playback. This is more complicated and likely to happen later, possibly after 1.0.
Simple keyframe and automation features are likely very easy to implement, so they will happen sooner or later, depending on how well the other more difficult stuff progresses.
This feature is only necessary for more advanced compositing style work, but might come in handy every once in a while even for simple projects.
Currently OME only does automation for audio volumes.
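The core of such a keyframe/automation feature is just piecewise-linear interpolation between (time, value) pairs, the same thing the audio volume automation already implies. A minimal sketch (the function name is made up):

```python
import bisect

def automation_value(keyframes, t):
    # keyframes: list of (time, value) pairs, sorted by time.
    # Returns the linearly interpolated value at time t, clamping
    # to the first/last keyframe outside the covered range.
    times = [k[0] for k in keyframes]
    i = bisect.bisect_right(times, t)
    if i == 0:
        return keyframes[0][1]
    if i == len(keyframes):
        return keyframes[-1][1]
    (t0, v0), (t1, v1) = keyframes[i - 1], keyframes[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
```

The same lookup works unchanged for any automatable parameter, whether it is audio volume, opacity, or a filter knob; only the UI for editing the keyframes differs.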
This is a problem that is bothering a lot of users, and might even prevent some from participating as OME users. However, getting it right is a tedious problem, especially considering that OME is still lacking in other departments that need more priority. Ideally this problem should be handled by distributions like Debian and Ubuntu, but still someone needs to do the work, and I have no whip to force anyone.
The most important tasks are:
If all those preconditions are fulfilled, providing a package of OME, or compiling it from source should be sufficiently easy.
If it is not possible to include a certain plugin in the base-package of libquicktime, it should at least be available optionally as part of an extended or not-free repository.
Help with those tasks is available from the Open Movie Editor packagers Mailing-List.
A lot of questions about OME concern the lack of decent and easy to use rendering formats and options. Those currently available are a) limited and b) too many and too complex to understand. Currently OME simply exposes all the available rendering options from libquicktime. There are many of those, but when the ffmpeg_lqt and x264_lqt plugins are missing, a number of important formats are missing too. This happens in common distributions, as explained in the point above.
Some formats, like Ogg Vorbis/Theora or FLV, are not available through libquicktime, so here an alternative rendering backend would be necessary. Side note: Ogg Vorbis/Theora will be added as soon as Firefox ships with an integrated player for that format by default; before that, it will be considered non-mainstream, and therefore mostly useless.
What else needs to be done? A decent and usable interface needs to be conceived, and more obscure options need to be hidden from the user; OME is an editor first, not a dedicated encoding tool. The available codecs need descriptive names, and the documentation needs to make clear which format to choose for which purpose. Presets for DVD, YouTube, etc. need to be available. Maybe Dirac Pro as an intermediate codec.
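A preset system along those lines could be as simple as a named table of parameter sets that the render dialog shows instead of raw codec options. This is a hypothetical sketch; the preset names and parameter values are examples, not actual libquicktime option strings:

```python
# Hypothetical preset table; only the DVD numbers (720x576 at
# 25 fps for PAL MPEG-2) are fixed by the DVD spec, the rest
# is illustrative.
RENDER_PRESETS = {
    "DVD (PAL)": {"codec": "mpeg2video", "width": 720, "height": 576, "fps": 25},
    "Web/YouTube": {"codec": "h264", "width": 640, "height": 480, "fps": 25},
}

def preset_names():
    # What a simplified render dialog would offer the user,
    # instead of the full list of codec options.
    return sorted(RENDER_PRESETS)
```

The expert path (full libquicktime options) could stay available behind an "advanced" button, so the presets hide complexity without removing it.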
Playback speed adjustments for video need to be done, preferably in a simple, realtime adjustable fashion. They are necessary for freeze frame effects, slowed down playback and fast forward, and should preferably also be adjustable over time. This cannot easily be done as some kind of filter effect; it needs to be integrated into the timeline, with dedicated tweaking knobs or something similar in the timeline.
It is likely not too difficult to provide a “simple” implementation (not doing any motion interpolation), but currently the more difficult things have priority.
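A "simple" implementation in that sense boils down to integrating the speed curve over time and reusing the nearest source frame, with no motion interpolation. A sketch, with a made-up function name:

```python
def remap_frames(num_output, speed, src_len):
    # Map each output frame to a source frame index by integrating
    # the playback speed over time: speed(t) == 0 freezes the frame,
    # < 1 gives slow motion, > 1 fast forward.  No motion
    # interpolation -- the nearest earlier source frame is reused.
    indices, pos = [], 0.0
    for t in range(num_output):
        indices.append(min(src_len - 1, int(pos)))
        pos += speed(t)
    return indices
```

Because `speed` is a function of time, this also covers the "adjustable over time" case: the timeline knobs would simply edit that curve.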
Support for editing shots that were done concurrently with multiple cameras would be nice, but not so much of a priority right now.
Some individuals have already volunteered to support translation efforts, but I would prefer to get the core feature set ready before starting this, because otherwise the translations would need to be constantly updated, as OME is very much a work in progress. Also, gettext, the software tool necessary for translations, is not yet integrated into the source code, so this would need to be done too.
Anyone interested in this translation thing is invited to join the Open Movie Editor translators Mailing-List; it will be announced there when OME is ready for translating.
Documentation, yeah whatever….
Some kind of decent “official” web platform would be nice, so that contributors can collaborate on documentation; however, I have no concrete plans of any kind.
There are of course a lot of other very interesting and cool ideas about what OME should and could be able to do, but what is listed above is currently the bare minimum to call it a day.
Cheers, and have fun
This is a sneak preview of a new filter panel that I am working on. In current versions of the Open Movie Editor, every video filter has its own dialog for adjusting its parameters. The new panel will integrate some of those dialogs into a single pane in the main OME window.
It already works quite well, but it still needs a little bit of polishing, so that it fits into the rest of the UI.
and have fun,
Sure, there are some issues with HTML and CSS compatibility, but at least developers and users are aware of the problem and working on it.
However, in the case of internet video, we are far from a solution. There is even an alarming lack of awareness about what is wrong. Even in communities that should know better.
Some time ago, before the dawn of YouTube, there were three options available for internet video: Quicktime, Real Player and Windows Media. And the problem then was that all of those were incompatible with each other, so eventually you needed to install all of them to be able to view most videos on the web. Imagine if you needed to install Internet Explorer, Mozilla Firefox and Apple Safari to view all the webpages available.
So, fast forward to today. Youtube uses flash for video playback, as does everyone else. Flash is available for almost all computers, problem solved, everyone is happy.
Everyone? No. Flash is controlled by one company (Adobe), and 3rd party implementations are sparse and incomplete. If you want to implement or improve Flash for some kind of mobile device like a smart phone, you have to beg Adobe for help and permission; you cannot just do it on your own.
Adobe, however, is not the sole offender here, quite the contrary, since it happened to adopt the h264 video format for the latest edition of Flash. And h264 is a recognized standard, with a number of implementations. Even Quicktime does h264.
So now everyone is happy?
No! There is quite a vocal crowd in the open source and free software community that continues to rub your nose in the fact that Ogg Vorbis/Theora is the one true way to internet video, and that everything else is pure evil.
While those claims have their merits, they also lack a clear vision and strategy of how this will solve our current problems of interoperability. At least I haven’t found any kind of strategy in those claims.
Ok, back to h264. It is a nice, high quality format. It is widely adopted and there are fine open source implementations available. It would be the ideal format for internet video, except that it is considered NOT royalty free. This is a major show stopper. There is a certain consortium or whatever that demands protection money from you as soon as you are big enough, so that they will not sue you.
I can deliver as much HTML, CSS, JPEGs, PNGs over HTTP, TCP and IP as I want to, and for video in h264, I suddenly have to pay? I have to pay if I deliver a product that does h264?
How is this fair? They did not even write the code of the available open source implementations like x264, but they want the money.
What kind of a fucked up situation is this?
To FREE video on the web, we need a situation where charging royalty fees on protocols and standards is simply unacceptable and unnecessary, and where diverse implementations prove interoperability. (This is what I like about HTML, CSS, HTTP, TCP and IP!)
Spread the word, and help make it happen.