How AI Is Transforming the Film and Video Industry

Have you ever watched your favorite movie or TV personalities doing their shtick on screen and thought: hmm, he or she seems kind of artificial, unnatural, inadequate? Well, the next time you watch a scene you might be right – if only in a different sense of the word “artificial”. The fact is that artificial intelligence (AI), driven by a number of agile software houses, is rapidly changing how video and audio content is generated – based on algorithmic input instead of human judgment and tedious manual routines.

There are two ways AI could change Hollywood’s age-old ways. The first concerns whether a script on a sheet of paper makes it into a commercial screenplay at all. AI systems developed by startups such as ScriptBook or Pilot Movies can now read a script and predict its likely box-office success based on its characters and story line, then suggest changes – or simply shelving the project. All of this rests on large sets of statistical data on how similar productions have fared in the past.

The second, more intuitive idea of how AI will change movie making is to let deep learning algorithms fully automate how video and audio effects – including entire characters and complex backdrops – are rendered and fused with other scenic elements. This, of course, is already being done, but on a smaller scale; otherwise, the wildly successful ‘Star Wars’ and ‘Terminator’ sagas would not have been possible. Such effects have transformed movie making and pushed it into the computer-assisted digital realm. Now, however, AI systems are setting out to make entire movies by themselves. They are entering the creative field – with human sourcing and supervision at best.

A movie preview for the sci-fi thriller Morgan (2016) shows where this is going. Its trailer was produced with an AI technology that analyzed more than 100 scenes of the movie to identify the most frightening visuals. The selection process itself is fascinating, as the AI learns and decides by itself; however, a human editor still had to come in to properly cut the piece for public presentation.

An indication of where this could lead is a Silicon Valley garage outfit called “Arraiy“. Founded by two highly educated neuroscientists and software entrepreneurs, and backed by high-caliber venture capital, the company is developing algorithms that learn how to “rotoscope” camera footage: isolating persons or objects from their natural backgrounds and dynamically inserting them into other visual environments – without today’s uniformly colored “green screens“, and with the aim of streamlining the post-production process.

All this, obviously, is cost driven. So how will today’s highly paid specialists – toiling for days and weeks on end over fleeting visual effects appreciated only by their peers – fare in this drive toward total automation? Not to worry: “The visual artists are safe,” predicts Stefan Avalos, a filmmaker at a Los Angeles-based visual effects service. “But it will replace all the drudgery.”

Varsha Shivam

Varsha Shivam is Marketing Manager at Arago and currently responsible for event planning and social media activities. She joined the company in 2014 after graduating from the Johannes Gutenberg University of Mainz with a Master’s Degree in American Studies, English Linguistics and Business Administration. During her studies, she worked as a Marketing & Sales intern at IBM and Bosch Software Innovations in Singapore.