Filmmakers have a decades-old fascination with artificial intelligence, but now they have the real thing to study. This year, the documentary Eternal You looked at ‘deadbots’: AI simulations of deceased loved ones that let the bereaved go on talking to them. In fiction, Chris Weitz brought us AfrAId, a tech horror about an Alexa-style domestic assistant that runs out of control – an unnervingly plausible extrapolation of our own experiences of AI-assisted appliances.
Behind the scenes, the use of AI in filmmaking remains controversial after last year’s strikes by US screenwriters and actors, but 2024 has brought further experimentation and, as the year ends, the debut of AI in mainstream feature and documentary releases.
Two distinct trends have emerged: the creation of movies wholly through AI, and the incorporation of specific generative AI tools into the workflows of orthodox film production.
Complete movies can now be made with text-to-video tools, which generate video sequences from written prompts. February saw huge excitement around Sora, developed by OpenAI – the company responsible for ChatGPT – when it showcased samples of photorealistic AI video, but the company has not made the tool publicly available. OpenAI’s rival Runway remains the market leader.
This year’s innovations have focused on animation, with the race to release the first fully AI-generated animated feature won in July by Hooroo Jackson’s stylised genre anime DreadClub: Vampire’s Verdict. Soon after, the new AiMation Studios, founded by UK-based Pigeon Shrine, released Tom Paton’s Where the Robots Grow. The two filmmakers take starkly different approaches: Paton’s ‘synth-assisted’ production keeps actors for the voice performances and Paton himself as screenwriter, whereas Jackson hands AI complete control. Jackson even dispensed with text-to-video AI, which he had used in last year’s Window Seat, instead generating 17,000 still images with MidJourney and then animating that output, as sketched below.
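As an illustration of that last step – and only an illustration, since Jackson has not published his pipeline – here is a minimal Python sketch of how a folder of numbered stills might be assembled into video, assuming ffmpeg is installed; the frame rate, file names and paths are all hypothetical.

    import subprocess

    # Assemble numbered stills (frames/frame_00001.png, ...) into a video.
    # 12 frames per second is a common rate for limited-animation styles.
    subprocess.run([
        "ffmpeg",
        "-framerate", "12",             # playback rate of the stills
        "-i", "frames/frame_%05d.png",  # numbered input images
        "-c:v", "libx264",              # widely supported H.264 encoder
        "-pix_fmt", "yuv420p",          # pixel format most players accept
        "out.mp4",
    ], check=True)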
But are the films any good? In a word, no: both have been critically slammed. The failures stem from the scripts rather than the animation: Jackson used Claude AI and ChatGPT to draft his screenplay, giving the Writers Guild some reassurance that its members needn’t be worried – yet. Critics have been much warmer about the films’ visual qualities, considering them superior to most CGI animation.
Robert Zemeckis is a director whose career has involved constant technological innovation, from inserting Tom Hanks into JFK’s White House in Forrest Gump (1994) to the breakthrough use of motion-capture computer animation in The Polar Express (2004). In 2024, he used deepfake technology in the production of Here, another Hanks vehicle. The movie requires extensive ageing and de-ageing of its cast, and during the shoot at Pinewood the AI company Metaphysic managed this for Hanks and co-star Robin Wright in real time, in camera. That spared the production elaborate prosthetic make-up and months of post-production VFX, and meant Zemeckis could see the final look of the performances in front of him, on set. It also eliminated dozens of jobs. The US release of Here in November heralded the arrival of AI in the Hollywood mainstream (the UK release is set for 17 January), but reviewers have been unimpressed by AI’s achievements in the film, with Screen Rant complaining that “the faces of the de-aged actors are as distracting as a Snapchat filter.”
Better results have come from filmmakers deploying AI to assist film sound. A leading company specialising in ‘voice cloning’ is Respeecher, based in Kyiv. CEO Alex Serdiuk tells me: “With voice, the industry is very picky – it’s either good, or unusable – but whatever emotional range a human can have, we can translate that to the screen.” Respeecher created all the dialogue and singing for Michael Gracey’s Better Man, the Robbie Williams biopic in which an animated chimp plays the singer. It also contributed to the National Geographic documentary Endurance, about Ernest Shackleton’s ill-fated 1914-17 Antarctic expedition. Archive black-and-white footage survives of his team and their stricken ship on the ice, but in the silent film era Shackleton’s voice was recorded only on wax cylinders. Respeecher used AI to clean up the sound quality of those recordings and to generate a Shackleton voice clone to read from his diaries.
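Respeecher’s cloning models are proprietary, but the first stage it describes – cleaning up noisy archival audio – can be sketched with open-source tools. Here is a minimal example, assuming a WAV transfer of a cylinder recording and the Python noisereduce and scipy packages; the file names are hypothetical, and this stands in for, rather than reproduces, Respeecher’s method.

    import noisereduce as nr
    from scipy.io import wavfile

    # Load a digital WAV transfer of the archival recording.
    rate, data = wavfile.read("cylinder_transfer.wav")

    # Spectral-gating noise reduction: estimate the noise profile
    # from the recording itself and suppress it across the clip.
    cleaned = nr.reduce_noise(y=data.astype("float32"), sr=rate)

    wavfile.write("cylinder_cleaned.wav", rate, cleaned)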
Documentary film continues to be at the forefront of AI innovation. In the UK, the recent relaunch of Hammer Studios was celebrated in Benjamin Field’s documentary Hammer: Heroes, Legends and Monsters. As the film closes, its hooded narrator pulls back his cloak – to reveal the long-dead Peter Cushing. Christian Darkin, who created the sequence, used AI to merge three elements: “The audio from one of Cushing’s films, the face from another, and the rest of the body we’d filmed with an actor.” Where Disney took months to ‘resurrect’ Peter Cushing for Rogue One: A Star Wars Story in 2016, AI enabled Darkin to achieve it in three days, working alone.
AI is not just transforming film production, but helping cinemas to identify audiences. Tim Richards, chief executive of Vue Cinemas, says that every programming decision in the chain is now made by AI. And the rapid progress isn’t slowing down. Darkin tells me that 2025 will be the year of artificial general intelligence – when AI matches or overtakes human cognitive ability – and no one can predict what that will bring.