Here is a suggested approach for making movies with AI tools. This page describes my method and includes a list of the tools I use (updated 22 January 2026). Feel free to modify this process.
First, and most importantly, you need a great story. That is more likely to come from a human brain, possibly aided by AI tools. When I have an idea for a story I ask ChatGPT, Copilot and Gemini to suggest 30 or 40 versions of that idea. Then I sift through the results.
This works on the principle that nature over-delivers. Of the many eggs a frog produces, only a handful become mature frogs. Of the many ideas AI tools offer, perhaps a handful will appeal.
I have found a prompt like this useful: “Suggest 30 storylines in the style of [SNL, for example, or Mock The Week] about [insert story idea].” Sometimes you can find a gem or two within the suggestions. If you don’t, ask for more, or try a different AI tool.
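If you like to script parts of your workflow, the prompt template above is easy to generate programmatically. This is only a sketch: the helper name, the example topic and the optional use of the OpenAI Python client are my own assumptions, not part of the method described here. The helper simply builds the string you would paste into ChatGPT, Copilot or Gemini.

```python
# Sketch: building the brainstorming prompt programmatically.
# The optional API call assumes an OpenAI key in the OPENAI_API_KEY
# environment variable; the helper itself needs no key.
import os


def storyline_prompt(style: str, idea: str, n: int = 30) -> str:
    """Build the 'suggest N storylines' prompt described in the text."""
    return f"Suggest {n} storylines in the style of {style} about {idea}."


if __name__ == "__main__":
    # "a haunted food truck" is a made-up placeholder idea.
    prompt = storyline_prompt("SNL", "a haunted food truck")
    print(prompt)

    # Optionally send the prompt to a model (requires the openai package).
    if os.environ.get("OPENAI_API_KEY"):
        from openai import OpenAI

        client = OpenAI()
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        print(reply.choices[0].message.content)
```

Scripting the prompt makes it easy to fan the same idea out to several tools and compare the results, which is the over-delivery principle above in practice.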
If your characters are historical figures, AI tools can help you research their biographies. Find images of them on Google. I use ImageUpscaler to enhance the images because web files are usually tiny. Colourise b&w photos with tools like PhotosRevive or Fotor, or ask Gemini to do it.
Write a draft script. The AI tools listed above can be helpful for suggesting dialogue. But the human mind is still the best tool for developing a concept, especially for comedy. Sift through all the suggestions to mine the gems. Discard the chaff.
Save your draft script as a PDF.
Ask Gemini or Copilot to act as a professional script reader to give feedback on that PDF. Use a prompt like: “Assume you are a Hollywood script reader and provide feedback about [insert subject like structure or dialogue] for the attached script.”
Ask tools like Katalist, Copilot, ChatGPT or Gemini to provide shot lists, storyboards and camera angles, and to suggest lighting. Different tools will give you different results, so experiment and learn.
Generate images of your characters and their environment. Many people like Midjourney. Another option is a free creator like NightCafe. My favourite is Google’s Nano Banana. You need to create several visual variations of each character: e.g. long shot, mid shot, mugshot, close-up and extreme close-up.
Improve the quality of your images with a tool like Adobe Photoshop or Clarity Upscaler (the latter is available at FAL), or ask Nano Banana to do it.
The quality of your images influences the quality of the video created from them. I often create video based on still images because the output is more consistent than video generated from text prompts alone. Nevertheless, it’s vital to learn how to write prompts so you can evolve new versions of your generations.
I use KlingAI to create images of characters because the tool lets me create video based on those images within the same tool. Kling also offers excellent sound effects and lip-sync.
Midjourney publishes its own guidance on writing prompts.
Gemini and ChatGPT are great for creating storyboards and shot lists and suggesting camera angles or lighting based on the PDF text of your script.
For me the best current video-creation tools are Runway Gen-4, Kling and Veo 3. Note that all tools are evolving rapidly. Runway’s Aleph and Act-Two are formidable.
If you want to create storyboards I recommend LTX Studio, Katalist and Rubbrband (note the spelling for the last).
One great trick to ensure consistent characters when you create a film sequence is to take a screengrab of a piece of video you like and use that as the start image for the next piece.
Another trick is to use “image reference”, where you generate an image by combining aspects of images that you supply to the AI tool. For example, merge an image of a person with an image of a location and image reference creates a new image. I used an image of myself and an image of a Kremlin church in Moscow to generate a picture of myself in a part of the city I’ve never visited.
Assemble the video clips you create and begin editing. Often you will find that only bits of your videos are suitable. I work on a 10:1 ratio: that is, you will need to generate about 10 clips from a prompt to get one good piece of video.
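That 10:1 ratio is worth turning into numbers before you buy credits. A minimal back-of-envelope sketch (the function, its defaults and the five-second clip length are my own assumptions, not figures from any tool):

```python
def generation_budget(usable_clips: int, ratio: int = 10, seconds_per_clip: int = 5):
    """Estimate raw generations and render time implied by a roughly
    10-generations-per-usable-clip keep rate."""
    total_clips = usable_clips * ratio          # raw generations needed
    total_seconds = total_clips * seconds_per_clip  # total video rendered
    return total_clips, total_seconds


# A 12-shot sequence at the 10:1 ratio:
clips, seconds = generation_budget(12)
print(clips, seconds)  # 120 raw clips, 600 seconds of generated video
```

The point is simply that a short film of a dozen usable shots can easily mean a hundred-plus generations, which is why the subscription costs mentioned below add up.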
Some of the video-creation tools mentioned above also allow you to lip-sync dialogue.
If you choose to have a narrator, use ElevenLabs or Hume to create your narration and character voices. ElevenLabs is good for cloning voices but you need a subscription to get lots of cloned voices. ElevenLabs makes excellent sound effects.
Ultimately you will need to spend money on subscriptions. Think of the money as an investment in your film-making career.
Download video to your laptop or tablet and edit with your preferred software. I edit with the free version of DaVinci Resolve.
Make soundtracks with Udio or Suno. Insert music and sound effects into your editing software.
Add titles, name supers and credits as the final stage of editing. I sometimes use the free Vont app to create animated titles on my iPhone that I drop into my editing timeline.
AI video-creation tools mostly output at 1080p. If you want to upscale to 4K, use Topaz VideoAI, which also unifies frame rates. It’s expensive at USD 299 a year, but you have few options if you want 4K.
You will need to find a way to distribute your film and then await recognition of your genius. I’m still researching AI tools that can help here, though Zingroll (see below) might be an option. Or investigate Holywater, a Ukrainian company that provides a platform for films made with AI tools.
Films made with stills: It’s fun to make movies using still images or cartoons that you create with AI tools like Midjourney. Write a draft script, then use a tool like Play, Murf or ElevenLabs to record a narration by copy-pasting from your script.
Add audio tracks to your editing software (I edit with iMovie, KineMaster, LumaFusion or DaVinci Resolve, depending on the complexity of the story). Then insert stills/cartoons to match the audio track/s. I use the Ken Burns effect to create the illusion of movement.
To finish, add titles, sound/visual effects and a music soundtrack before uploading.
Some helpful websites:
A chap with the username PJ Ace offers lots of excellent advice on his website.
Zingroll is a distribution service for AI-generated films. It appears to be in beta as of early 2026. Their website offers tips on a recommended process.
Arcana has a useful website that teaches movie-making from an AI perspective. The site offers a range of online courses but they are expensive. Here is a link to a 20-minute film made with Arcana tools called Echo Hunter.
This 17-minute film from January 2026 considers the key question: will AI destroy studios in Hollywood/Bollywood, or will it just change the way we tell stories? Worth watching.
Project Syndicate publishes long think-pieces about important subjects. Here is a thought-provoking piece about AI and the movie/media industries.