This interview has been lightly edited and condensed for clarity.
How did you learn about the contest?
I started following Curious Refuge because they offer classes and free AI tools. I signed up for their email list and followed their YouTube channel. I knew they had hosted contests before, but I did not feel like I had the skills to create something yet. So when they announced this one in August, I thought, "Oh, here is something in my world."
How did you take the assets and turn them into video?
They provided a lot of product photos and the font, but none of the imagery in the spot. I applied a traditional storyboarding and scriptwriting framework and started trying to make imagery in Midjourney. I knew the opening would be a guy driving into town in an old car painted with swirls and seeing the world change into a swirl world. I did that in Midjourney, then put it into Firefly for edits and adjustments, like changing the license plate. I used Midjourney to create clips, then used traditional video editing software to cut them together.
How long did the project take you?
From seeing the brief to uploading something was two weeks, working on it about four to five hours a day. Other companies have asked me how long AI ads take, and they think it's 20 minutes. There is still a lot that goes into AI video edits; there is no magic button. But it's still faster than if you had shot it. That's months of presentations, location scouting, casting and costumes, set painting. It was also two weeks with no client review, so if there had been more touchpoints, say if they wanted mood boards or stills, it might have taken longer.
What challenges did you run into?
I learned you can have the best intentions with your storyboarding and shot list, but you have to be willing to reorient your creative vision quickly. I would sometimes get frustrated with the video-generating side because it would not do the thing I wanted, even if I changed the prompt or image. I really wanted to have the guy in the car holding a can of Whipnotic, but could not get him to hold it believably.
Working with AI video generation is like working with a creative collaborator that has an artistic mind of its own that also doesn’t mind bending physics and reality.
Another funny moment was the shot with the older man holding the milkshake at the end. I just wanted him to take a sip, but every time I ran that clip in Runway, it kept wanting to jam the drink and whipped cream into his face.
Were you compensated for your work? Has the contest led to new project leads?
I won second place, so I was compensated for that. Whipnotic then worked with me to make small revisions before the ad aired, and they paid for those edits as well.
I have gotten leads from the Whipnotic work. Some have led to paid work, and some may lead to more in the future.
What is your take on how AI will impact video work?
There is a sense of responsibility when using AI tools. I approach them cognizant that they are powerful and that many people are afraid of them. But can these tools offer me a different path to solve a problem or achieve something I could not do without them?
It wouldn't make sense for me as a video editor to not use Premiere, and similarly, it would not make sense to be at this point in my career and say, "I am not going to learn AI because that is not what I learned originally." I have tried to open my mind to how these tools can help, and to laugh at them when they make mistakes.