Fred & Farid Los Angeles used AI to help visualize the "Silenced" campaign for Greta Thunberg's nonprofit organization Fridays for Future.
Credit: Fridays For Future
In 2023, businesses, individuals and entire industries have embraced artificial intelligence with unprecedented enthusiasm and adoption rates. With programs like ChatGPT coasting into the mainstream, generative AI became accessible to a wider audience.
But this widescale adoption also brought to the fore the inherent tensions that come along with an increasingly AI-driven world, especially in the wake of the Hollywood writers’ strike. Questions raised during the strike underscored the need for regulation around AI in film and television, but similar concerns apply to the ad industry, for both the people doing the creative work and the brands they’re doing it for.
For this month’s spotlight, we polled the Amp community on how the industry can begin to regulate the technology as we move into an increasingly AI-driven advertising landscape.
A critical missing component in the current AI landscape is the establishment of clear boundaries, the kind that would help keep both users and creators safe. Diederik Veelo, head of innovation at Ambassadors and founder of Vocoda.ai, emphasized the need for AI to operate within defined parameters.
“Since the current state of AI isn’t responsible for its own actions yet, we need to provide it with safe boundaries where it can operate,” said Veelo. “For the foreseeable future, its makers will be responsible for the actions of AI, like a parent to its child. We need to find the balance between innovation and preventing harmful use of the technology. I don’t believe we should be slowing down or preventing AI from developing further, but we do need regulation to catch up to help us navigate how we use the technology responsibly within the industry.”
The consensus on regulating AI among many industry professionals is clear: Regulation is essential. “We're already in a new golden age of hucksterism, and AI could make our post-truth landscape all the more treacherous to navigate ethically,” said Douglas Brundage, founder at Kingsland.
Brundage also referenced increased vulnerability to cyberattacks, especially in the context of the recent cybersecurity breach at two Las Vegas casinos.
“What I hope to see is unlikely—comprehensive regulatory oversight of AI across sectors, led by people who understand it,” he said. “Congress' TikTok hearings, which should have elucidated potential national security threats, instead served to demonstrate to the American public how out of touch our leadership is on technology. AI is like nuclear technology; it can create a better world or be a weapon of mass destruction. Something this powerful needs serious laws around its use, including in advertising and beyond.”
Others echoed the call for a formal entity overseeing applications of AI in advertising to protect both brands and audiences.
“Establishing core principles for an AI-focused regulatory agency and laying out the scope of their responsibilities should be job one,” said Mike Margolin, chief experience officer at RPA.
Margolin pointed to U.S. Rep. Ted Lieu’s (D-Los Angeles County) fitting analogy to the FDA in a recent podcast interview with Politico, highlighting the need for a regulatory body in Washington with a specialty in the field.
“In terms of what we expect to come out of legislation, it’s sure to benefit big tech firms,” said Margolin. “The lesson of GDPR and CCPA is that compliance is complex and involves elite speed and scale of product planning, deep and skilled legal teams and highly active lobbying arms. Of course, society would benefit from greater transparency, and business innovation thrives from competition, but AI will surely be led by big tech firms. For enterprise-level marketers and for agencies, it’s another reminder to partner wisely.”
Margolin also noted, however, that when it comes to intellectual property, smaller players may be hurt. As long as there are no changes to Section 230—which grants social-media platforms broad “safe-harbor” protections against legal liability for any content users post on their platforms—we can expect “the growing army of creators on TikTok to go wild with generative AI innovation,” he said.
“That’s certainly making the TV and movie businesses anxious, and it will likely continue to amplify a growing question for the advertising business: What’s content, what’s an ad and how do we regulate the distinction at scale?” he said.
Regulatory challenges
Brian Yamada, chief innovation officer at VMLY&R, acknowledged the challenge that such a regulatory board would face at this point in time.
“I don't know how the legislators have a chance,” he said. “I don't know how someone who's not in the space can keep up at all. It's changing rapidly and we haven't gotten basic data privacy fully figured out.”
Yamada added that he hopes that, at some point, consumers’ AI interests are protected and that no one can use anyone’s likeness—image, voice, video—without consent.
“That's not just a right that talent needs to have; it should be for all,” he said. “And we need to have basic transparency of AI applications and fully generated outputs to help users understand when they are interacting with AI, and when something is fully generative to clarify what's real versus what's created.”
The future of AI regulation remains uncertain, but it likely won’t be simple, nor easy to implement.
“If history is any indication, it will be a circus at first probably, followed by a push for militant over-correction,” said Anne Thomason, senior VP of business affairs at Barkley. “Hopefully, we’ll arrive at a happy medium of protecting intellectual property rights as well as an individual's right to publicity and the potential to monetize that. Ideally, we keep researching and evolving with the regulations as we learn more.”
Thomason said a number of questions remain to be answered: How much human involvement is needed for a work to be protected by copyright? Will that change over time? What is fair compensation for an “AI you”?
“We need parameters in place to ensure that no entity profits more from using a person’s likeness and voice than that person themselves,” she said.
Over time and with the right structure in place, the AI landscape could become more ethically navigable, according to John Higgins, CEO and chief creative officer of OS Studios.
“Film ratings guide our movie choices, and theme park height restrictions ensure safety; both demonstrate the importance of human intervention and judgment,” he said. “In marketing, as AI's influence grows, similar checks are vital. It's likely that soon, AI regulations will echo the familiar structures of copyright and industry ethics. In essence, merging advanced AI with core marketing ethics is the roadmap for the industry's future.”
The vital human element
In the realm of artificial intelligence, one fact remains clear: AI outputs are only as effective, accurate and considerate as the human inputs that drive them. For the ad industry to continue scaling its use, human oversight remains a critical component.
“The good news for agencies is that more AI actually might mean more humans,” said Summer Burton, creative director at Praytell. “And rightfully so: As AI has become more prevalent in our everyday lives, the need for human expertise has become more evident, not less. Whether it’s viewing AI output through the lens of diversity, equity and inclusion, or simply ensuring the tone of voice is relevant to the target advertising or marketing audience, there will always be a need for a human touch.”
Burton underlined the risk of bias inherited from those working on tools such as OpenAI, IBM Watson and Caffe, emphasizing the importance of a diverse and inclusive perspective.
“The biggest issues we encounter with AI tools are reproductions of bias, cultural stereotypes and cultural insensitivity,” she said. “While there are plenty of areas where federal regulation is needed, we believe it is especially important for the ad industry to self-regulate in the short term by applying an incisive DE&I lens to work with AI. Following and supporting groups such as the Algorithmic Justice League, which is on the front lines of this work, is a good place to start.”
In order for the AI tools to evolve toward greater DE&I, humans will play a pivotal role in continuing to do that work in the real world, according to Scott Madden, chief strategy officer at Connelly Partners.
“Savvy use of AI will forever depend on the evolved learning of the humans who feed AI its inputs,” he said. “One needs to look no further than advertising’s love affair with data analytics for a historical parallel that validates the need to guide the power of AI with human oversight.”
Madden added that the impact data analytics can have on marketing performance relies predominantly on human analysis and interpretation, not on the machine processing of the data itself.
“Many brands today have robust data-mining capabilities, yet those large investments in infrastructure are only as effective as the skills of the interpreters,” he said. “Although AI’s impact on advertising may be significantly more potent than present-day data analytics, human oversight and tool savvy will determine its true potency.”
Optimism remains
While the surge in the use of AI tools across industries has sparked concerns over job security, others maintain a more optimistic outlook.
“There’s much discourse these days in the brand and advertising industries over whether or not the boom of AI tools threatens our jobs,” said Madi Rinaldi, content manager and copywriter at [L]earned Media. “However, they require human oversight in order to be beneficial. In an ideal world, generative AI tools serve as a jumping-off point and expansion tool for creatives as opposed to a replacement. For any chance at impactful branding or advertising, thoughtful strategy requires human oversight and minds to create connections between brands and consumers.”
Others shared a similar perspective, viewing AI as a complement to human creativity rather than a substitute.
“At Fred & Farid, we believe that AI is a powerful tool that has the ability to enhance our creative thinking, but as creatives we should never limit ourselves to what AI gives us,” said Eileen Zhao, strategy director at Fred & Farid Los Angeles. “We've found success in using AI to help visualize our ‘Earth Is No Toy’ campaign as well as our most recent ‘Silenced’ campaign for Greta Thunberg's nonprofit organization Fridays for Future. However, the ideas for both of these executions and all the work we do always originate from human talent. To us, there is a role for AI—to inspire and push our thinking forward. But it should always be accompanied by human talent.”
As an industry focused on fostering human connections and relating to people on an emotional level, the question of how big of a role AI should play in creative work is an important one.
“I’ve heard many advertising leaders express enthusiasm that AI will help make more effective work, more efficiently,” said Greg Harrison, chief creative officer at Mocean. “But phrases like ‘more effective’ and ‘increased efficiency’ are the words not of creativity, but of commodification.”
Harrison said there’s a danger in creative leaders getting seduced by the false promise that AI can somehow lessen or remove the risk essential to the creative act, and that the best brands and creatives understand that greater value, including more money, is the byproduct of breakthrough creativity.
“At the heart of all great creativity is a person with a vision for something that doesn’t yet exist, who listens to an inner voice, filters it through their personal taste and years of experience, and takes risks in finding unique ways to move people with their work,” he said.
Balancing regulation with innovation
In the intricate intersection of AI and advertising, where automation collides with creativity, a host of complex problems will need to be confronted. One such issue is the need for a framework around copyright and attribution—“one of the biggest yet-to-be-solved aspects of AI and advertising,” said Oliver Dore, partner of technology at Work & Co.
“If you use generative AI to create visual assets for your campaign, and the AI model is trained on and uses copyrighted material to produce the asset, the legal risks remain a murky area,” he said.
Dore underscored the risks of operating without any guardrails, which could potentially harm businesses’ reputations and financial stability.
“Until we have firm standards around these issues, humans with subject-matter expertise are needed to identify and correct errors produced by AI, identify risks and ensure compliance,” he said. “Human-AI interaction also affords the opportunity to oversee the analysis of the first- and third-party data being used to drive AI output, so we all collectively have transparency into which signals inform the ads put in front of users.”
Although the concerns and challenges are real and need to be addressed, there is a lot to be excited about in the realm of AI innovation, and balancing those future goals with the need for firm boundaries will be critical in not stifling existing momentum.
“If other industries regulate AI, I'm hopeful advertising remains open so that we can push it and see just how creative we can get with it,” said Dave Gregg, staff creative producer at Community Films. “Honestly, it is a groundbreaking tool. Everyone in the industry should be playing with it, learning it and understanding the potential, the limitations and, yes, the dangers.”
About
Amp is a platform that’s integrated with Ad Age and Ad Age Creativity, allowing you to leverage our editorial credibility while showcasing your expertise, accolades and campaigns. For more information visit our FAQ page.
Ashley Joseph is a writer, editor and content strategist based in Montreal, and has been a Contributing Editor for Studio 30 covering stories from the Ad Age Amp community since 2018. She also writes about food, travel and beauty when not developing content for brands.
Ad Age Studio 30 is the creative content arm of Ad Age. Built on the same bedrock of journalistic integrity, Ad Age Studio 30 specializes in multichannel membership content for Ad Age subscribers, as well as custom and sponsored content that resonates with our audience. To partner with Ad Age Studio 30, email James Palma at [email protected].