Marketers should also implement their own risk mitigation methods, said Lartease Tiffith, executive VP for public policy at the Interactive Advertising Bureau (IAB). One example, he proposed, is having humans review AI outputs to catch inaccuracies. Despite its human-sounding answers, generative AI has a reputation for sometimes spouting information that appears factual but is not, a tendency often referred to as "hallucinating."
Another internal practice companies should consider is holding regular conversations with AI partners to ensure everyone is on the same page, said Tier. Tech companies do not have a great track record of following the rules and doing right by everyday users. As such, brands should verify, to the best of their ability, that these partners are acting responsibly and preparing for government regulation.
Much of the policy to come, however, is expected to fall squarely on AI companies. Developers may, for example, be required to obtain special licenses to create AI and to follow compliance rules once licensed, said Ivan Ostojic, chief business officer of telecommunications company Infobip. Lina Khan, chair of the Federal Trade Commission, recently exhorted regulators, in an opinion piece published in The New York Times, to crack down on everything from collusion and monopolization to fraud enablement and privacy infringement.
Transparency with consumers should be another consideration for brands as they await official regulation.
"Advertisers need to make sure they aren't surprising anyone," said Tiffith. However a brand decides to proceed with AI, he suggested, it should clearly disclose the use of such materials.
The media industry has already seen how failing to disclose this information can cause significant backlash. Earlier this year, tech outlet CNET was found to be quietly using AI to write short articles and attributing them to "CNET Money Staff." Those articles were later discovered to contain numerous inaccuracies. Now, CNET writers are demanding assurances about how AI will be used as the outlet's reputation reels.
What brands are saying
Salesforce, which has launched a CRM-focused AI model called Einstein GPT, expects regulators to consider how generative AI intersects with existing data protection laws in order to mitigate harm to users, Hugh Gamble, Salesforce's VP of federal affairs, wrote in an email. The EU is already doing so with respect to its privacy law, the General Data Protection Regulation. The U.S., however, has no such wide-reaching framework, leaving regulators without a comparable foundation on which to base policy.
In the meantime, Salesforce has established ethical guardrails for its own products. The company is also staying in touch with regulators as the situation develops.
Plant-based food brand NotCo, which has used generative AI in recent ads, views regulation of the industry as a dilemma because rules could be seen as hindering innovation, particularly since these new tools are already being used to solve real problems. The brand does not expect policy any time soon.
“We are not preparing for any impending regulations at this time,” wrote Aadit Patel, VP of product and engineering at NotCo, in an email.
Other brands are taking a more restrictive approach. Likely out of caution, some BBDO clients have rejected agency work that used generative AI, Ad Age previously reported. In April, BBDO Worldwide President and CEO Andrew Robertson issued a memo urging employees to refrain from using AI tools in client work unless formally permitted to do so by the agency's legal team.
“While we are excited by the potential to incorporate generative AI into our services, we want to do so in a way that avoids unresolved issues such as potential violations of copyright and ownership and confidentiality concerns,” Robertson wrote in the memo.
And some companies are outright prohibiting employees from using AI tools under any circumstances. Apple last week became the latest to do so, disallowing platforms like ChatGPT over concerns that employees could divulge confidential data. JPMorgan Chase, Samsung and Verizon have implemented similar restrictions.
While the IAB supports AI experimentation, Tiffith understands why companies may want to proceed with such caution.
“Brands should avoid becoming guinea pigs for when something goes wrong,” he said.