What brands can do
To be sure, Faikcheck is far from the first AI detection tool capable of processing images. But its development by an ad agency is relatively novel, and reflects a frustration felt by Graves that AI providers aren’t doing enough to help curb the flow of AI-generated misinformation. Instead, marketers have had to pick up the slack: consumer health company Haleon, for example, has adopted digital watermarking technology to authenticate its brands’ ads.
Another worry facing marketers, Graves said, is simply appearing next to AI-generated misinformation. Indeed, in a 2024 consumer survey conducted by IPG Mediabrands, only 36% of respondents found it appropriate for brands to appear next to such content. This concern increases the need for brands to identify material that may be fraudulent so that they can act accordingly, such as by applying misinformation reporting to their ad campaigns or leaving the platform in question altogether.
At the same time, marketers have to learn to let go of some control, Graves said. The fear is that one’s IP can now be exploited in ways that are both detrimental and believable. But Graves stressed that, in most cases seen so far, the creator is not profiting from the fake content, and thus little retaliatory action has been required.
“Brands used to tell consumers how to interpret their IP, but now people on social media are flipping that on its head,” Graves said.
Tools like Faikcheck will be important as AI-generated material floods the internet, but these solutions, too, must be taken with a grain of salt. Faikcheck has shown a tendency to conflate heavy editing with evidence that an image was AI-generated, Graves said. Faikcheck itself is built on top of ChatGPT, which perhaps helps explain this imperfection.
“AI would be the one technology to know how to do this because it knows itself,” Graves said.