Add to that location-based data from mobile phones, transactional data from credit cards and adjacent data sets like news and weather. When machine learning and advanced algorithms are applied to these oceans of digital information, we can intimately understand the motivations of almost every consumer.
These are undeniably powerful tools, and no one can blame the advertising industry for rapidly adopting them.
But AI also introduces troubling ethical considerations. Advertisers may soon know us better than we know ourselves. They'll understand more than just our demographics. They'll understand our most personal motivations and vulnerabilities. Worrisomely, they may elevate the art of persuasion to the science of behavior control.
Beyond these fears, there are more practical concerns about the use of AI in advertising: inherently biased data, algorithms that make flawed decisions, and violations of personal privacy.
For these reasons, we need a code of ethics to govern our use of AI in marketing applications and to ensure transparency and trust in our profession.
A system of trust
The more complete our understanding of an individual, the more persuasive our marketing can be. But each new insight into a consumer raises new questions about our moral obligations to that individual -- and to society at large.
For example, most would agree it's acceptable to leverage AI to target a consumer who shows interest in sports cars. But what if you also knew that consumer was deep in debt and lacked impulse control, had multiple moving violations, and had a history of drug and alcohol abuse? Is it still okay to market a fast car to this person, in a way that would make it nearly irresistible?
Rather than judging each case on its moral merits, it's more effective to establish guidelines that remove the guesswork. A system of transparency -- in which the consumer is more of a partner in his or her marketing, rather than an unwitting target of it -- is the ethical way forward.
Such a system would include three primary aspects: data, algorithms and consumer choice.
Data -- AI is fueled by data, which is used to train algorithms and sustain the system. If data is inaccurate or biased in any way, those weaknesses will be reflected in decisions made by the AI system.
Often, these data sets reflect preexisting human biases. Microsoft's unfortunate experience with Tay, the conversation bot that reproduced the hateful speech of those who engaged it, is probably the most infamous case study.
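The mechanics are easy to demonstrate. In the hypothetical sketch below, a naive frequency-based targeting model is trained on a sample in which one group was simply shown the ad far more often; the model then "learns" to favor that group even though both groups convert at identical rates. All groups, counts, and rates here are invented for illustration:

```python
from collections import Counter

# Hypothetical campaign history: (group, purchased) pairs.
# Group "A" was shown the ad ten times as often as group "B",
# so it dominates the positive examples -- a sampling bias.
training_data = (
    [("A", True)] * 80 + [("A", False)] * 120
    + [("B", True)] * 8 + [("B", False)] * 12
)

# Naive model: score each group by its raw count of past purchases.
purchases = Counter(group for group, bought in training_data if bought)

def target_score(group):
    """Higher score = more ad budget directed at this group."""
    return purchases[group]

# Both groups actually convert at the same 40% rate, yet the
# model funnels ten times the weight toward group A.
print(target_score("A"))  # 80
print(target_score("B"))  # 8
```

The flaw is not in the arithmetic but in the data: the model faithfully reproduces the skew it was fed, which is exactly the failure mode described above.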
Algorithms -- AI engines contain code that refines raw data into insight. Algorithms dictate how an AI system operates, but they are designed and developed by humans, which means their instructions should be "explainable."
Some call this "algorithmic transparency." Full transparency, however, is not realistic in this context: the most valuable intellectual property of an AI system lives in its algorithms, and agencies aren't eager to share that code openly. In addition, sophisticated machine-learning systems can be black boxes, unable to adequately explain the rationale for any particular choice. When consumers can't see how a system works or what it does for them, authentic trust can't take root. Explainability is the more practical standard: the ability to clearly explain the decisions an AI makes and why.
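To make "explainability" concrete: a simple scoring model can itemize exactly how much each input contributed to a targeting decision, which is the kind of account a black-box model cannot give. The features and weights below are purely illustrative inventions, not any real system's:

```python
# Hypothetical ad-targeting score: a linear model whose decision
# can be decomposed feature by feature. Weights are made up.
WEIGHTS = {
    "visited_sports_car_pages": 3.0,
    "clicked_past_auto_ads": 2.0,
    "recent_dealership_search": 4.0,
}

def explain_score(consumer):
    """Return the total score plus a per-feature breakdown."""
    contributions = {
        feature: weight * consumer.get(feature, 0)
        for feature, weight in WEIGHTS.items()
    }
    return sum(contributions.values()), contributions

score, why = explain_score(
    {"visited_sports_car_pages": 2, "clicked_past_auto_ads": 1}
)
print(score)  # 8.0
print(why)    # each feature's exact contribution to the decision
```

A model this simple trades predictive power for accountability; the point of an explainability standard is that some version of this per-decision account should exist even for more sophisticated systems.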
Consumer choice -- Simply put, consumers should be aware of the techniques being used to market to them, and have the option of participating in those campaigns. In order to make an informed choice, consumers need a clear explanation of the value exchange in any given campaign. What are they giving up? What are they getting in return? And they should be allowed to opt out if they are uncomfortable with the transaction.
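In practice, honoring that choice can be as simple as a consent check that gates every targeting decision and defaults to "no." The sketch below is one hypothetical way to structure such a check; the registry, user IDs, and function name are invented for illustration:

```python
# Hypothetical consent registry: targeting is allowed only for
# consumers who have explicitly opted in and not later opted out.
consent_registry = {
    "user-123": {"opted_in": True, "opted_out": False},
    "user-456": {"opted_in": True, "opted_out": True},
}

def may_target(user_id):
    """Default to NOT targeting unless consent is on record."""
    record = consent_registry.get(user_id)
    if record is None:
        return False  # no record: default deny
    return record["opted_in"] and not record["opted_out"]

print(may_target("user-123"))  # True: informed opt-in on record
print(may_target("user-456"))  # False: opted out after the fact
print(may_target("user-789"))  # False: unknown consumer
```

The design choice that matters is the default: an unknown or silent consumer is never targeted, so participation is an affirmative act rather than something to escape from.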
We are advertisers, not ethicists. However, that doesn't excuse us from considering the social impact of the work we do. We know there's a line that can -- and probably will -- be crossed with AI. Therefore, we must establish best practices for the use of AI in advertising, and understand the differences between what we can know, should know, and shouldn't know.