A Digital Monoculture Is a Bigger Threat Than the Terminator Scenario


[Image: Waze planned drives on iOS. Credit: Courtesy Waze]

While Waze leads us to our destinations via the quickest route, our dependence on this kind of decision-support system may also be the quickest route to a monocultural society.

You've said the word "algorithm" a thousand times this year, and you may have even written out your algorithmic goals in English, but have you ever coded an algorithm? Do you really know how any AI model is performing? What tiny mistakes (or purposeful small changes) are being made to subtly guide our decision-making?

Humans Are Incredibly Bad Decision-Makers
To make a good decision, you have to assess risk properly. Sadly, people almost always assess risk improperly. For example, you have a one in 11 million chance of being in a plane crash, but a one in 272 chance of being killed in a car accident on the way to the airport. You have a one in 20 million chance of being killed in a terrorist attack, but a one in 300 chance of being assaulted with a firearm. By the numbers, cars and guns are far more life-threatening than planes and terrorists. So it should be easy to find people to advocate cutting anti-terrorism spending in favor of funding programs to reduce gun violence. But without a clear understanding of the risks we face each day, our decisions are controlled by our hearts, not our heads.
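A quick sketch of the arithmetic, using only the odds quoted above:

```python
# The odds quoted in the text, expressed as "1 in N" chances.
odds = {
    "plane crash": 11_000_000,
    "car accident": 272,
    "terrorist attack": 20_000_000,
    "firearm assault": 300,
}

def times_more_likely(a, b):
    """How many times more likely event a is than event b."""
    return odds[b] / odds[a]

# Cars vs. planes: roughly 40,000 times more likely to kill you.
print(round(times_more_likely("car accident", "plane crash")))
# Guns vs. terrorism: roughly 67,000 times more likely.
print(round(times_more_likely("firearm assault", "terrorist attack")))
```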

That said, not every decision is emotional. We are often called upon to make conscious decisions, and, confirmation bias aside, we have developed statistical tools that can increase our odds of success. In his book The Wisdom of Crowds, James Surowiecki explains how a diverse collection of independent thinkers can be used for decision-making.

The book opens with a story about a crowd at a country fair. Everyone in the crowd was asked to guess the weight of an ox, and while none of them got the correct answer, the average of all of the answers came closest to the ox's actual weight. Importantly, this methodology requires individuals to think independently: if the guessers had been influenced by expertise, group dynamics or other biases, the results would have been skewed.
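The effect is easy to reproduce in simulation. A minimal sketch (the true weight of 1,198 pounds follows Galton's account of the fair; the noise levels and the "loud expert" anchor are invented for illustration):

```python
import random

random.seed(42)
TRUE_WEIGHT = 1198  # pounds, per Galton's account of the ox

# Independent guessers: each one is noisy, but the errors point in no
# particular direction, so they cancel out in the average.
independent = [TRUE_WEIGHT + random.gauss(0, 150) for _ in range(800)]

# Influenced guessers: everyone anchors on one confident expert's
# wrong number, so their errors all point the same way.
anchor = TRUE_WEIGHT - 300
influenced = [anchor + random.gauss(0, 50) for _ in range(800)]

mean = lambda xs: sum(xs) / len(xs)
print(round(mean(independent)))  # lands close to 1198
print(round(mean(influenced)))   # lands near the anchor, ~300 pounds off
```

The independent crowd's average converges on the truth; the anchored crowd is precise but wrong, which is exactly the failure mode Surowiecki warns about.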

We've Set the Bar Pretty Low for AI
In an "observe and react" architecture, AI systems can use algorithms that loosely mimic a "Wisdom of Crowds" approach. If the AI system just makes "pretty bad" decisions (as opposed to "incredibly bad" decisions), the improvement will be measurable. But as we start to build interactive man/machine partnerships, things are going to change.

If only a few people were driving around in cars with relatively accurate traffic congestion maps, they would benefit from the knowledge and enjoy an alternative, and presumably quicker, route. However, the overall impact of AI on the larger traffic system would be negligible.

But I just logged on to Waze, and there are over 53,000 Wazers around me. All Wazers are seeking the fastest route to their respective destinations, and Waze is doing its best to help them. Are we all being sent via the best route? The better Waze gets, the more people will use it, and the more we use it, the smarter Waze will become, until … we are totally dependent on Waze to get us where we need to go. With 53,000 vehicles around New York City being "routed" by Waze, the impact on the larger traffic system is material.
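To see why mass adoption changes the system, consider a toy two-route model (purely illustrative; the linear congestion function and its constants are assumptions, not Waze's actual algorithm). At an even-handed equilibrium split, every driver's trip takes the same time; but if every driver herds onto whichever route was fastest last round, the crowd oscillates and everyone does worse:

```python
def travel_time(share, base, slope):
    """Travel time on a route grows linearly with the share of drivers on it."""
    return base + slope * share

def simulate_herding(rounds=8, share_a=0.5):
    """Every round, all drivers jump to whichever route was faster last round."""
    history = []
    for _ in range(rounds):
        t_a = travel_time(share_a, base=10, slope=20)
        t_b = travel_time(1 - share_a, base=12, slope=20)
        history.append((share_a, t_a, t_b))
        share_a = 1.0 if t_a < t_b else 0.0  # everyone herds to the winner
    return history

# Equilibrium split: 55% on route A gives both routes a ~21-minute trip.
equilibrium = travel_time(0.55, 10, 20)

# Under herding, the crowd piles onto a single route (30+ minutes)
# and flips between routes round after round.
for share, t_a, t_b in simulate_herding():
    print(share, t_a, t_b)
```

With only a few drivers following the app, their choices barely move the congestion function; with all of them following it, the recommendation itself becomes the dominant force on the system, which is the point of the paragraph above.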

The Road to Digital Monoculturalism Is Paved with Good Intentions
This would not matter as much if it were just about Waze. But Waze is a proxy for its parent company, Google. And it is also a proxy for Amazon and Facebook and IBM and Microsoft -- the other founding members of the newly formed "Partnership on Artificial Intelligence to Benefit People and Society." Add Apple, and you now know the names of the very few "artificial intelligences" that you have been training to make decisions for you.

As AI and machine learning improve, more people are going to benefit, and through our interaction with the machines, the AI systems will make better decisions for us and in turn become more and more popular. And then it will happen: a small number of AI systems (most likely the aforementioned "Partnership on AI" group) will be making most of our decisions for us. We might not even notice that in the process, we devolved our diverse, multicultural world into a collection of distinct digital monocultures.

AI will sort our news feeds (it already does), our entertainment choices (it already does), our way-finding (it already does), and the energy efficiency of our homes and offices (it already can, but it is not widely deployed); make our financial decisions (it mostly does); make our medical decisions; make our business decisions; and probably make our political decisions too. The list of potential AI applications is bounded only by need and imagination.

I Don't Know What I Don't Know
I'm not really worried about rogue computers threatening our lives. I'm worried about the small number of programmers and coders charged with realizing the financial and political goals of their patrons. Could a ubiquitous social network skew or even direct an election? Could a traffic control system delay certain people from getting to work on time? Could an AI-enhanced financial services company deny loans or insurance due to zip code or race because it is the "best outcome" based on its programming? Could we train the AI that controls our news, communications and entertainment to restrict us to our comfort zones without even realizing what we've done?

I can imagine a world filled with digital monocultures, isolated from one another by feedback loops. Cognitively computed comfort zones will be far worse than our self-crafted ones, because we won't notice that, as a few artificial intelligences strive to algorithmically optimize our lives, the price will be our incredibly bad human decision-making.

Which I'm sure we're going to miss.