What We Fear When We Fear Artificial Intelligence

A version of this essay appeared in Word on Fire’s Evangelization & Culture journal No. 19, on Artificial Intelligence.

I have a confession to make: I’m not afraid of Artificial Intelligence. But maybe I should be.

With Artificial Intelligence — AI — human beings have given machines the ability not just to do what they are programmed to do, but to learn to act in ever more efficient ways, augmenting their programming with new capabilities and strategies that human beings can’t predict — and can’t control.

That is, prima facie, something to be feared. But as a man born in 1969, I have spent a lifetime in existential fear of forces I can neither understand nor control — and it’s getting a little old.

I have personally joined my fellow Americans in: Fear of global famine, fear of a new ice age, fear of Japanese economic dominance, fear of Chinese economic dominance, fear of economic collapse, and fear of terrorists. The world finds new things to be afraid of, but I find that there is nothing new under my sun.

I hear worries about political extremism, but I remember watching Patty Hearst on the Evening News. My children have seen UAPs with their own eyes on YouTube; but I once saw UFOs with my own eyes in the daily newspaper. Critics raise the credible threat of AI disasters — but I survived the credible threat of the Y2K disaster.

Not all of these fears were unfounded; few of them were 100% untrue; and our fears, in fact, helped us put up guard rails, and move on. Thus it has always been with technologies.

Every technology has had its planned gain and unintended loss, but the Church reacts to them all the same way: We embrace them. Thus, the first run of the printing press was the Gutenberg Bible, the Catholic Vulgate. But then, after Martin Luther posted his 95 theses the old-fashioned way, on the church door, the printing press made them the first post to “go viral.”

Guglielmo Marconi put Pius XI on the radio in 1931, introducing the broadcast with what could be the Church’s technological mission statement: “With the help of God, who places so many mysterious forces of nature at man’s disposal, I have been able to prepare this instrument which will give to the faithful of the entire world the joy of listening to the voice of the Holy Father.” The Church went on to use the mysterious technological forces of the phonograph, film, television, CDs, and the Internet to give the world that joy again and again for the next 100 years.

I am a man of the Church, so I embrace technologies, and refuse to join the fearful in their bunkers. The way I look at it, it’s a case of “Fool me once, shame on you. Fool me with every dystopian film for decades, every election since Reagan, every recession since Carter, and with a lifetime of nameless dread at modernity, then, well, shame on me.”

The boy has been crying wolf my whole life long, and I have become inured to the sound of his voice.

But then I remember something deeply disquieting: The thing that makes “The Boy Who Cried Wolf” such a compelling story isn’t that the boy was wrong repeatedly; it’s that the last time he spoke up, he was right.

So, is he right this time?

The Gorilla Problem

What do we fear about AI? I think we fear what a chess piece would fear, if it could.

In The Age of AI, Henry Kissinger and his co-authors describe how AlphaZero beat Stockfish at chess in 2017. Stockfish is an old-school computerized chess opponent: Programmers inputted the best of human chess strategy into a machine that could recall the best moves of all time in a moment. AlphaZero, developed by Google’s DeepMind, wasn’t told anything about human strategy. It was simply given the rules and object of the game.

After just four hours of training by playing games against itself, AlphaZero beat Stockfish 155 games to 6, with the remaining games drawn. But it was how it won that was chilling. AlphaZero sacrificed its own most precious pieces — including its queen — to move in on its enemy with a cold efficiency greater than any human mind ever conceived.

“Chess has been shaken to its roots,” said grandmaster Garry Kasparov after the match. Kissinger and his team fear that “security and world order” will soon be “shaken to its roots” as well. The unique abilities of AI will mean that the “delegation of critical decisions to machines may grow inevitable.” And if that happens, what precious knights and queens will AI sacrifice for its goals?

AI entrepreneur Mustafa Suleyman, in his book The Coming Wave, fears that his own companies, DeepMind and Inflection AI, may be part of the unintended rise of a new kind of superpower.

He envisions a future where “anyone with graduate-level training in biology or an enthusiasm for self-directed online learning” could acquire a DNA synthesizer and “create novel pathogens far more transmissible and lethal than anything found in nature.” Other bad actors could go beyond such “garage tinkerers,” weaponizing AI technologies in ways we literally cannot imagine.

He says a tsunami of AI applications will wipe our preconceptions — and our safety and security — off the map. In fact, “garage tinkerers” and bad actors might be better poised to make AI breakthroughs than bureaucracies wading through due diligence and legal constraints. Suleyman fears a colossal transfer of power, a rapid “hyper-evolution” of AI’s capabilities, an endless acceleration of AI’s applications toward “omni-use,” and asks, when all is said and done, “Will humans be in the loop?”

“For all of history technology has been ‘just’ a tool,” Suleyman said. “But what if the tool comes to life?” Then we will face the “gorilla problem”: just as weaker humans put the more powerful animal in zoos, AI “could mean humanity will no longer be at the top of the food chain.”

Descent Into Egypt

When I asked Dr. Charles Sprouse of the School of Engineering at Benedictine College in Kansas, where I work, about AI fears, he gave me a remarkable list that proves that fear, like politics, is both global and local.

Yes, we fear AI weapons, drones and robots that hunt and kill with superhuman force and prowess. But we also fear autonomous vehicles: What decisions will they make — and what malfunctions will change those decisions?

We also fear “fake news” on steroids, as smart programmers with questionable agendas lead masses astray with politically charged deep fakes. But we should also fear fake communications: Once I start using Metaverse capabilities to chat in Virtual Reality with my wife, how can I be sure it’s really my wife I’m talking to?

We fear government surveillance by machines that can recognize our face, our body, and our gait and monitor what we’re doing in our backyards. But we should also fear corporate AIs that know what we like to eat, and in what quantities; where we hang out, and how often; and what we’re thinking about when we’re online.

Many of us fear technology taking our jobs: Writers, legal professionals, and educators fear ChatGPT — but software designers, drug researchers, and lab technicians have equally powerful tools to fear.

All of these seem, at first, like very new fears, different in kind from the old ones.

So we fear AI the monster or AI the master — a Terminator that doesn’t, and can’t, care what gets in its way, or a Matrix that enslaves us for its purposes. AI could take our autonomy, our freedom, our chosen livelihood, and our privacy — or it could wipe out civilization as we know it.

But is this really a new kind of fear?

In fact, AI feels more like a descent back to the slave masters of Egypt, back to the days when “a new king arose over Egypt, who did not know Joseph. He said to his people, ‘Look! The Israelite people are more numerous and more powerful than we. Come, let us deal shrewdly with them’” (Ex 1:8-10). And while we fear robot drones, if you recall your Old Testament, whole tribes were wiped off the map with impunity in those days, too.

It would be the height of irony if all of our ingenuity, divorced from God, has done nothing but build a new and greater slave master: An artificial Pharaoh enlisting us in a vast exercise of building pyramidal monuments to Mammon, in a project none of us can envision because its scope is too much for one human mind to take in.

But maybe that’s not the real fear after all.

The Real Monster Is Loneliness

I started out by saying I don’t fear Artificial Intelligence, and I really don’t. Not the way I’ve described it, anyway. One thing I have learned in a lifetime of new technologies is that we always fear the wrong thing.

Maybe what we should truly fear is what Sigmund Freud describes in Civilization and Its Discontents. He wrote:

“If there had been no railway to conquer distances, my child would never have left his native town and I should need no telephone to hear his voice; if traveling across the ocean by ship had not been introduced, my friend would not have embarked on his sea-voyage and I should not need a cable to relieve my anxiety about him.”

We feared dire consequences from each of these technologies — everything except the far worse consequence each brought us: loneliness.

And that’s what we should fear most from AI: A world where we are further separated from what most makes us human, each other.

Tom Hoopes

Tom Hoopes, author of The Rosary of Saint John Paul II and The Fatima Family Handbook, is writer in residence at Benedictine College in Kansas and hosts The Extraordinary Story podcast about the life of Christ. His book What Pope Francis Really Said is now available on Audible. A former reporter in the Washington, D.C., area, Hoopes served as press secretary of the U.S. House Ways & Means Committee Chairman and spent 10 years as executive editor of the National Catholic Register newspaper and Faith & Family magazine. His work frequently appears in Catholic publications such as Aleteia.org and the Register. He and his wife, April, have nine children and live in Atchison, Kansas.