Artificial Intelligence (AI) is a phrase referring to a program or computer system that thinks, reasons, and learns in the same way as a human being. This has long been a theme of science fiction—the droids from the Star Wars films are thinking, reasoning, emoting machines. Fictional AI often takes on a villain’s role, such as HAL 9000 from 2001: A Space Odyssey, the machines of The Matrix, or the character Ultron from Marvel Comics. The concept of advanced artificial intelligence is related to the idea of a technological singularity, the point at which manmade creations overtake humans in terms of reasoning ability, problem-solving, and self-development. Despite hopes and fears to the contrary, there is no reason to think that true artificial intelligence is possible, let alone actual.
Many who see artificial intelligence on the horizon point to the development of machines and other technology. They note how industrial robots are faster and/or stronger than people. Calculators can perform operations with perfect accuracy and in much less time than a human being. Computers, of course, can store, recall, and manipulate data far more efficiently than can a person. AI proponents often point to computers that have beaten human opponents in contests such as chess or the TV game show Jeopardy. Following this type of reasoning, some suggest that technology may advance such that machines will be able to think as well as or better than the average person.
An analogy involving animals and people shows how such reasoning falls short. When someone says, “Machines and AI will be better or smarter than human beings,” it’s like saying, “Animals are better than humans. Cheetahs are faster. Elephants are bigger. Birds are more agile.” The problem, of course, is that those are all separate animals, and each is “better” only in its own separate category. A single AI program might be “better” at chess or cooking or even making music. But for AI to be legitimately as smart as or smarter than people, a single program would need to excel in all of those things at once.
Key to understanding the idea of artificial intelligence is carefully defining terms such as intelligence; popular depictions of AI tend to prefer variations of smart or smarter. Computers often appear to be intelligent when, in fact, they are performing extremely low-level operations extremely quickly. They aren’t actually smart; they are just capable of doing certain tasks in less time than people can. There are some tasks they cannot do at all. And if intelligence is defined in a way that excludes concepts such as morality, emotion, empathy, humor, relationship, and so forth, then the phrase artificial intelligence loses much of its meaning.
This is a particularly important point to keep in mind when discussing strategy games like chess or Go, in which computers often defeat even the greatest human masters. This, some say, is proof that computers can be smarter than people and perhaps already are. And yet the program that bests a human in a strategy game is designed specifically for playing that game. It might win, but the human can then leave the room and do many, many other things that the machine cannot do. The software that allows the machine to succeed in a trivia game can’t tell you how to tie your shoes. Or make a sandwich. Or draw a flower. Or write a limerick. Nor can it comfort a sick child, pretend to be a character in a play, or watch a movie and later explain the plot to someone else. The truth is that those purpose-built AI computers are markedly less intelligent than the humans they defeated in narrow contests.
Further, a match against even the most advanced computer still pits human intelligence against human intelligence. On one side is a single person; on the other is a machine mechanically drawing on the collective intelligence of the many people who designed, programmed, and trained it. A computer that beats people at chess or checkers or Jeopardy is not “smarter” than the people it beats. It’s just better at getting certain results according to the rules of that particular game.
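To make the point concrete, here is a minimal sketch, in Python, of the kind of rule-bound search a game-playing program performs, using tic-tac-toe instead of chess and the standard minimax technique in a vastly simplified form. This is only an illustration of the general idea, not the code of any actual chess or Jeopardy system; every rule the program “knows” was written down in advance by human programmers, and nothing in it reaches beyond the game.

```python
# A minimal, hypothetical sketch (not any real chess or Jeopardy engine):
# a tic-tac-toe player that "wins" purely by exhaustively searching moves
# according to rules its human programmers wrote down. It has no idea what
# a game, an opponent, or a sandwich is.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position by searching every legal continuation."""
    won = winner(board)
    if won == 'X':
        return 1    # X has won
    if won == 'O':
        return -1   # O has won
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0    # draw
    scores = []
    for i in moves:
        board[i] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = ' '
    return max(scores) if player == 'X' else min(scores)

def best_move(board, player='X'):
    """Choose the move with the best minimax score for `player`."""
    def score(i):
        board[i] = player
        s = minimax(board, 'O' if player == 'X' else 'X')
        board[i] = ' '
        return s
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    return max(moves, key=score) if player == 'X' else min(moves, key=score)

# The program plays these nine squares flawlessly and is helpless at
# literally everything else.
board = ['X', 'X', ' ', 'O', 'O', ' ', ' ', ' ', ' ']
print(best_move(board))  # -> 2 (completes the top row for X)
```

Within its nine squares the sketch never loses, yet it embodies nothing more than its programmers’ rules turned into a mechanical search.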
The phrase technological singularity refers to the theoretical moment when artificial intelligence reaches a tipping point, after which it improves itself without human input and beyond human ability. In some visions, the technological singularity is anticipated as a boon to mankind, with all humanity benefitting from the discoveries of a vastly superior intellect. In other visions (most of them, in fact), the singularity is feared as the beginning of the downfall of the human race, as depicted in movies such as The Terminator and its sequels. A staple of science fiction is a computer system that evolves and learns so quickly that it outruns the human mind and eventually dominates the world.
The concept of a technological singularity also assumes that processing power will advance forever, which is contrary to what we know about the natural laws of the universe. The growth of computing technology eventually runs into the limits of physics; physicists and computer scientists agree that there are hard limits on how small, how fast, and how efficiently computing hardware can operate. Since the complexity required to simulate a human mind lies far beyond even theoretical designs, there is no objective reason to say that true artificial intelligence can exist, let alone that it will exist.
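One frequently cited example of such a physical ceiling, offered here only as an illustration (it is standard physics, though not named above), is Landauer’s principle: erasing a single bit of information at temperature $T$ dissipates at least

$$E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\ \text{joules at room temperature } (T \approx 300\ \text{K}),$$

where $k_B$ is Boltzmann’s constant. A machine cannot simply compute faster and faster, discarding more and more information, without paying a real price in energy and heat.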
On a more abstract level, arguments from math and logic also suggest that AI can never replace the human mind. Gödel’s incompleteness theorems show that any consistent formal system powerful enough to express arithmetic contains true statements it can never prove from within; thinkers such as J. R. Lucas and Roger Penrose have argued that this marks a gap between rule-following machines and human understanding. To make an AI better than a human brain, we’d need to fully understand and then surpass ourselves, which is logically contradictory.
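For reference, and only as a standard textbook formulation rather than anything unique to AI research, Gödel’s first incompleteness theorem can be stated roughly as follows:

$$\text{If } F \text{ is a consistent, effectively axiomatized theory strong enough for arithmetic, then there is a sentence } G_F \text{ such that } F \nvdash G_F, \text{ even though } G_F \text{ is true.}$$

In other words, any rule-based system rich enough to do arithmetic contains true statements it can express but can never prove.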
Spiritually, we understand our own limits because, being creations of God (Genesis 1:27), we can’t outdo God’s creative power (Isaiah 55:8–9). Also, God’s depiction of the future does not seem to include any kind of technological singularity (see the book of Revelation).
Regardless of the limitations they face, researchers continue their attempts to develop artificial intelligence, and large sums are being invested in programs that promise to advance work on virtual assistants (such as Alexa or Cortana), deep learning platforms, and biometrics. Not surprisingly, there is already a religion, called Way of the Future, started by a former Google engineer, that plans to worship AI and look to it as mankind’s caretaker and guide. Such futile imaginings are nothing new; humanity has often been guilty of worshiping the work of its own hands. The Way of the Future is just a modern version of carving an idol.
In short, AI might be able to perform certain limited tasks better than a person can, but there is no logical, philosophical, or biblical reason to think it can be “better” in any meaningful sense. AI might emulate the patterns human beings use when we think, but it can never replace the prowess, flexibility, and creativity of the human mind. Despite fears and speculations, the weight of science, observation, and Scripture refutes the possibility of true artificial intelligence or a technological singularity. The concept of AI makes for entertaining fiction, but not much else.