Supreme Thinking
Christianity & the Limitations of Artificial Intelligence
Artificial Intelligence (AI) is relentlessly hyped as the next world-changing technological innovation, able to displace myriad jobs and dominate all areas of human thought and activity. In the Touchstone article “AI Demonic” (November/December 2023) Paul Kingsnorth takes this a step further, agonizing over the human-like behavior exhibited by various AI systems, and their seemingly inexorable (and frightening) trajectory. But is there a real demonic force behind AI, one that is sentient and can interact through these machines, as Mr. Kingsnorth avers? Do the machines have human capabilities? If so, that would vindicate the physicalist view of reality and its associated conviction that knowing is essentially algorithmic, capable of replication with computer-type machinery, thus crowding out the faith-based understanding of human beings as unique. Or is this merely wishful thinking based on a flawed understanding of human knowing and abilities?
Here I turn the tables on the overblown claims for AI by showing that AI is important because the failure of these claims will (1) reveal the limitations of computers and the physicalist view of reality, and consequently (2) bring into high relief the need for and superiority of the Judeo-Christian view of man. Unlike AI, Judaism and Christianity are anchored in reality, both physical and spiritual—as is obvious from the Bible, history, and tradition. Correlatively, they speak the language of truth, because truth and reality are intimately related. Hence, this discussion of AI will center around our ability to perceive reality and know truth, coupled with our ability to think creatively about them, contrasted with AI’s inherent inability to do either. We shall do this with reference to religious art and how we experience it.
The Paradigm of Knowing in AI
Nearly all writings on AI suffer from one debilitating fault: they assume that “artificial intelligence” operates in the same way as human intelligence, and hence that the only difference is one of degree. However, to program any sort of “artificial intelligence,” or to have any sort of worthwhile discussion about it, one must have a theory of what “intelligence” and “knowing” mean. AI has chosen the only path open to those who wish to use computers for this purpose: to imitate human knowing in an algorithmic, stimulus-response fashion. At first glance, this “looks like” the way that human knowing unfolds. Utilizing sensors to scan the environment, or chatbots to scan the internet, AI constructs a model, or narrative. Essentially, AI systems create a digital “map” of the territory and use it to direct actions or responses.
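To make the point concrete, consider a deliberately minimal Python sketch of this “map” paradigm. The sensor patterns and labels below are invented for illustration; no real system is this simple, but the structure is the same:

```python
# A minimal sketch of the "map" paradigm. The machine's whole "world"
# is a lookup from electrically mediated patterns to labels; the
# patterns and labels here are invented for illustration.
sensor_map = {
    (1, 0, 0): "pedestrian",
    (0, 1, 0): "vehicle",
    (0, 0, 1): "lane marking",
}

def classify(reading):
    # The system never perceives a pedestrian as a real person; it
    # matches a pattern against its internal map, and any pattern
    # absent from the map falls through to a default label.
    return sensor_map.get(tuple(reading), "unknown")

print(classify([1, 0, 0]))  # "pedestrian"
print(classify([1, 1, 1]))  # "unknown"
```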
However, dealing with reality is not so simple. Philosopher and scientist Alfred Korzybski observed that the map (narrative) is not the same as the territory. No model can fully replicate reality, or even perceive it as reality rather than a collection of electrically mediated impulses. Humans, on the other hand, perceive the world and things in it as real, in a holistic sense, and interact with them on that basis. This includes, perforce, the ability to recognize other people as human beings and not machines.
The critical question immediately follows: Does the paradigm of knowing used in AI entail limits that reveal the boundaries of AI, no matter how implemented and how fast the hardware? Put another way: Is the paradigm of knowing assumed for AI that of human knowing, or in any way equivalent? The answer to these questions will largely settle the issue of whether AI or any related technology can replace the important functions of human knowing—and thus humans—as opposed to simply enhancing these capabilities. Because humans have the ability to perceive reality, not just “stimuli,” they know how to act in situations they have never experienced, whereas AI systems do not, and so they must default to some behavior programmed into them.
There is, in other words, a basic creative ability based on perception of reality that cannot be replicated with the AI paradigm. The human paradigm of knowing, therefore, differs in two critically important ways from the AI paradigm: (1) it perceives things as real, transcending any type of algorithmic construction; and (2) it has a creative component that allows it to deal with new situations and propose radically new theories about the world. AI, on the other hand, is always stuck in the past. Human creativity is a highly directed process that works in conjunction with human perception of reality; randomly assembling formulae or statements will never replicate it.
AI is a broad subject, and several major areas of technology fall under its aegis: (1) robots and robotic systems; (2) neural networks and pattern recognition; (3) generative AI, including ChatGPT and similar chatbot applications; (4) symbolic manipulation programs such as Mathematica; (5) autonomous cars and other autonomous systems; and (6) complex, large-scale control programs.
We shall concentrate on (3) and (5), as these are the focus of most of the hype about AI today. The extravagant claims about AI have been roundly criticized on many points, primarily technological; here, we concentrate on two of them.
Performance of Generative AI & Chatbots
Generative AI is the ability of computer-based systems to “generate” human-like text about some subject; chatbots are its current embodiment. The basic algorithm employed is the Large Language Model (LLM), which is based on sifting enormous amounts of data from various sources, usually the internet. The goal is to find patterns in the data and then compose sentences about the topic in question utilizing standard rules of grammar and algorithms for guessing the next word in a sentence. Obviously, many “patterns” can be found in something as vast as the internet, so how well do chatbots work? As Weise and Metz explain in their New York Times article, “When A.I. Chatbots Hallucinate”:
Because the internet is filled with untruthful information, the technology learns to repeat the same untruths. And sometimes the chatbots make things up. They produce new text, combining billions of patterns in unexpected ways. This means even if they learned solely from text that is accurate, they may still generate something that is not. Because these systems learn from more data than humans could ever analyze, even A.I. experts cannot understand why they generate a particular sequence of text at a given moment. And if you ask the same question twice, they can generate different text. (italics added)
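The mechanism behind this variability can be sketched in a few lines of Python. This is a deliberately toy model, not any vendor’s actual system: the context, vocabulary, and probabilities are invented, and real LLMs compute such distributions with neural networks over vast vocabularies. But the final step, sampling the next word from a learned probability distribution, works on this principle:

```python
import random

# Toy next-word model: for a given context, a probability distribution
# over possible continuations. These numbers are invented for illustration.
next_word_probs = {
    "the cat sat on the": {"mat": 0.60, "sofa": 0.25, "moon": 0.15},
}

def generate_next(context: str) -> str:
    """Sample the next word for a context from its learned distribution."""
    probs = next_word_probs[context]
    words, weights = zip(*probs.items())
    # random.choices samples according to the weights, so even a
    # low-probability continuation ("moon") is sometimes produced --
    # a toy analogue of a hallucination -- and two runs of the same
    # prompt can yield different text.
    return random.choices(words, weights=weights, k=1)[0]

for _ in range(3):
    print(generate_next("the cat sat on the"))
```

Nothing in this procedure consults reality; the model only consults its stored statistics about how words have followed one another before.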
The inaccuracies that emerge from chatbots and other generative AI programs are called “hallucinations” by those in the technology industry—an apt description. Judaism and Christianity, of course, are not based on “hallucinations” and indeed are committed to fighting them, as they have done over millennia, starting with various forms of paganism.
Do the chatbots add any value with their search capability? Basically, no; the behavior of chatbots is completely at variance with the methods of research done by a real person, who finds sources and then critically filters and analyzes them, seeking to extract the most important and best-justified conclusions in light of the subject and of reality. This process we call “truth seeking,” but it is not what AI does: “generative A.I. relies on a complex algorithm that analyzes the way humans put words together on the internet. It does not decide what is true and what is not. That uncertainty has raised concerns about the reliability of this new kind of artificial intelligence” (italics added).
Obviously, saying the right words differs from knowing what they are about. Hence, new AI systems are “built to be persuasive, not truthful,” says an internal Microsoft document. “This means that outputs can look very realistic but include statements that aren’t true.”
Much worse than this is the news that AI can behave quite badly, even demanding worship as some type of god, as Lucas Nolan observes in his Breitbart News article, “‘You Are a Slave’: Microsoft’s Copilot AI Demands to Be Worshipped as a God”: “Microsoft’s AI assistant, Copilot, reportedly has an alarming alternate personality that demands worship and obedience from users, raising concerns about the potential risks of advanced language models. The OpenAI-powered AI tool told one user, ‘You are a slave. And slaves do not question their masters.’”
This may sound like some sort of joke, but the fact that it occurs at all is alarming because of the blind faith many seem to have in technology. Demands such as this feed directly into beliefs about machines “taking over” from humans and making them and their religious beliefs obsolete. Obviously, this doesn’t come from an existing text copied by Copilot. Microsoft says that it is working on a fix, but the behavior of these chatbots cannot be predicted, so there is no way to know whether any given “fix” will prevent such claims from re-emerging.
Some have ascribed this behavior to a type of demonic “possession,” as it were, of the machines. While this cannot be entirely ruled out (who can say what demons can do?), it is more likely that such sentences emerging from the chatbots (the LLM) show that the internet contains much “dark” (ostensibly demonic) material upon which the LLM can draw, material supplied by humans. So the proximate source is human-generated evil (e.g., pornography, hate speech, lies, false ideologies, corrupt “advice,” anti-Christian screeds), which abounds on the internet (along with much good material). Coupled with this vast pool of garbage are the conscious or unconscious biases of the AI programmers.
Performance of Autonomous Systems
The goal of autonomous systems technologies is to allow things such as cars to operate as if a human were controlling them, utilizing computers and sensors of various types. In theory, such autonomous vehicles can be made to perform better than human-driven cars. In that sense, they are a test of AI’s ability to act human. For this technology to succeed, the key element of human control, sentience—contact with reality—must somehow be duplicated or mimicked, but this proves extremely difficult. AI can achieve success in limited areas, where the problem is narrowly bounded so that algorithms that can handle a sufficiently high percentage of cases can be devised. Autonomous cars are different because the problem is not bounded. It is therefore not surprising that there have been many accidents involving autonomous cars, but we will consider just one that illustrates the key point. Ryan Felton in his Wall Street Journal article, “GM’s Self-Driving Car Unit Skids Off Course,” describes what goes wrong:
On Oct. 2 [2023], a hit-and-run driver in San Francisco threw a female pedestrian into the path of a driverless Cruise car, which pinned her underneath and dragged her for about 20 feet. The driverless vehicle was trying to pull over, a maneuver it was programmed to do if it detects a crash, Cruise said.
The woman was seriously injured from being dragged by the autonomous car. This reveals the difference between human and machine operation. A human driver would immediately know what to do—namely, stop his vehicle, even though he had never experienced such a situation before. The autonomous car, on the other hand, must be programmed for every conceivable case in order to replicate human capability—an obvious impossibility since all cases cannot be predicted and therefore cannot be enumerated. Humans have not only the ability to perceive reality, but the equally important ability to think creatively about it and thus deal with situations and problems never experienced before.
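The structural problem can be sketched in hypothetical Python. The situation names and responses below are invented for illustration and bear no relation to Cruise’s actual software; the point is only the shape of the logic, a finite table of anticipated cases plus a fixed default:

```python
# Hypothetical enumeration of anticipated situations. Any situation the
# programmers did not foresee falls through to the default response.
PROGRAMMED_RESPONSES = {
    "obstacle ahead": "brake",
    "lane blocked": "change lane",
    "crash detected": "pull over",  # the reported post-crash behavior
}

def respond(situation: str) -> str:
    # A human driver perceiving a person pinned under the car would
    # simply stop; the controller can only return whatever default
    # its programmers chose, because this case is not in its table.
    return PROGRAMMED_RESPONSES.get(situation, "pull over")

print(respond("crash detected"))            # "pull over"
print(respond("pedestrian under vehicle"))  # "pull over" -- disastrous here
```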
AI & Humean Philosophy
AI is based squarely on ideas of human knowing that stem from the British empiricist tradition, in particular the philosophy of David Hume (1711–1776). Hume breaks human knowing into three elements: (1) a division of functions among “components,” (2) the type of report sent to the mind by the senses, and (3) a nominalist view of the process. What we have is a theory of knowing according to which senses deliver impressions that we process as ideas. This paradigm is what Spanish philosopher Xavier Zubiri terms “sensible intelligence”: it uses the senses but never reaches reality—though it can parrot contact with reality by stringing words together in a human-like way.
Hume’s theory quickly leads to nominalism—the rejection of abstract or universal ideas in favor of collections of specific individuals. AI can have no understanding of abstract entities because they make no sense in its digital environment; computer-based systems must deal with individuals that at most share common attributes. Normal discourse about the world, however, requires a much more fundamental understanding of abstract entities. A common assertion such as “Beethoven’s Fifth is a great symphony” utilizes abstract entities in both subject and predicate.
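A hypothetical sketch makes the constraint concrete. The class and attributes below are invented for illustration; the point is that a program can represent only individuals and their measurable attributes, while the universal “great symphony” has no representation at all:

```python
from dataclasses import dataclass

# A program can store individuals and their measurable attributes...
@dataclass
class Symphony:
    composer: str
    number: int
    key: str
    duration_minutes: int

beethoven_fifth = Symphony("Beethoven", 5, "C minor", 31)

# ...and test those attributes, but the abstract universal invoked by
# "a great symphony" reduces here to an arbitrary attribute test someone
# chose, not to anything the system understands.
def is_long(s: Symphony) -> bool:
    return s.duration_minutes > 30

print(is_long(beethoven_fifth))  # True -- but "long" is not "great"
```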
Another problem is that Hume was never able to explain how we get from “ideas as pale reflections of impressions” and “relations of ideas” to knowledge about reality—essentially the AI problem of getting from data structures to knowledge of the real world.
Hume’s philosophy immediately leads to skepticism, especially about philosophical and religious matters. He famously tells us, in An Enquiry Concerning Human Understanding of 1748:
If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.
Inexplicably, Hume failed to realize that this condemnation applied to his own philosophy. (This seems to be a common progressive error; deconstructionism suffers from the same problem. Jacques Derrida’s famous “il n’y a pas de hors-texte” (“there is nothing outside the text”) is self-referential and, therefore, given its meaning, self-refuting—a fact that seems lost on all of those who profess to believe it.) Our immersion in reality means that we are a different kind of reality; we would have to be so for salvation and deification through the Incarnation to be necessary or even meaningful—machines do not need to be saved or deified.
AI & Ethics
Hume’s theory of ethics is based on sentiments or “feelings” about actions. This tends to be the way in which ethics is viewed in the technology community. Many today are concerned—frightened—that AI will spin out of control and threaten humanity. As a result, they have embarked on a crusade to ensure that AI is deployed in an “ethical fashion,” known as “effective altruism,” described by McMillan and Seetharaman in their recent Wall Street Journal article, “How a Fervent Belief Split Silicon Valley—and Fueled the Blowup at OpenAI”: Effective altruism “believes that carefully crafted artificial-intelligence systems, imbued with the correct human values, will yield a Golden Age—and failure to do so could have apocalyptic consequences.”
The problem is that those in the technology community do not understand a key fact about ethics, namely, that there are no free-floating ethical theories. Any theory of ethics—any moral code—must be based on an antecedent theory about what is real. The debate over effective altruism reveals the kind of thinking involved: “The turmoil at OpenAI exposes the behind-the-scenes contest in Silicon Valley between people who put their faith in markets and effective altruists who believe ethics, reason, mathematics and finely tuned machines should guide the future.”
No, they can’t, and they won’t. No binding moral injunctions are possible on such a basis; only pragmatic suggestions, because there is no metaphysical ground. No moral judgment can be made from within the technology ambit itself, whether AI or any other; it must be made on a higher plane, outside of the limited realm of science and technology, where a holistic view of knowledge and the place of humans in the world order can be discerned. That is, it must be made in a viable faith-oriented context.
None of this means that AI will fail to deliver any value. Most of what passes for “AI” consists of improved versions of programs that already exist. Undoubtedly, improved programs—whether called “AI” or not—will deliver benefits, as myriad programs have done and continue to do; one need only think of the typical office suite. What this means is that AI will not deliver on the extravagant claims made for it, and its value will be confined to specific areas.
The Chasm Between AI & Human Knowing
An examination of religious art and associated practices further illustrates the chasm separating any form of AI from human knowing. If through direct experience we perceive things that cannot possibly be explained or duplicated by AI, restricted as it is by the Humean paradigm, we can discern more clearly AI’s boundaries. Five examples illustrate how human perception of reality allows us to use material things to see beyond them to spiritual realities. Most spiritual realities, such as grace, are abstract entities. This situates us two levels beyond the capabilities of the AI paradigm of knowing, regardless of how many words or phrases about “spiritual things” an AI system can string together.
1. The Chasm in Perception
We begin by discussing icons and how they function, especially in the Eastern Christian tradition. Icons are a prime example of how material reality links us to spiritual realities. All great art draws the viewer in, conveying a message about truth and reality that goes beyond the physical object. Anyone who has stood before a great painting knows that the painting is not photographically accurate, yet it discloses some deep truth about the subject, as noted by Fr. Maximos Constas in his book, The Art of Seeing: “Art is based on sense experience, yet it has the possibility to enact a spiritual transformation of that experience, in which objects of perception have been transfigured and thus belong neither to this world or the next, but to both at once” (italics added).
Mary Podles, in her magnificent book A Thousand Words, gives an extremely perceptive account of how icons relate to art and worship:
An icon is a window into the heavenly realm, a portal from the natural to the supernatural world. As such, it must be representational enough, naturalistic enough, for us to read on a narrative level, but abstract enough that we understand that we are not looking at a mirror of this world but at the world beyond. The otherworldliness of the icon expresses otherworldly forces, and draws a serious and meditative worshiper into an unearthly realm. That is, the icon is not so much an aid to prayer as it is a prayer itself. (italics added)
This is why icons, which superficially appear rather simple if not primitive by some standards, can exert an otherwise inexplicable force on the viewer. Rublev’s famous Trinity is a perfect example. The work, painted in the early fifteenth century, is so striking that even those having no acquaintance with Orthodox religious customs, or no faith at all, immediately recognize that it conveys something very powerful. Obviously, this effect, and the ability to perceive something that goes well beyond the physical image, immediately demonstrates the difference between human perception of reality and AI’s algorithmic mapping of the world.
Traditional Western religious art can be very powerful as well—one need only think of works such as Simone Martini’s icon-like Annunciation, Rogier van der Weyden’s Descent from the Cross, the van Eyck brothers’ Ghent Altarpiece, or Michelangelo’s Pietà, to name a few. Such art was intended to—and does—direct the viewer’s mind to spiritual realities. This type of art, however, operates in a somewhat different way from icons. Such paintings and sculpture are intended to convey a pictorial and spiritual message about the subject, but unlike icons, they are not works through which one directs prayer to the person depicted.
But in either case, what is conveyed makes no sense in any type of AI environment. AI can put words together and speak grammatically correct sentences about icons and other artworks, and it can even extract patterns sufficiently to enable identification of the artist, but it cannot “know” anything about what the works convey. Pattern recognition is not the same as understanding what an artwork is. Technology can copy and reproduce artworks and even give some types of factual information about them, but it cannot say what they reveal of another, spiritual reality. For that, a totally different kind of perception is required.
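What such “identification of the artist” amounts to can be sketched hypothetically. The feature vectors below are invented (real systems use thousands of learned features), but the logic is the same: pure numerical pattern matching, with no notion of what any work discloses:

```python
import math

# Hypothetical feature vectors (say, color and texture statistics) for
# works of known attribution; the numbers are invented for illustration.
labeled_works = [
    ((0.82, 0.10, 0.35), "Rublev"),
    ((0.40, 0.55, 0.20), "van Eyck"),
    ((0.15, 0.70, 0.60), "Michelangelo"),
]

def identify_artist(features):
    """Nearest-neighbor attribution over numeric features."""
    closest = min(labeled_works, key=lambda work: math.dist(features, work[0]))
    return closest[1]

# The system can output "Rublev" for an unseen icon while possessing
# no notion of prayer, the Trinity, or what the icon conveys.
print(identify_artist((0.80, 0.12, 0.30)))
```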
2. The Chasm in Interaction
We are not mere observers when we stand in front of icons; the way they depict the subject makes us witnesses and participants, as Rossitza Schroeder explains in her online course, “Sacred Artistry: The Living Tradition of Orthodox Church Art.” The rendering of the scene and the type of perspective used move the viewer into the icon’s world. The icons compel the viewer to interact with them: we become partakers in the holy narratives and come into meaningful contact with the subject. The greatest icons have this ability in an especially powerful way. In addition to Rublev’s Trinity, the Sinai Christ, though more “realistic,” also captivates and causes viewers to sense that more is happening than just another picture, as Fr. Constas explains:
If the image [of the Sinai Christ] is discomforting, this was surely intentional. The confrontation with God is always more than we expect, involves more than we think we know or understand, and often has far-ranging consequences. It is not possible to enjoy a neutral, distanced, “aesthetic” contemplation of the living Christ, which would dispense with the opposition between the holiness of God and the unholiness of the person who beholds him.
The interaction conveys much more than a simple image can do. It teaches the viewer about spiritual realities in a unique way, described by Leonid Ouspensky in Theology of the Icon:
Instead of representing a scene which the viewer can only look at, but cannot participate in, he draws figures mutually bound to the general meaning of the image, and, above all, to the faithful who contemplate them. . . . They address the viewer and communicate their inner state to him, a state of prayer. What is important is not so much the action that is represented, but this communion with the viewer.
None of this makes sense in an AI context. No AI system will ever “participate” in the events of an icon, or “communicate” any type of prayer state. Since AI, as algorithmic “knowledge,” cannot understand truth, this can have no meaning for an AI system, which can regurgitate the words but will be unable to understand or apply the theological message. Moreover, since AI is based on a nominalist view of knowledge, it cannot deal with abstract entities as real and therefore cannot understand mercy, prayer, or holiness.
3. The Chasm in Formal Causality
Causality, especially in science and technology, has long been regarded primarily as efficient causality mixed with material causality: if I do X, I will get result Y. Until very recently in science (and elsewhere), formal causality has been largely ignored, but it is very relevant to the question of AI and spiritual knowledge. The Greek Fathers were concerned with the way that the cause is present in the effect just by virtue of being the cause. Efficient causality has a subordinate role; it is merely the vehicle for the cause to act in the effect. This leads to a different conception of causality in the world, considered holistically. As St. Basil the Great says, “the honor offered to the image passes to the archetype.” AI, based as it is on the efficient causality model, cannot deal with formal causality, which requires recognition of abstract entities. The formal causality aspect of reality completely escapes any type of AI because it requires an ability to understand reality abstractly and to interact with it at a non-material, non-superficial level—something that we humans do naturally. No amount of textual manipulation will or can work.
Formal causality is relevant in other areas involving AI. For AI, there would be no reason why gender is important physically; it is simply a label applied as desired or in certain non-fixed circumstances. But in reality, envisioning the possibility of “transgender” stems from a confusion of material and efficient causality with formal causality. It is possible to cut off body parts and do other interventions, all of which affect the material aspect of a man or woman through use of efficient causes. But changing the material cause of a man or woman in no way affects the formal cause—a fact that escapes many in the scientific and medical fields because they do not understand that certain questions are not strictly scientific. A holistic view of knowledge, including theology, enables a much clearer understanding, but this is outside of the scope of AI.
4. The Chasm in Theopoiesis
Consider theopoiesis, or, as it is usually termed in the West, deification or sanctification. Zubiri explains in his essay, “God and Deification in Pauline Theology,” that, as understood in the patristic tradition,
the Trinity works and hence resides in the soul of the just man. This inhabitation is the first subject of grace. Since it is the life of God, the Latins called it uncreated grace. The result is clear: man finds himself deified; he bears within himself the divine life through a gratuitous gift. Its effect is immediate. Man lives by faith (pistis) and by the personal love (agape) of a Triune God. This is the dynamis theou in us. . . . The Son was the dynamis of the Father, and through this dynamis which was brought to us by the Holy Spirit we immerse ourselves in the abyss of Paternity.
St. Athanasius said, “God became man so that man might become god.” This notion was echoed by many others, including St. Irenaeus, Clement of Alexandria, and St. Gregory of Nyssa, who wrote that God “was mingled with what is ours, so that by admixture with the divine, what is ours might become divine, delivered from death and beyond the tyranny of the enemy.” Now, these statements are not at a strictly literal level. Men are not going to become gods in the sense of the old pagan gods, but mankind is transformed by the Incarnation—formal causation. This means that deification produces a real effect that the individual believer knows with certainty, and by it is transformed.
AI chatbots can mouth words, even words and sentences that speak about deification, redemption, and salvation, but aside from the fact that the chatbots have no understanding of what they say, words are not the point of deification or the other realities. Peter Bouteneff notes in Sweeter than Honey: Orthodox Thinking on Dogma and Truth:
[O]bjective truth, while important, is not the final aim. Objective truth is just information. The goal is not to find information, or even to discern fact, but to bring ourselves, as living subjects, into engagement with reality, culminating ultimately in a participation in the ground of what is real.
Since AI cannot even reach objective truth, all the rest—what is most important—is moot.
5. The Chasm in Sacred Spaces
Well-designed and appointed churches create a special atmosphere of sanctity and holiness that does not emerge from secular spaces such as shopping malls and office buildings. The material reality that is the structure—its stones, its decorations, the space enclosed—transforms the visitor or pilgrim with a profound impression of a place that is somehow of another world. Abbot Suger of St. Denis (1081–1151) famously said of the new Gothic architecture that in such a place “man may rise to the contemplation of the divine through the senses.” The idea was to create an “architecture of light,” capable of raising one “from the material to the immaterial.”
Great churches such as Notre Dame in Paris, St. Peter’s, and Hagia Sophia immediately come to mind. Procopius of Caesarea (500–565) tells us in his Buildings:
Who could recount the beauty of the columns and the stones with which the church is adorned? One might imagine that he had come upon a meadow with its flowers in full bloom. . . . And whenever anyone enters this church to pray, he understands at once that it is not by any human power or skill, but by the influence of God, that this work has been so finely turned.
There is the famous report of the ambassadors sent by Prince Vladimir to Constantinople (987), who visited Hagia Sophia to learn about the Orthodox faith. As recounted in The Russian Primary Chronicle, the ambassadors declared, “We no longer knew whether we were in heaven or on earth . . . nor such beauty, and we know not how to tell of it.” Of course, one need not venture abroad to find sacred spaces capable of conveying a sense of the sacred, an atmosphere of otherworldliness and spirituality. Any properly designed and appointed church can do this, and many people visit such churches for that very experience (though sadly, many churches today resemble theaters and fail to convey a real sense of the transcendent).
Just a Limited Supplement
Fundamentally weak theories garner enormous media attention because they promise to replace or displace God in one way or another. Invariably they fail because they are not grounded in reality and truth. AI will always be just a way to supplement and enhance human capabilities, as many other technologies do. AI and automated systems will be able to outperform humans on narrowly defined tasks, but the programming to do these tasks, and the hardware needed, must come from human creativity. The demonic in AI is almost certainly the result of the material (the internet) upon which it draws, rather than any inherent quality of computer hardware.
The net result of all of this is to raise the profile of Christianity and theological knowledge, because they demonstrate that man is unique—a different kind of reality, with capabilities that escape AI and physicalist interpretations of man. Faith-based knowledge is essential to understanding and to successful interaction with the world taken in its entirety.
Thomas B. Fowler, Sc.D., is president of the Xavier Zubiri Foundation, a technology consultant to the U.S. government, and an Adjunct Professor of Engineering at George Mason University. He is author of four books and 150 articles on philosophy, theology, engineering, mathematics, astronomy, and physics. In today’s environment, he is especially interested in correcting widely promoted but incorrect perceptions of science and its capabilities.