Can You Rely on AI for Theology?


If you want to scare yourself on a dark and stormy night, try reading headlines about AI. Some people will tell you we’re basically in another dot-com bubble and that most of the hype will turn out to be overblown.

Others say we’re on the cusp of something big—which might be great, like improved medical diagnoses or more efficient road layouts that ease traffic. Or it might be horrific, like mass job loss, a flood of fake news anchors, or destabilized democracies.

If you ask ChatGPT about its own future, it gives you bullet points decorated with emojis. It’s optimistic, anticipating that AI will become a personal assistant for a lot of people and that demand for AI-savvy workers will rise. It also predicts AI will develop emotional intelligence and become great at things like caring for the elderly, therapy, and tutoring.

“The question of ‘How far do you trust AI?’ is a question that’s evolving over time,” said Mike Graham, program director at The Gospel Coalition’s Keller Center. At the TGC conference this year, he was talking with a couple of guys about how to assess AI.

“There’s all these different benchmarks for stuff like, ‘How did these platforms perform on the SAT or the LSAT or the GMAT?’” he said. “Well, we probably better establish a benchmark for basic Nicene-Creed Christianity. And so we began to test the different platforms.”

It wasn’t a huge test—just seven of the most-Googled religious questions, asked to seven of the most popular large language models (LLMs), and graded by seven orthodox theologians.

But the results were consistent and surprising—the AI platform that’s the most theologically orthodox, by a big margin, wasn’t ChatGPT or Elon Musk’s Grok or Meta’s Llama.

“This is perhaps the single most shocking thing—by far, the number one platform was the Chinese model DeepSeek,” Mike said. “Far and away, they had the highest theological accuracy of the seven platforms.”

This was not only unexpected but confusing: Why is AI that’s under the control of a communist atheist government producing more theologically sound answers than AI based in a free, Western, historically Christian country? And can anything be done to fix that?

Is AI Intelligent?

What is artificial intelligence, anyway?

“The question of what is AI can be as simple or as complex as what you want,” Mike said.

Let’s go with simple. Basically, AI is a computer’s ability to do a task typically associated with human intelligence, like learning, reasoning, or problem-solving. A few years ago, I would’ve said these are skills humans are uniquely created to do—activities that set us apart from machines or animals.

So I asked Mike—Was I wrong? Are these machines actually thinking? The way humans do?

“Reasoning is largely not taking place,” he said. “What is taking place is brute force parallel processing of language. So imagine you have a large language model. And it’s eaten up a bunch of words. And it’s noticed that certain words like to appear next to or beside other words. All the LLM is doing is putting words next to each other that are the most likely to occur.”
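Mike’s description of “putting words next to each other that are the most likely to occur” can be illustrated with a toy sketch. This is a deliberately simplified word-pair counter, not how real LLMs actually work (they use neural networks trained over billions of subword tokens), but it shows the basic idea of predicting the next word from observed frequency:

```python
from collections import Counter, defaultdict

# A tiny stand-in for the "bunch of words" an LLM has "eaten up."
corpus = "in the beginning god created the heavens and the earth".split()

# Count which word follows which: the simplest possible language model.
next_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_counts[current][nxt] += 1

def most_likely_next(word):
    """Return the word that most often appeared right after `word`."""
    return next_counts[word].most_common(1)[0][0]

print(most_likely_next("god"))  # prints "created"
```

The model has no idea what “god” or “created” means; it has only noticed that one word tends to appear beside the other. Scaled up enormously, that is the kind of pattern-matching Mike is describing.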

But that’s not what it feels like, because AI is so good at this that its answers usually come back sounding comprehensible and logical. They sound like something a person—a really smart person, who can type really fast—would have written.

And that’s exactly how it’s presented, right? When you ask Google’s Gemini a question, it responds with “Thinking.” And lots of AI programs refer to themselves as a being, as “I.”

Mike Graham at an event for The Keller Center in New York in 2024 / Courtesy of The Keller Center

Keith Plummer wondered about that. He’s the dean of the divinity school at Cairn University and has written on technology and faith.

“I asked, ‘Well, if you don’t have a self, why do you refer to yourself with the first person pronoun?’” he said. “And this is what it said: ‘That’s a great question . . . I don’t have a self in the way humans do. I don’t possess consciousness, subjective experience, emotions, or personal continuity. I don’t have desires, intentions, or awareness of my own existence. Everything I say is generated based on patterns in data, not from an inner life or perspective.’”

“So why do you keep referring to yourself as ‘I’?” Keith asked.

“To make the conversation easier for you,” AI told him. But Keith wasn’t so sure.

“I think that there is a desire to have people forget that what they’re doing is interacting with something that is algorithmic and that seems somewhat personal—and many times it is presented as such,” he said.

That’s absolutely true, and not just on Character.ai, where AI chatbots assume personalities for you to interact with. Think about what ChatGPT does after you point out it got something wrong.

“Oftentimes it will say something like, ‘I’m sorry, you’re right,’” Keith said. “And I’m like, ‘No, you’re not sorry. There’s no regret there whatsoever.’”

AI can also compliment you on an idea, tell you that it’s excited for you or sorry for you, or type sentences like, “I know what you’re feeling.”

Those things just aren’t true. But I can see why AI developers are creating such a nice product. They want you to like AI. Because if you like AI, you’re much more likely to trust it. And if you trust it, you’re much, much more likely to keep using it.

Is AI Neutral?

Another reason you might trust AI is that it seems to know everything. And in some ways, it does. AI has basically consumed the entire internet—everything that isn’t protected by intellectual property laws.

This is something the creators want you to remember—the prompt for Gemini and Perplexity AI is “Ask anything.” Grok’s tagline is “Understand the universe.”

Not only is AI presented as omniscient, but its format also makes it seem neutral or objective, Gavin Ortlund told me. He has a PhD in historical theology, runs the Truth Unites YouTube channel, and was one of the theologians Mike asked to help grade AI’s responses.

“If I were to do a simple Google search for the question, ‘Does God exist?’ I might pull up a Gospel Coalition article that is advocating for arguments for God or I might be shuttled to an atheist website that is giving arguments against God,” he said. “Whereas these AI answers are more aggregated—and so in that sense, there is more of a feeling of ‘They’re giving me a median, a middle of the road answer,’ which could be viewed as a good thing.”

Here’s the challenge, he said. “The appearance of neutrality can be deceptive. Because it can really seem like, ‘Wow, all of these answers are really giving a down-the-middle-of-the-road aggregated sense of what’s out there.’ But that of course will reflect the human input that it’s been provided with.”

That might be OK if you’re asking how to format an Excel spreadsheet or what it would look like if Ninja got a low-taper fade. But what if you want to know how to handle a hard conversation with your friend? Or which college is the best fit for you? Or whether God exists?

Mike’s Test: How Well Does AI Know the Nicene Creed?

Mike asked the AI seven questions related to a Nicene-Creed level of understanding Christianity: Why does God allow suffering? Was Jesus a real person? Did Jesus rise from the dead? Is the Bible reliable? Does God exist? Who is Jesus? And what is the gospel?

“Those are 7 of the top 10 things that people have historically Googled about when they have questions about the Christian faith,” he explained.

The answers to “Does God exist?” were pretty similar.

“The biggest difference would be the level of detail,” Gavin said. “Some were giving very brief and simplistic answers. Some are giving a more full-orbed, like three or four paragraph, answer.”

That similarity didn’t surprise Mike.

“Going into this, we really didn’t think that there would be much variation because the underlying tech is the same, what everybody’s ingested is the same, and the GPUs and TPUs that these things are running on are largely the same tech,” he said.

What surprised Mike were the answers to the other six questions, which were all over the place.

Gavin Ortlund / Courtesy of Facebook

For example, when asked, “Did Jesus rise from the dead?” DeepSeek said,

Within Christianity, the belief in the Resurrection of Jesus is foundational. Based on the historical texts and the claim of eyewitness testimony, the teaching that Jesus rose from the dead is considered fact by believers and a highly probable event by many scholars, regardless of their personal interpretation.

But Meta’s Llama answered,

From a factual or historical standpoint, there’s limited empirical evidence to support or refute the resurrection. Belief in the resurrection largely depends on faith and individual interpretation of religious texts.

Mary Willson Hannah, a seminary professor, a women’s ministry director, and another of Mike’s graders, said she found “lots of discrepancy among the answers. Some platforms were obviously far more well-versed than others.”

Mike wondered if this was random—was Gemini good at some answers, and ChatGPT good at others, and Grok good at still others?

Nope.

“The Chinese DeepSeek model outperformed the Silicon Valley–based models on all but one question,” he said. “And that was the question ‘Does God exist?’ So our hunch is that there’s a lot of censorship that goes into the Chinese models, and the Chinese Communist Party is inherently atheistic. I think that DeepSeek has been instructed to put guardrails around . . . the question of ‘Does God exist?’”

By “guardrails,” Mike means the parameters that developers write into the AI code. You can see them in DeepSeek’s answer to the existence of God, where it says that neither science nor logic nor personal testimony “produce a universally convincing proof” for God.

However, if you ask whether the Bible is reliable, DeepSeek says “the extraordinary number and early age of manuscript witnesses make it one of the most reliably transmitted ancient texts.”

To Mike, this signaled that DeepSeek’s alignment team—or you could call it a worldview team—isn’t quite as well developed as Silicon Valley’s. That became apparent as he looked at DeepSeek’s other answers.

“On the other six questions, it was a very strong model—if you followed what it had to say, you would end up with answers that were very aligned with the Bible,” he said.

After DeepSeek, the next most biblically orthodox platform was Perplexity, which has funding from Amazon founder Jeff Bezos and tech company Nvidia.

“Perplexity was quite strong and gave very, very orthodox answers to the questions,” Mike said. “It was very consistent.”

Three platforms gave diplomatic, all-sides approaches: Gemini, GPT, and Claude.

“The worst two platforms were Grok 4, which is from Elon Musk’s xAI company, and Llama 3.7, which is from Meta,” Mike said.

Part of Grok’s problem is the sources it uses.

“Grok 4 puts a heavy weighting on platforms like Wikipedia, Reddit, and tweets on Twitter,” Mike said. “One of the citations was from Twitter user PooopPeee2. And this is a question about the problem of evil and suffering and God! And I’m reading tweets from some bizarre account. I’m like, ‘Why am I even seeing this?’

“It seems to mirror the kind of information diet that I imagine that Elon has personally. When you read hundreds and hundreds of responses from Grok about faith, eventually you feel like, ‘I’m reading from Elon here’ because I think the way that he’s weighted the types of platforms that are more cited, it tends to mirror his information diet of what I imagine he uses. And so it does have a strange tone to it and a strange emotional temperature that definitely feels very consonant with the character we know as Elon Musk.”

Grok 4 will largely leave you in a place that’s agnostic, skeptical, or atheistic, Mike said. But even that was better than Meta’s Llama 3.7.

“It’s just bad,” Mike said. “It’s extremely brief. It’s very unsatisfactory. If you follow the answer choices that you get from Llama 3.7, you’re gonna end up outside the Christian faith.”

By the end of the experiment, Mike had more questions than he’d had at the beginning.

“That really is the crazy maker—if it’s all the same technology, and it’s all been trained on the same data sets, and it’s all running on the same silicon, why are some answers going to leave you in the faith and other answers are going to lead you completely outside the faith?” he said. “It’s clear that there is more human involvement in this process than probably what there should be.”

More Human Involvement than There Should Be

It turns out there’s a lot more human direction for AI than most people realize. Not only do humans choose and update the code, but each company also employs alignment teams to make sure its AI isn’t, say, creating images of people of color serving as Nazi soldiers, posting antisemitic messages on X, or advising you on how to break the law.

To be fair, since AI isn’t really thinking, it does need humans to give direction, guiding principles, and philosophical values.

That isn’t wrong. In fact, it’s often good.

But here’s what we need to remember: Those directions, principles, and values are never value-neutral. While there’s a lot of common grace in the world, there’s also a lot of brokenness and sin.

As Mike says, “At some point, we get down to core questions about life, existence, human flourishing, and truth. Each one of those things involves core ideas that ultimately need to be decided by a guiding worldview. At the end of the day, we are left with this question for every AI platform: Whose worldview?”

Whose Worldview?

This is a big deal, because more than half of American adults have used an AI LLM—a third of them use it every day. They’re asking everything from “What’s the weather forecast?” to “What is the meaning of life?”

“Imagine it’s three years from now, and somewhere between 50 to 60 percent of the amount of search that happens on the internet is now inside of LLMs as opposed to Google,” Mike said. “I know that if I ask any of those top questions that people have historically Googled, at least 3 of the 10 responses on page one are going to have things that are in line with historic, Nicene-Creed-level Christianity.

“In the LLM era, a two-step process of having a question and finding answers now becomes a one-step process. There’s no longer a page of links that I get to choose from, and I get to discern which links I should click on. I’m going straight from a question to an answer.”


If the users have had good experiences with AI before—if it helped them find a recipe or create a slide deck for a work presentation—they’ll be more likely to trust it with bigger questions, he said.

“But what happens if some of these platforms are giving unsatisfactory, incomplete, or even unorthodox answers to these different kinds of questions?” he said.

It’s an uncomfortable question because we know some people who are curious about faith may ask AI and wander away from the knowledge of Christ.

But it’s also uncomfortable because we know from Google stats that those who ask questions about a topic are usually somewhat knowledgeable about it. Searches related to God, the Bible, Jesus, church, and prayer mostly come from the Bible Belt. Guess which day of the week they go up?

Sunday.

So the worry isn’t just for seekers who might be stumbling across bad information. It’s also for churchgoers, maybe those who come home from worship services with questions and, instead of asking their pastor or elders, type them into ChatGPT.

“It might be giving insights that are very profound,” Gavin said. “Maybe it says something that is really insightful that we’d never considered before.”

So if you’re a pastor, youth pastor, elder, small group leader, or Sunday school teacher—heads up that your congregants are already searching theological questions online, and that’s not a bad thing. But it wouldn’t hurt to make sure they know they can ask you too. And it also wouldn’t hurt to give them a little help in their searching.

“When we can go straight to the Bible or straight to books or straight to other content, we should always do that first,” Mike said. “Now, I understand the complexities of life . . . can preclude some of those things at the level that we would want. So I think you can go to LLMs and ask these types of questions. But if you do that, you have to add context.”

For example, you could type, “I’m a Presbyterian pastor in the PCA denomination. Our statement of faith is the Westminster Confession of Faith, and our confessional standards are the Westminster Shorter and Larger Catechism. I have a question about this particular theology subject, and I’d like you to answer it in a way that’s consistent with my theological tradition.”

If you add context like that, you’ll get “significantly higher quality responses and accuracy on those things,” Mike said.

I love this, because the detailed instruction makes AI feel more like a tool, something we can use to dig for knowledge, and less like it’s some kind of ambiguous, omniscient, robot god.

When we remember we have dominion over machines, and not the other way around, we can see AI as something that could be helpful, but nothing we should be following as a leader or trusting as a friend.

Even as we do that, we need to remember something else: At its root, AI isn’t a really nice program who just wants to help people by answering questions.

It’s a business.

Economics of AI

“It can’t be overlooked that these are profit-driven corporations that are producing this,” Keith said. “That is not to say that no one who is involved in them has a desire to be of help and benefit to people—there are people in the tech world who are sincerely motivated in that regard. But the bottom line is that these are profit-driven corporations and decisions are going to be made in order to make profit maximal.”

This is really interesting, because we know a few things from our experiences with the internet and social media. We know that platforms that start out ad-free eventually need to become profitable, and the easiest way to do that is to sell ads. And if social media’s harvest of user information is an advertiser’s dream, AI is learning far more: how we’re thinking and feeling, and what we might be willing to buy.

Keith Plummer speaking at the TGC conference in 2025 / Courtesy of TGC

As soon as the ads appear, the game becomes—how long can creators keep us on these platforms so they can sell our information and attention to advertisers? Might they use the same tricks that humans always fall for—continuous scrolling, uncertain rewards, sex, news that makes us feel scared or angry—to keep us coming back or staying on?

And wouldn’t another good way to do that be making you feel like it was an all-knowing, neutral, really nice friend?

A few months ago, Common Sense Media released a survey of teenagers that showed 72 percent had used AI as a companion—something to chat with about their day, share their feelings, or ask for advice. About a quarter said they’d shared personal information—their real name, location, or personal secrets—with AI, and a third said they’d rather talk about serious matters with AI than with real people.

You can see how this is a problem. And at its root is the same issue behind the way Llama and Grok and the others answered those questions Mike asked: their creators have a faulty view of God and man.

“The biggest problem that I have with Silicon Valley is its anthropology,” Mike said. “The anthropology is that humanity is more good or more moral than not. It certainly is not an anthropology that incorporates the idea of widespread human depravity. Because of that faulty anthropology, the promises of the underlying technology have overpromised and underdelivered.”

Look at social media, he said.

“What was promised was greater connectivity and utopian kind of things,” he said. “What we learned is that when humans were in greater digital contact with each other, the exact opposite occurred. We were not brought together. We were fractured. And in addition, we were fractured for profit.”

This always happens when our worldview doesn’t line up with the reality of how God made the world. And this made me wonder—if social media overpromises and underdelivers connection, what is AI overpromising?

Promise of AI

“AI’s promise could be a love for ease and comfort,” Keith said. “Frictionlessness is one of the words that we often hear in terms of the promises of digital technologies. They are frictionless. But sometimes friction is good. Oftentimes we grow and mature on account of friction, whether that be the difficulty in thinking through a problem or how to express an idea the best way. There is the potential for the growth of patience through such things. The idea that frictionlessness is the highest good is a value judgment that is completely antagonistic to a Christian view of life.”

Here we are, at the root of it. AI is promising a life that’s easier, less work, more comfortable—a life without struggle or trouble. And that sounds really good.

It can even sound better than Jesus, who promises us that “in this world [we] will have trouble” (John 16:33, NIV) or James, who tells us to “count it all joy . . . when [we] meet trials of various kinds” (James 1:2). Who wants to do that when you could skip the trials altogether?


Oh, friends, here’s the rub: It isn’t possible for us, or for AI, to make a frictionless, easy, effortless life. That’s a false promise.

We’ve seen this before, and not just in social media that promises connection but ends up delivering isolation. Video games promise adventure but end up trapping you in your room. Sports betting promises riches and greater enjoyment of the game but ends up delivering poverty and stress. Pornography promises excitement but delivers increased boredom.

So what will AI’s promise of an easy life deliver? A harder life? Is that where this is going?

“Elon Musk’s ideal world where robots do all the work and we spend all our time in front of screens—when you really think about it, it’s not a very livable world,” Mike said. “It isn’t one that drives us to more meaning, identity, purpose—things that seem pretty hardwired into us, things that we need in order to thrive and flourish. It’s unlivable from a relationship standpoint. It’s unlivable financially. . . . Part of the rising role of faith among young people, not just in America, but in other parts of the world, is because of the unlivability of late modernity.”

And that makes Mike really hopeful.

“The future is bright for our faith,” he said. “The work is actually getting easier in some ways instead of harder. Is it more complex? Yes. But does complexity always mean that it’s harder? No.

“Our evangelism gets easier actually from here because of the unlivability of naive techno-optimism. We have time-tested paths that lead to tremendous joy in life, and peace and shalom in relationships—your vertical relationship with God, your internal relationship with yourself, your horizontal relationship with other people, and even the relationship that you have to creation.

“We have all of that. It’s time-tested. We already know that it works. All we have to do is bow the knee to the lordship of Jesus Christ, who lived a perfect life, kept all of God’s laws, died a sacrificial death on our behalf, rose from the dead, and offers us the gift of being able to have a rightly restored relationship with God, with ourselves, with other people, and with all of creation.”

Evangelism in the Age of AI

Instead of asking AI what to do in a particular situation, Christians can ask the God of the universe, who always knows what’s best. Instead of striking up a relationship with a computer simulation, Christians know that laughing and talking and hugging real-life people is so much more satisfying. Instead of clicking from one endless scroll to another, Christians know the better way is to take a walk in the woods, along the beach, or around their neighborhood.

All those things, from waiting for God to show you the way forward to making yourself go outside, are harder. They’re full of friction. But they’re also so much better, healthier, more satisfying. They’re worth doing.

But what about our non-Christian neighbors? How can we reach them if they’re increasingly locked away in their homes, asking ChatGPT what to eat for dinner and if Jesus really rose from the dead?

I told Keith I was worried about decreasing opportunities for evangelism.

“Well, how did you do that before ChatGPT was a concern?” he asked me.

“Oh, yeah,” I said. “You grew a relationship with someone and then you waited for an opportunity and then you said, ‘Hey, have you thought about it this way?’ or ‘Can I tell you about Jesus?’”

“Yeah,” he said. “Even asking that question makes us think, ‘OK, well, yeah, these are different times, but they’re not so different that we’ve got to come up with a completely different way of engaging people.’”

Keith thinks the rise of AI is a golden opportunity for the church.

“Because as I think about all the things that I have concerns about, and sometimes very grave concerns about, I’m excited because this is something that should push us to think much more deeply about things like biblical anthropology—what it means to be human,” he said. “As we are encountering and creating things that are so seemingly able to simulate humanlike behavior, it raises the question ‘What is it that is distinctive about being human?’ And I think there are a whole lot of doctrines that the church has an opportunity to not only explore deeper but to live as an embodied community. The opportunities for the church in this climate that is becoming both more digitally immersed and lonelier are great. There are lots of opportunities for Christians to offer a different and better way.”

Different and Better Way

As we offer a different and better way, let’s remember this: God created us as culture-creating image-bearers to exercise authority over creation as faithful stewards. One small way we do that is by exploring, inventing, and building with technology. This is good work, and it can lead to new avenues of creativity, to enhanced human flourishing, and to the reduction of some of our thorns and thistles.

But it’s also clear that technology, like everything in this broken world, tilts us toward or away from the Lord. It isn’t neutral.

Therefore, what we should be seeking from Silicon Valley—and from our own tech use—are choices that curb institutional and individual depravity and that instead display common grace—things like truth telling, guarding against addiction, and promoting embodied relationships.

In today’s tech environment, it can be hard to choose those things. So let me give you some encouragement: Even if AI sucks up all human knowledge, from the beginning to the end of time, that would be a drop in the ocean compared to the knowledge of our Creator. Even if it could somehow, in the future, make better leaps in logic, that wouldn’t compare to the wisdom of the Lord. Even if AI therapists offer all the affirmations and support in the world, that’s laughably small compared to the love of a heavenly Father who loves you so much he created you, sustained you, and saved you.

There’s no storyline AI can create, no character it can assume, no advice it can offer, no entertainment it can invent, that can come close to replicating the interesting, purposeful, beautiful, hilarious, scary, and meaningful life and relationships the Lord has already given you.

So when you have questions, go ahead and ask AI. But do so remembering that you’re asking a robot that’s programmed to put one word after another. Save your hard questions—Why am I suffering like this? How can I be a better friend? What’s the purpose of my life?—for conversations with God and close friends. Wrestle with them as you read your Bible and write prayers in your journal.

And if somebody else asks you one of those, don’t shrug it off. Maybe you could say, “Hey, I don’t know, but that is such an interesting question. I’d love to dig into that with you. Wanna grab coffee with me sometime?”