
Generative AI (Gen AI) is a relatively new phenomenon, but already people are struggling to recall how they lived and worked without it. When it comes to churches, some are letting ministry leaders and teams simply use it as they see fit; others are considering how ministry teams should and shouldn’t use it. But I wonder: how many are asking whether the church should be using it at all?

I can see why corporations, in a race to realise productivity gains and cost savings before their competitors do, have embraced Gen AI as if their livelihoods depend on it; but the survival of your church, my church, the Church, isn’t dependent on profit and productivity. We don’t have to race to adopt AI; we don’t have to incentivise our people to seize every opportunity for increased efficiency; we have a choice. And the more I learn about how it’s being designed, developed, and used, the more doubts I have that the benefits outweigh the harms.

 

What We’re Already Seeing

It’s already becoming clear that habitual use of Gen AI can diminish a person’s capacity to think, learn, and create. ChatGPT and the like are designed to foster passive dependency; users become accustomed to having ready, rapid, obliging “help” on tap. We know it has a tendency to present fiction as fact, but many seem strangely willing to consider this a minor flaw, not a deal-breaker. We also know artificial assistants can make us less relational—less inclined to turn to a person for help, more inclined to turn to a machine, and more inclined to confuse the two. If you say please and thank you when you enter prompts, you’re already taking a step in that direction. You’re treating a lifeless tool as you would a human. You’re calling a machine “you”, not “it”; you’re showing it the courtesy you’d show a friend. But unlike a human, it doesn’t care about you or how you treat it.[1]

In April this year, a Harvard Business Review article listed the top 10 uses of generative AI, and “therapy/companionship” was number one. I’m not sure how robust the study was, but the fact that therapy and companionship even made the list is striking. People might not start using Gen AI with the intention of confiding in it or asking for advice, let alone befriending it, but they seem strangely inclined to.

Before journalist Jeremy Ettinghausen trawled through the 12,000 questions three uni students asked ChatGPT over an eighteen-month period, he thought their chat logs would contain “a lot of academic research and bits and pieces of more random searches and queries”. He didn’t expect the sheer number or variety of prompts and responses, which covered everything from writing academic essays to seeking mental health advice.

There’s nothing the boys won’t hand over to ChatGPT.

There is no question too big (“What does it mean to be human?”) or too small (“How long does dry-cleaning take?”) to be posed to the fount of knowledge that they familiarly refer to as “Chat” …

[These boys] are not friendless loners, typing into the void with only an algorithm to keep them company. They are funny, intelligent and popular young men, with girlfriends, hobbies and active social lives. But they—along with a fast-growing number of students and non-students alike—are increasingly turning to computers to answer the questions that they would once have asked another person.

 

What I’m Already Imagining

It doesn’t take much imagination to see how this could happen in a ministry context or how quickly it could change church culture. First, you only ask ChatGPT to spit out a Bible study when the person who usually does it is sick. Then, when that person is unable to continue in their role, and no one else puts up their hand for it, you figure it’s easier to do it yourself—or rather, to get AI to do it for you. Yes, you could ask around or train a member of the congregation who’s shown potential and expressed willingness. But unlike “Chat”, they might say no. And you will still ask them, when you’re less busy—or so you tell yourself.

Then comes the day when, after spending what feels like hours (but is probably only minutes) trying to write a difficult pastoral email to a member of your church, you figure it won’t hurt to see what Gen AI would say. You might find the first pass is surprisingly good. A little clinical, perhaps, but once you prompt it with “more heartfelt”, it sounds so sensitive and loving that all you need to do is type your name. The recipient would be devastated if they knew, but you don’t feel obliged to tell them. You’re not really being deceptive—you did write the prompt yourself, and they are words you’d (eventually) have arrived at on your own—or so you tell yourself.

It’s not such a leap from this to turning to AI for advice as well, especially if it means you don’t have to be vulnerable with another person, to trust them, or be accountable to them. It’s not such a leap to then use it to write a sermon. But even if you don’t take the leap, haven’t you already gone too far?

“The God of the Bible cares not only about what we human beings do but also about who we’re becoming as we do it,” writes Christian academic Chris Watkin. The problem in these scenarios isn’t only what you are doing, but what you’re not doing—and why. In the Bible study scenario, you could be training another member of the body—and encouraging them, and being encouraged. Or you could be spending more time in God’s word yourself, or you could be choosing a study written by someone who has done the work and, in buying it, paying them for that work.

In the pastoral email scenario, you could be spending more time thinking and praying about what to say, then saying it in your own words and in your own voice. That might involve more typing and thinking and reconsidering and deleting and rewording. But in doing that work you will deepen your understanding of what you think, which will better prepare you for the conversations that are likely to ensue. You could also, without naming names, ask a wise and trusted friend for help with the pastoral matter. They might give you advice you don’t like, but because they’re human you won’t be able to just tweak your prompt to make them say what you want to hear. And because you know they’re wise, you won’t be able to simply dismiss their advice; you’ll have to wrestle with it. This too will do you good.

Resolving not to use AI in church—and personal—communications will protect our relationships. If we add “Written with AI’s help” when we don’t write an email on our own, we ensure transparency—but we might also send a message that we don’t consider the recipients worth our time. There’s a risk people will switch off if they see a disclaimer. But if we use AI without being transparent, we risk deceiving people, losing not only their attention but their trust.

In the podcast Shell Game, tech journalist Evan Ratliff makes a digital copy of himself and places it in conversation with unwitting humans. A duped friend later reflects that ever since he mistook his friend’s clone for his friend, he’s been 10% uncertain, when they’re talking on the phone, that he’s really talking to his friend. It turns out, he says, that 10% is quite a lot.

 

What’s Already Happening

Jessie Epstein, writing for Inkwell, says that “in bypassing the time, effort, and experience [ChatGPT] takes to arrive at what you personally think about a given topic (and therefore how you would say it), it not so slowly erodes character, integrity, and original thought”. She goes on:

When we outsource thought, it becomes a quick jump to outsourcing (and therefore abandoning) integrity. When we do not know what we think, we do not know how to act. Moral conviction is forged through knowing what you believe, and then behaving as best as you can in line with that morality…

Souls do not move at the speed of machines; they were never meant to. I don’t want to know what a machine claims to think. I want to know what I think, what you think, what the woman pacing through the library thinks… To arrive at a thought individually rendered and expressed takes quite a bit of time. Even if you are arriving at a conclusion someone else arrived at ages ago (which of course they did), you are arriving at it for yourself.

It’s early days, but research is already showing there are negative neural and behavioural consequences of using ChatGPT. In the words of computer science professor Ioan Roxin, AI use carries “the risk of overall cognitive atrophy and loss of brain plasticity”.

The problem is less the tool itself than how it has been and is being developed and used. The companies and individuals behind the technology tend to value money, power, efficiency, and speed—not truth, justice, and human flourishing. They seem willing to steal the fruit of people’s intellectual and creative labour at scale, and to roll out technology that damages our relationships and our capacity for thinking, learning, and creativity—not to mention the environment and people’s livelihoods—without a second thought.

As Christians, we know we shouldn’t conform to the pattern of this world (Rom 12:2). AI is being integrated into many of the platforms we already use for “free”, because tech companies want to normalise its use and increase our dependence on it. It takes effort to think about when to adopt a new technology; sometimes avoidance isn’t a realistic option. But because this technology can change the way we think about ourselves, each other, and the world—including our capacity to think well—and because it can change the way we relate to each other and function as members of one body, generative AI warrants particular caution. Already, people are befriending, even “falling in love” with it. If you haven’t heard alarm bells yet, please hear them now.

The mentality and theology of the people who run big tech companies are “openly religious”, says author and environmental activist Paul Kingsnorth on Honestly with Bari Weiss. He notes what Google’s head of engineering (who famously thinks we’ll merge with machines in about twenty years) said when asked if he thought God existed: “Not yet.” In a podcast conversation with Kingsnorth, religious artist and YouTuber Jonathan Pageau observes that after we create a tool, we become dependent on it. Think of the wheel, the printing press, electricity, calculators, cars. “Is that good? Is that bad?” Pageau asks.

It’s just reality. It’s just how things function. But [AI is] a technology which has to do with mind, and a technology which has to do with attention… with the way humans perceive meaning and engage with meaning.

“The question for me,” Paul Kingsnorth says to Bari Weiss:

is how can you live through it, and where do you draw your line? And everybody’s going to have a different answer to that. At what point are you going to say I’m not going there?

 

Where Will We Draw the Line?

Even using AI for something as trivial as figuring out how best to use the physical space of a new church venue—a real-life scenario that didn’t ring alarm bells for me—could be a missed opportunity to draw on the artistic talents of a member of your congregation: to be blessed by their gifts, and to make them feel like the blessing that they are.

In her Inkwell article, Jessie Epstein categorises a devotional that her best friend generated using ChatGPT as a benefit of AI. She says it was so theologically rich that it moved her friend’s worship team to tears. Rich theology is a good reason for being moved, but what does a leader who outsources the writing of a devotional communicate and model to their team about the value of that task? A person could see it as a sacred task, worthy of their time, and do it faithfully. They could benefit from being asked, and saying yes, and taking the time to do it well. AI can perform the task, but it can’t be faithful, or do it faithfully.

Sydney pastor Josh Maule has written an unsettling short story that imagines a world where Christians have “personalised spiritual assistants” and robots help run worship time at conferences. I can imagine, in that world, denominations splitting because some leaders want not only to use AI, but to welcome AI agents as church members. Others, at the risk of being labelled small-minded bigots, might argue for their exclusion. Those wanting to welcome agents might claim that, though they are made by humans, they are made in God’s image and should be members of the body. It feels like madness to even flag this possibility, but think of the shifts we’ve already seen in such a short time, of the madness that’s already come to pass.

 

Using Values to Draw Lines

In general, and on a case-by-case basis, God’s word can and should guide the way we use AI, and the way we don’t—the lines we draw, and redraw—as individuals, and collectively. We should draw those lines not in panic, not fearfully or legalistically, but calmly, with a degree of flexibility. We shouldn’t care more about not using AI in church than we care about people.

Given the design and nature of this tool, using it well takes considerable effort—constraining its use, checking its output, maintaining transparency, avoiding dependence. Not using it at all might therefore prove easier, even more efficient, than using it ethically.

If a church makes not using AI in ministry the general rule, a valid exception might be using it to offer non-English speakers access to a live transcript of a sermon in their heart language—but that doesn’t mean we should just invite them to scan a QR code to gain access and leave them to it. We should at the very least warn them it might not be accurate. Better still, we could make sure a person is at their side too, ready and willing to answer any questions they might have. People who care about people can overcome language barriers in ways machines can’t.

The lines we draw should be a means to a loving end, not an end in themselves: a means of continuing to do work that takes time, that is sometimes difficult, and sometimes inefficient, but always worthwhile, without exposure to needless risk or rendering members of the body redundant. It’s not about being against AI; it’s about being for people—for flourishing, for wisdom and knowledge and understanding; for truth, for creativity and creation, for right relationships, for love. It’s also not about imposing lines on individuals in their everyday lives; leaders can leave members to act according to their context and their conscience. It’s about ensuring that when we’re acting as a body, we’re intentional about acting wisely, staying on the same page.

 

Some Values to Guide Us

The Bible is full of values that will help us make good and godly decisions. What follows is a selection I offer as a possible starting point.

Wisdom, knowledge, and understanding: AI offers shortcuts at speed, but misinformation is a huge risk, as is the temptation to trust and share information we haven’t taken the time to properly understand ourselves. The Bible values the gaining of wisdom and understanding; it values work and effort, attention, reflection, humility, and accepting, not trying to transcend, God-given limits.

Truth: The Bible values truth. ChatGPT doesn’t always tell us when it doesn’t “know” an answer. It can present information that’s false as true. We need to make sure we’re not misled and that we don’t mislead others, that we check sources and facts and understand context, that we don’t pretend we haven’t used it if we have. We could even make a habit of always disclosing AI use, no matter how trivial, to model full transparency. That way people can safely assume that when it’s not disclosed, it hasn’t been used, which maintains trust.

Creativity and creation: Part of what it means to be made in God’s image is to be creative. We don’t have to start with a blank page, but we might benefit from doing so; or we might benefit from brainstorming or creating with others. Our unique voices, abilities and gifts are valuable and worth cultivating. Using them seems preferable to using Gen AI that’s been trained using the fruit of other people’s creative labour without their consent. We’re also called to value and steward God’s creation, which speaks of his glory. This means we should care about the environmental impact of AI too.

Relationships: The Bible values relationships. It values different people bringing different skills and gifts to the table. We’re warned against relating to “dumb idols” as if they’re anything more than the material they’re made from; we’re to treat created things as created things, humans as humans, God as God. If we don’t, our relationships become disordered and our thinking becomes distorted. Even if those around us are happy to relate to machines as they would to people, we shouldn’t be.

Love: The Bible especially values love. As machines become more like us, let’s not become more like them. Let’s be driven not by pleasure or efficiency or productivity, but by love. Let’s be shaped not by profit-driven companies and technologies designed to addict and deceive, but by God and his people and his word.


[1] There is, of course, something problematic about interacting with Gen AI tools in a violent or rude manner. But the options are not simply to be polite to AI or to be rude to it. A third option is to compare it to, and treat it as, a tool, not a person. The way I treat an appliance doesn’t flow over to the way I treat people, because I don’t think of appliances as having agency or feelings. I don’t say please or thank you to my car; that doesn’t mean I treat it as a slave. I use it for the purpose it serves; I treat it as the tool it is.
