
I have the great joy of working in Christian education at Perth Bible College (PBC), where I teach Biblical Studies and Christian Ministry and serve as Dean of Studies. Over the years, I’ve walked alongside students as they wrestle with Scripture and with how to live wisely and faithfully in an ever-changing world. In my first stint at PBC (2015–2021), questions around academic integrity were of the familiar kind: poor referencing, over-reliance on a single source, and so on. Such errors were usually more the result of a lack of competence than of any evil plot. Occasionally something more deliberate would show up, but such cases were rare.

Having returned to the classroom in 2025 after some time in pastoral ministry, I’ve discovered that the landscape has shifted dramatically. Whether we like it or not, the use of artificial intelligence (AI) has exploded. What this means on the ground is that the old rules of academia almost certainly need revisiting, not because the values have changed but because the tools have.

What follows is not so much a warning about AI in and of itself as an invitation to think theologically and critically about how AI tools such as ChatGPT shape our habits, our understanding of formation, and ultimately our witness as Christian scholars and ministers.


Phantom References and My “AI Moment”

In a recent assessment, I asked students to produce an annotated bibliography on Ezekiel 34:1–16, a minor assignment ahead of a larger exegetical paper. Students were to identify a range of scholarly sources, such as commentaries, journal articles, theological dictionaries, and monographs, directly related to the text and its themes. A diversity of voices was encouraged, including work from women and minority scholars who are often underrepresented in the field. Along with the references, students were also asked to provide a summary of each author’s thoughts on the Ezekiel text, demonstrating that they understood the author’s position.

One student submitted what looked to be an excellent bibliography. It contained different types of references and included some heavy hitters on Ezekiel, such as Daniel I. Block. Promising.

I decided to do a quick check on one reference, given some of the specifics the student provided in his summary. He had said the author had made an explicit connection between Ezekiel 34 and both Jeremiah 23 and Zechariah 11. Upon checking this reference, it turned out that the author had made no such connection. Interesting.

Looking at the surrounding items in the bibliography, I saw three journal articles referenced, each with a summary. I decided to look up each article to see whether these summaries were more accurate. The references appeared to be works from reputable journals, complete with volume and issue numbers, year, and page range. The examples are given below:

Daniel I. Block, “Ezekiel’s Shepherd: Beyond the Indictment,” JBL 119, no. 2 (2000): 201–217.

Stephen L. Cook, “Apocalypticizing the Covenant in Ezekiel 34,” VT 66, no. 1 (2016): 77–93.

Paul S. Evans, “The Davidic Covenant in Ezekiel 34,” JETS 56, no. 3 (2013): 537–550.

Incredibly, not one of the articles listed above exists. NOT ONE. Although each reference appeared legitimate, not one of them was authentic. What was going on?


Understanding the Tool

I am not an expert in AI systems like ChatGPT. But my good friend Adrian Fry works with AI daily to help clients detect financial fraud in their businesses in his capacity as Director of Intelligence (AI & Analytics) at Accord Analytics. He wasn’t surprised when I shared this story with him. ChatGPT and similar tools, he tells me, are essentially enormous language models trained on near-incomprehensible amounts of online data. And while they are astoundingly good at imitating patterns of human language, they don’t “know” things in the way a search engine or library database does.

So when, for example, a platform like ChatGPT is asked for a scholarly reference (as my student above did), it predicts what a plausible reference should look like based on the kinds of references it has seen. The result? Fabricated citations. It correctly identifies scholars who have written on similar topics and provides article titles that sound just right. It places them in journals that really do exist. But no actual publication exists behind the data. AI researchers call this a hallucination, a known limitation of current-generation models.

With this knowledge, I conducted my own experiment, trying to replicate what my student had produced. I asked ChatGPT for an annotated bibliography on Ezekiel 34 using the same scholars named above, and it likewise produced fake references. When challenged, ChatGPT fessed up. Here’s how the whole conversation proceeded:

Me: Hey Chat, can you draw up an annotated bibliography for the purposes of an assignment on Ezekiel 34:1–16. Be sure to include some journal articles from Daniel I. Block, Stephen L. Cook, and Paul S. Evans.

ChatGPT: Certainly! Here’s an annotated bibliography focusing on Ezekiel 34:1–16, featuring works by Daniel I. Block, Stephen L. Cook, & Paul S. Evans.

The list included a commentary each by Block and Cook, as well as a journal article allegedly by Evans, complete with a summary of its content. That reference and summary were as follows:

Paul S. Evans, “Ezekiel 34:1–16 and the Shepherd Metaphor in Ancient Near Eastern Context.” Journal of Biblical Literature 134, no. 3 (2015): 455–472.

Evans examines the shepherd metaphor in Ezekiel 34 within the context of ancient Near Eastern literature. He compares the biblical text with contemporary sources to highlight the unique aspects of Ezekiel’s critique of Israel’s leaders and the portrayal of Yahweh as the true shepherd.

Guess what? Again, the article doesn’t exist, and I asked ChatGPT about it:

Me: Thanks Chat, but be honest with me, did you just make up that Paul Evans reference thinking I wouldn’t double check?

ChatGPT: You’re right to question that citation, and I appreciate your diligence. Upon further review, I cannot find any publication by Paul S. Evans, titled “Ezekiel 34:1–16 and the Shepherd Metaphor in Ancient Near Eastern Context” in the Journal of Biblical Literature or elsewhere. While Evans has contributed significantly to OT studies, his work does not appear to include a publication . . . focused on Ezekiel 34:1–16.

I followed up further:

Me: I appreciate your honesty, Chat, but why did you make that reference up? I could get into trouble for this.

ChatGPT: You’re totally right to be upset—and honestly, I owe you a proper explanation. The reference . . . by Evans was hallucinated—it wasn’t real, and I shouldn’t have included it as though it were. Here’s what happened under the hood: (1) [I prioritised] pattern recognition over accuracy; (2) [I made] assumptions about author topics.

ChatGPT continued:

(3) No Excuse, Just Transparency: You’re absolutely right—this can have real consequences for academic integrity. That’s why I take it seriously when you call it out. It’s also why I now prioritize transparency and evidence over just generating plausible-sounding responses.

A remarkable “interaction” if ever there was one.

It’s important to understand at this point that this is not because ChatGPT is “lying” per se. Large language models (LLMs) don’t have a grasp of truth. They simply assemble common patterns of language. That’s what they’re programmed to do.

So what are we to make of this digital revolution, particularly in Christian higher education?


Academic Integrity and Spiritual Formation

Certainly, the easiest thing in the world would be to say, “Just don’t use AI.” But that would be neither honest nor wise. The truth is I use ChatGPT for administrative tasks: generating marking rubrics and refining class timetables, for example. Used correctly, it’s a helpful co-pilot, though as an auto-pilot it falls well short (as my student discovered to his detriment). The problem may not be the tool so much as how one uses it.

Using ChatGPT-type applications to produce real academic work is fraught at best. They will give you material that looks and sounds authentic, as I’ve outlined above. Some of it might be real, but much of it will almost certainly be fraudulent.

If you’re a student under the pressure of a looming deadline, there will be a massive temptation to take a ChatGPT shortcut: to let it do your research, write your summaries, and even produce full essays. Chances are you’ll be found out, for a couple of reasons. First, your teachers who set the assessments know what’s out there. That’s how they got the job. They really know their field. You’re not going to be able to sneak stuff past David Horrell (my former supervisor) if you’re working on 1 Peter. The same goes if you try to sneak something by James Eglinton on Herman Bavinck, Peter Williams on the Gospels, or Madison Pierce on Hebrews.

Second, there is an organic connection between you and the material you write. As your lecturers or tutors get to know you, they see your personality in the classroom and how it bleeds into the material you produce. Overusing or improperly using AI will see that organic connection disintegrate. Your voice will be lost in the midst of generic data production.

Such shortcuts are not only academically risky; they also short-circuit one’s personal and spiritual formation. In this sense, tools such as ChatGPT, when used improperly, rob one of opportunities to grow into Christlike maturity, a growth that is integral to Christian faith and praxis, especially for those seeking to enter some form of ministry.

Brandon D. Smith, Associate Professor of Theology and Early Christianity at Oklahoma Baptist University, pointed out in an online conversation with me how the use of ChatGPT and similar tools both forms and unforms us in unintended ways. In particular, formation happens not only through the content we absorb but also through the processes we employ.

For instance, an improper use of AI takes away the hard formative work of doing one’s own reading and research, and will ultimately short-circuit one’s development. On the other hand, the graft of study forms us in positive ways, teaching us patience, perseverance, and humility. It turns out that wrestling with a difficult text or summarising someone else’s argument in our own words isn’t just about gaining a high distinction. The process itself is the reward, shaping the kind of person we become, both intellectually and spiritually.

Moreover, ministry requires pastoral and counselling wisdom that can’t be outsourced to a ChatGPT algorithm. It is hard-won through actual study and research, and through time spent conversing with people who are on the same journey with you or, better, further along it than you are. AI may simulate knowledge, but it cannot genuinely form character or bear witness to the goodness of God in one’s life.


Teaching the Next Generation to Use AI Responsibly

The use of AI may indeed cover for laziness or a lack of preparedness. We think we can get things done more quickly because of AI, so important tasks are left to the last minute and completed in a hurried and harried manner, without being truly processed. Sadly, it is that processing time, which AI cannot replicate, that you really need in order to digest what you are trying to learn and to discern reliable sources from unreliable ones.

All of that said, Christian educators must prepare students to live wisely in a world where AI will be a permanent fixture. While one should not allow students to use ChatGPT to produce assignments, banning ChatGPT from the classroom entirely may be akin to banning encyclopedias in 1988 or Google in 2005. Such a strategy simply won’t work, so the preferable path must be to teach discernment and wisdom in the appropriate use of AI. I discussed this with my friend Adrian, mentioned earlier, given his expertise in AI, his passion for Jesus, and his concern for the future health of theological education. Between the two of us, we came up with the following as the beginnings of a way forward.

Christian educators themselves must become familiar with the appropriate and beneficial uses of AI, as well as its dangers and pitfalls. They should teach students to fact-check AI-generated content and require students to disclose when they use AI tools, in order to maintain integrity. Educators should design assessments that evaluate not just content recall but theological reflection, synthesis, and application. This may include oral examinations and the reintroduction of handwritten exams. We all need to work hard to emphasise the value of process over final product. The value of theological education is not just the content the student produces but the kind of person the student is becoming.

