    The AI Reality Ministry Leaders Can't Ignore

    The MAI Leadership Team

    Beyond capability questions lies a more urgent challenge: Are we prepared to govern systems we may not fully understand?

    The conversation about AI in ministry often focuses on capability—what the technology can do. But as artificial intelligence moves from conversational interfaces to autonomous agents, a more urgent question emerges: Are we culturally and ethically prepared for what's coming?

    At CV Global's Digital Day in London, James Poulter, Head of AI and Innovation at House 337, reframed the AI readiness conversation entirely. His talk, "The Ethics of AI: Could We, Should We, Would We?" moved beyond technical capabilities to address the foundational question facing ministry organizations: not whether we can use AI, but whether we've built the frameworks necessary to use it faithfully.

    "I think the real question is the how and why," Poulter began. "We're past the point of 'what is this thing?' The real question is how do we use it well, and why would we use it, particularly in contexts of faith, scripture, evangelism, and discipleship."

    The Reality of Exponential Growth

    Poulter's presentation confronted a reality many ministry leaders haven't fully grasped: AI is no longer emerging technology—it's already ubiquitous. By Christmas 2025, ChatGPT alone is projected to cross one billion daily users, representing approximately one-seventh of the global population. OpenAI's API currently processes six billion tokens per minute.

    But the current state represents just the beginning. Drawing on the "AI 2027" report by former OpenAI researcher Daniel Kokotajlo and his co-authors, Poulter outlined a trajectory toward what researchers call "AI superintelligence." Currently, approximately 1,000 people worldwide possess the expertise to train the AI models powering these systems. This human limitation creates a ceiling on development speed.

    However, researchers estimate that by March 2027, AI systems may become capable of training themselves—removing that human bottleneck entirely. "When you get to that point, well, then there's no limit on the people that are needed to do that work either," Poulter explained.

    The implications extend beyond digital interfaces. Poulter referenced the Neo robot—a humanoid robot priced at $20,000 a year and scheduled to ship in mid-2026. "When we start putting AI inside of robots, then we run into other kinds of problems. With capability and strength comes amazing opportunity. But also with capability and strength comes huge opportunity for misuse."

    The Theological Stakes

    The most extreme AI scenarios carry distinct theological implications. Researcher Eliezer Yudkowsky has argued that current AI development techniques could lead to catastrophic outcomes—a perspective Poulter addressed directly.

    "As Christians, I just want to point out that we theologically believe that everybody dies—bar one guy," Poulter observed. "So maybe we shouldn't be that worried about that. But none of us are particularly wandering around trying to usher that future in particularly quickly."

    The deeper concern: if superintelligence becomes reality, "we only get one shot at getting it right. Because by definition, if there's a superintelligence in the world, why would you need more than one of them? Something that's all-knowing, all-powerful, everywhere, eternal, capable of all things." The theological parallel to divine attributes is unmistakable—and raises questions about what we're actually building toward.

    C.S. Lewis's reflection on the atomic bomb offers a grounding framework: "If we're going to be destroyed by an atomic bomb, let that bomb find us doing sensible and human things." The same principle applies to AI—proceed faithfully with the work at hand, regardless of uncertainty about outcomes we cannot control.

    AI Is Grown, Not Built

    One of Poulter's most important insights challenges the common refrain that "AI is just a tool." Unlike traditional tools—chairs, spades, mechanical devices—AI systems are not built from components. They're grown through training processes.

    "We give it training and we see what happens, to the extent that we actually don't know how these things are even able to be possible," Poulter explained. "They are grown, and they are grown by humans. And when things are grown by humans—like your children, or if you're a teacher, those that you might school in a class—they are grown and molded by those that create them."

    This distinction matters because growth processes embed values. Poulter displayed Claude's system prompt—the instructions governing how the AI responds to queries. These prompts determine not just technical behavior but interpretive frameworks.

    One example: "When engaging with metaphorical, allegorical, or symbolic interpretations, such as religious texts, Claude acknowledges their non-literal nature while still being able to discuss them critically."

    "These models are being used by a seventh of the planet every day," Poulter emphasized. "They might kill us, and they seem to be determining what is truthful or non-literal."

    The Trust Transfer Problem

    Perhaps Poulter's most practical warning involved "trust transfer"—the phenomenon where users accumulate trust in AI through benign interactions, then transfer that trust to high-stakes domains without verification.

    "You talk to ChatGPT about recipes, or you might have asked it questions about directions or planning a holiday. That's good, because you talk to a thing that confidently gives you an answer that turns out to be okay, and you're like, 'Wow, this is great.' And what happens then? Well, you accumulate trust with it."

    The danger: users then apply that accumulated trust to consequential decisions—medical diagnoses, theological interpretation, pastoral counsel—without understanding the system's actual reliability in those domains.

    "We create professions—whether it's lawyers or doctors or theologians—to give us trusted answers, and we as a community come around them and verify them through academia, through qualifications, through social proof and long years of trust developed in community. But what we're doing at the moment with many of these AIs is that we take the trust that we accumulate in the benign and we pick it up and deploy it in highly consequential areas."

    Harvard's 2024-2025 time-use study reveals patterns ministry leaders cannot ignore. The top personal AI use? Therapy and companionship. The top professional use? Personal and professional support.

    "This stuff is already abundantly present with us, already being used at scale, and already potentially having some scary consequences," Poulter noted. "But it's certainly coming to help if we let it be helpful. And the choice for us is: will you let it be helpful?"

    The question isn't whether AI will shape ministry—it already is. The question is whether we're building the cultural and ethical foundations to ensure it shapes ministry well.



    Missional AI Summit 2026 | Silicon Valley SV26

    MAI 2026 isn't just a conference—it's a movement of Christians engaging AI as builders and leaders, not merely respondents. A movement believing technology can serve human flourishing when guided by Kingdom principles. A movement declaring: Redemptive AI isn't just possible—it's part of our calling.

    Early bird registrations are now open. We'll soon share speaker lineup details, specific track descriptions, and year-round community engagement opportunities.

    We hope you'll join us in Silicon Valley.


    Watch Missional AI Podcast


    The Ethics of AI: Could We, Should We, Would We?

    In a world where artificial intelligence is rapidly evolving, the ethical implications of its use are more important than ever. Join us in this thought-provoking episode of the Global Missional AI Podcast as we explore the intricate relationship between faith, technology, and the future of AI.

    This episode presses into some of the most urgent ethical questions facing Christians in an age of accelerating AI:

    - How do we responsibly harness AI’s potential in ways that are deeply aligned with our Kingdom values?

    - What does it mean for ministry when AI models are not just built as tools but grown and formed through training—and by whom?

    - How can we ensure that AI serves the Kingdom of God in ways that honor Scripture, reflect historic Christian ethics, and support faithful discipleship?

