The defining challenge of our age demands more than silence from communities of faith.
In September 2024, OpenAI CEO Sam Altman published a personal essay titled "The Intelligence Age," declaring that artificial general intelligence — AI that can match or surpass human cognitive ability across virtually every domain — was no longer a distant aspiration but an approaching reality. Around the same time, Google DeepMind CEO Demis Hassabis told Time magazine that AGI could arrive within the next decade, possibly sooner. Dario Amodei, CEO of Anthropic, went further, suggesting in a widely read essay that we could be living in a post-AGI world by 2026 or 2027.
This is no longer fringe futurism. These are the people building the systems.
In a recent episode of the Global Missional AI Podcast, Dr. Richard Susskind — one of the world's foremost scholars on the intersection of AI, law, and society — put it with characteristic precision: "Saving humanity with and from AI is the defining challenge of our age. There's good and there's bad here. We just need to get the balance right."
For people of faith, that framing should land like a thunderclap.
Before ChatGPT launched in late 2022, leading AI researchers broadly estimated that AGI was 20 to 40 years away. That consensus has shattered. The 2024 AI Index Report from Stanford documented a staggering pace of capability gains across reasoning, coding, scientific research, and multimodal tasks — progress that has consistently outrun expert predictions.
Dr. Susskind describes this shift not as a prediction but as a planning imperative: "I'm less predicting AGI and more warning — asking that we plan for AGI, because it seems to me that technologists who understand these systems in detail regard this as such a serious possibility that we should too."
The distinction matters. You don't need to be certain the flood is coming to build higher levees. And when the engineers who designed the dam are quietly reinforcing the walls, you pay attention.
What makes this moment particularly acute is that the consequences are not evenly distributed. Communities in the Global South, where Biblica and many mission organizations work, face both extraordinary opportunity — AI-translated Scriptures, health diagnostics, agricultural tools — and extraordinary vulnerability, from algorithmic exclusion to disinformation at scale. The UN's Governing AI for Humanity report warned specifically that AI's benefits risk becoming concentrated in already-wealthy nations unless governance frameworks are urgently designed with equity in mind.
One of the most sobering threads in Susskind's thinking is what AI researchers call the alignment problem — the challenge of ensuring that increasingly powerful AI systems actually do what we want them to do, in the ways we want them to do it.
"I think it's naive to suppose that we can actually control systems that themselves are able to outperform us," he says. "People say things like just turn them off. But it is clear that these systems will have fail safes in place."
This isn't science fiction anxiety. The Machine Intelligence Research Institute and Oxford's Future of Humanity Institute have spent years formalizing why controlling a genuinely superintelligent system is a problem that may not yield to engineering alone. Even Anthropic's core research agenda is built around the premise that AI systems can behave in ways their designers didn't intend — and that solving this is urgent, not theoretical.
For faith communities, this should resonate at a deep level. We have a long tradition of recognizing that the tools humans create are not morally neutral — that power without wisdom is dangerous, and that the question of who controls transformative technologies is a profoundly ethical one. The control problem is, at its root, a question about human dignity, accountability, and the limits of our own authority.
Susskind frames the path forward with an analogy that deserves to be heard in every church boardroom and seminary classroom: AGI governance, he argues, is not unlike nuclear test ban treaty negotiation.
"The challenge is one of international diplomacy — where we must see collectively an existential threat and set global standards, and under international law bind countries to them."
Progress is being made. The Bletchley Declaration, signed in November 2023 by 28 countries and the European Union, including the US and China, acknowledged shared responsibility for AI safety. The EU AI Act, which came into force in 2024, represents the most comprehensive legal framework for AI risk management yet attempted. The OECD's AI Principles provide a foundation for international alignment.
But Susskind's concern — and it's one worth sharing — is that these efforts, while meaningful, are not moving with the urgency the moment demands. And the gap between leading AI nations, particularly the US and China, remains wide. An AI arms race that prioritizes capability over safety is not a speculative risk. It is the current trajectory.
It would be easy to read all of this and feel paralyzed. Susskind, to his credit, refuses to leave it there. When asked what ordinary people can do, his answer was refreshingly concrete.
First: immerse yourself in the technology. "Everyone's got a very strong view on the technology — not always so well-informed." Use AI tools. Understand what they actually do. Bring firsthand experience into your conversations rather than opinion formed at a distance. Introductory resources like Google's AI Essentials course offer an accessible starting point.
Second: engage your political representatives. Democracy requires an informed citizenry, and AI policy is not just a technical matter — it is a moral one. The question of what tasks we reserve for humans, what decisions we refuse to delegate to machines, what populations we protect from algorithmic harm — these are questions that belong in the public square.
Third — and this is where the church has something distinctive to offer — help define the moral limits. Susskind speaks of identifying tasks that are intrinsically human, whose significance lies not in their outcomes but in the human participation they require. Pastoral care. Confession. Communal prayer. The laying on of hands. Communities of faith have been thinking about the irreducible dignity of human presence for millennia. That wisdom is not peripheral to the AGI conversation. It may be central to it.
Dr. Susskind ends with a simple challenge: ask the question What if AGI? — in schools, in businesses, at dinner tables, in churches. Not as a thought experiment, but as preparation.
AGI's arrival within the span of a single church strategic plan is no longer implausible. The question is not whether the technology is coming. The question is whether communities of moral conviction will be at the table when the decisions are made.
Saving humanity with and from AI is not a task for technologists alone.
MAI 2026 isn't just a conference—it's a movement of Christians engaging AI as builders and leaders, not merely respondents. A movement believing technology can serve human flourishing when guided by Kingdom principles. A movement declaring: Redemptive AI isn't just possible—it's part of our calling.
Discounted tickets are available through March 10. Now is the time to be part of the conversation. Join us in Silicon Valley.