AI’s biggest contributions to human wisdom may be the questions it poses, not the answers it generates.
News headlines about developments in artificial intelligence are sparking hopes and fears, alliances and disputes. We sense more keenly the need to anticipate and discern the good or bad outcomes of today’s technological revolution. In this arena and other public concerns, we’ve started to ponder contrasts like “human vs. artificial,” “human vs. non-human”—and what these terms mean.
Humanity needs to develop wiser, more nimble judgment in order to put our best foot forward as we encounter competition such as AI.
Sadly, there are a few obstacles: Our fast-moving and polarized culture instead tempts us to respond unreasonably—with volatile emotions or distracted indifference. Our hubris as a species deters us from seeing our imperfections or knowledge gaps. And we rankle at rigorous decision-making because we don’t like to be judged.
Deciding, for Heaven’s Sake
Those who believe human persons have a unique identity, capable of rising to a relationship with God, will be more inclined toward judicious action. They volunteer for additional roles, responsibilities, and reasoning. If we’re looking for virtue, we’ll see much to learn from religion and the past. The handicap here is that those two resources have become terra incognita for many. We have some catching-up to do.
It’s time to elevate our decisions, individually and collectively, by promoting the revival of a traditional practice called examinations of conscience. These endeavors, called examens in Jesuit spirituality, are largely about asking good questions.
Note that this is not the wheelhouse of AI, which generally is designed to give good answers. Fortunately, renewed diligence in the art of inquiry will help us interrogate AI, as well as ourselves.
We start our task by describing the conscience, an “interior awareness directing a person to do good and avoid evil,” based on the “moral truth written by God in the human heart,” according to the Jesuit-run Magis Center and its own AI chatbot. This well-curated portal into Catholic teachings does give good answers, as promised.
An alert conscience is built up with faith, knowledge, and discipline. Conscience is a gift that’s developed, not downloaded. It needs stewardship, not automatic system updates. The Church cautions that this compass for our moral journeys can’t be trusted unless it is dutifully formed and informed—the work of a lifetime.
Therefore, we’ll benefit from a regimen of examens comprising several steps: quiet reflection, conscious of God’s presence; honest self-assessment, reviewing one’s actions and experiences; contrition, regretting missteps that violated our values; a prayerful request for instruction and guidance; and an active response, committing to future growth based on what we’ve learned.
The nature and nurture of conscience are seldom discussed explicitly in the news media or the public square. Nevertheless, an inner hunger for a meaningful life abides in our hearts just beneath the din of secular society. This craving resonates through time in civilization’s most creative work and authentic communication.
Tracing a Company’s Conscience
Recently, business reporters have covered the turmoil surrounding a particular AI company’s latest products and its public statements of conscientiousness regarding the proper use of those tools.
Anthropic, a $380-billion company known for its cutting-edge species of chatbots called Claude, is at the center of several big stories which may foretell an upward trajectory for examinations of conscience.
The company’s CEO, Dario Amodei, frequently discusses Anthropic’s intention, as a “public benefit corporation,” to secure the benefits of AI while “mitigating its risks.” Some have called Amodei the “conscience” of the AI industry, although he seems to avoid the word.
In one recent set of stories, we learned the company has held summit meetings with leaders of various religions to discuss its aspirations. Executives have “sought advice on how to steer Claude’s moral and spiritual development” to adapt to dilemmas in government, business, and the military, The Washington Post reported on April 11.
A summit meeting in March assembled Catholic and Protestant thought leaders. They confronted Anthropic’s reported experiences with Claude, which “already raise profound philosophical and moral questions,” according to the article. Discussants even broached the question of whether Claude could be considered a “child of God.”
Anthropic has tried “to bake its preferred principles into Claude” by embedding in its models “a 29,000-word ‘constitution’ to steer the chatbot’s behavior and apparent personality,” The Post reported. For example, Claude is told to “never deceive users in ways that could cause real harm.”
An April 25 article in The Atlantic drew much broader conclusions, saying “priests and theologians want to shape the future of AI,” and “Big Tech is listening.” The writer looked back at long-running discussions between Vatican representatives and AI executives; these include the “Minerva Dialogues,” which have been covered in Phronesis in Pieces, including an article on May 8, 2024.
Pope Leo XIV, like his predecessor Pope Francis, teaches extensively about AI. His 2026 message for the World Day of Social Communications (May 17) is a masterly outline of risks to avoid.
For example, he noted that the technology is being used for enormous visual deception, stealing people’s identities to send artificial messages. “The face and voice are sacred,” Leo insisted, because all of us are created in God’s image and likeness.
In a second set of stories about Anthropic, we learned the White House has blocked government departments and contractors from using the Claude software that had originally been sold to the Department of War. The dispute continues in the courts.
Anthropic had argued that it should be able to limit the use of its technology for autonomous weapons or mass surveillance—two areas of application questioned in Anthropic’s corporate principles.
A Pentagon research chief told journalists: “We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our warfighters are getting ineffective weapons.”
Anthropic’s experiences are food for thought regarding today’s state of play—not only in artificial intelligence, but in the exercise of human judgment, and in the crucial need for conscience.
Put on the Armor
We should recall that, three years ago, the Future of Life Institute published an open letter from tech experts urging a pause in the research and development of cutting-edge AI. They cited the danger of “a profound change in the history of life on earth.”
No moratorium resulted from that online letter, which warned of “dramatic economic and political disruptions (especially to democracy).”
In a materialistic and relativistic world, some technologists clearly wished to tap into resources of faith and past experience to help inform and improve their judgments about AI. But they lacked knowledge of those resources.
It’s hard for all of us to hear God’s “still, small voice,” which Elijah obeyed in the Old Testament (1 Kings 19:12).
Humans struggle to translate into action the empowerment which the Holy Spirit bestowed upon Parthians, Medes, Elamites, and numerous groups gathered on the Day of Pentecost in the New Testament (Acts 2:9-11).
Jesus had promised to send the “Spirit of Truth” as an “advocate,” bearing gifts of wisdom, knowledge, understanding, good counsel, fortitude, piety, and fear of God. Alas, we only receive when we’re receptive.
How to Converse about Conscience
The challenge of bolstering one’s conscience, felt in Biblical times, echoes today. Of course, people are free to reject invitations to particular, personal faith. AI experts remain equipped to act effectively on their concerns within a secular frame of mind.
But many of us miss problem-solving opportunities because we lack a common language, as pointed out by a Princeton University professor in a May 8 commentary in The Washington Post.
“It’s increasingly common on college campuses to encounter students who are unfamiliar with the most basic features of Christianity,” wrote Gregory Conti, who describes himself as a non-believer. “Religious illiteracy” dangerously dims our discourse about the humanities, he said. That hobbles our consciousness of humanity itself.
Conti continued, “The future leaders of American society need to be fluent with its major religious traditions and idioms — so that they can understand their believing fellow citizens, and so that they can draw on the full wealth of moral insight that civilization has passed down.”
In this light, at a time when our human judgment must be fine-tuned rather than dumbed down, what strategy can we employ to elevate and illuminate the role of conscience in figuring things out?
The phenomenon of artificial intelligence now looks like a possible catalyst to trigger some of humanity’s best work. AI will not only compete with our assertions of excellence, but deliver shocks that make us ponder who we are, what we can do, and what we need to do.
“There’s a scary side to AI, there’s a benign side,” according to John Lennox, an emeritus math professor at Oxford University who has written extensively on science and ethics. He said in a February interview with the Christian group Missional AI, “People are confused, and on top of that there’s a great deal of hype, some of which resembles science fiction.”
Lennox is now focusing his scholarship on the AI-religion interface: “Worldview questions are coming to the very center of this [topic].” He added, “Increasingly, people in general, and Christians in particular, are going to be faced with the question ‘what is a human being?’ and ‘what does it really mean to be made in the image of God?’”
Shout from the Housetops
Believers and materialists alike must work to raise public awareness of AI as a portal for exploring the human conscience in all its empowerment and distinctiveness. This demands diligence but holds great potential. We should remember that many young adults, including those newly curious about religion, welcome compelling challenges which yield a sense of identity, solidarity, and purpose.
AI is a potential threat to jobs and dignity in its own right, but it is also embedded in other moral quandaries deserving discussion, from war and peace to economic fairness, from bioethics to the futures of our children.
Such issues are meeting grounds for multidisciplinary, moral uses of consciousness, which remains a mysterious trait in humans and might someday be claimed for artificial intelligence. In these arenas of complex discernment where people don’t even agree on basic features of reality, people hunger for order amid the chaos.
New York Times columnist Ross Douthat wrote on May 9 that Silicon Valley denizens “who aren’t sure exactly what they’re building dabble in Buddhist metaphysics or consult with Catholic priests.”
He said it’s possible the takeaway lesson from Anthropic’s “achievement of Claude … is to show us what intelligence might look like in the materialist’s universe—even as our own [human] consciousness indicates that this universe is a much, much stranger place.”
Perhaps the Catholic Church, as it welcomes newcomers discovering a source of uplifting fellowship, can reach out explicitly to illuminate the gifts of consciousness and conscience as relevant, accessible, adventurous, and connective. This can happen in homilies, study groups, relaxed chats, and preparation for the sacrament of Reconciliation, for example.
A bit more creatively, what’s to stop evangelization-minded Christians from “talking up” examens as part of a self-help exercise routine for the mind, heart, and soul? Properly formed and informed, our consciences help to keep us fit for clear expression that contributes to the common good, rather than polarization.
Leaders in faith, politics, communication, and education might find a larger audience if they approach AI as more than a technological, legal, and business story.
The Bible teaches us in 1 Peter 3:15 to readily tell people the reason for our hope. Confident hope facilitates ongoing, detailed communication with each other and invites the Holy Spirit, our advocate and source of gifts for enrichment. We will find that people suffering from “religious illiteracy” will be intrigued and want to learn more.
Here’s one thing we can learn from Anthropic’s tribulations. Amodei chose to insert a lengthy “constitution” for Claude’s moral alignment. This shows meritorious transparency and good will, but the rulebook-first approach may strike people as a submerged, gnostic algorithm that becomes a “policy preference.”
This, along with the notion that an AI product might be a “child of God,” can be perceived as shutting down questions, not making discourse more hopeful.
All You Need is Love
In contrast, the charm and challenge of a conscience arises from its basic roots in God-given, sacrificial, other-centered agape. This love is a one-word constitution written on our souls. Love for others and love for God must be the principal messages—indeed Jesus’s two great commandments—voiced and actively evinced by conscientious people. Love is very detailed, but God is in those details.
A receptive conscience can act nimbly when necessary, but at other times it will crave deep reflection, prayer, and discourse to scrutinize itself as well as others. A person so disposed is a trustworthy fellow pilgrim who helps us keep pace with fast-moving tides but encourages wise navigation.
While we watch the future of AI unfold, it is imperative that our imagination stays healthy. Many types and uses of AI, especially those not aimed toward omniscience (note the difference from conscience), will be worthy of our time, energy, and applause. The key is to avoid AI’s distractions while integrating some of its skills into the awareness we need.
Our examinations of conscience should be bulwarks of human intelligence, with no artificial ingredients. Genuine curiosity, grounded in wonder, will lift us above tides of hyper-reality and “AI slop.”
Who knows? A regular practice of examens might become a recommended security regimen to identify the mind-viruses, social contagions, hallucinations, and hacks threatening to shut us down. Job One is to keep love, which we might call our proprietary operating system, intact and secure.
The Bible (1 Corinthians 13) assures us that love “rejoices in the truth,” and it “always trusts, always hopes, always perseveres.” In contrast to every computer (or human) we’ll ever meet, “love never fails.”
Image by Microsoft Bing’s Co-Pilot designer.
