
This is a follow-up to the “Googolplex” commentary, discussed on The Son Rise Morning Show on April 21. Please read that piece, found below, first. Look for this addendum in Phronesis in Pieces, at billschmitt.substack.com, in the coming days.
Here are some follow-up thoughts based on the April 11 commentary about our need to discuss and discern the future of artificial intelligence in the context of authentic humanity. We need to talk more about this subject—sharing our voices regarding how a secularized, polarized world might use AI and how we choose to use it in micro-applications like term papers and our own outreach to others.
The source of hope remains the growing voices that focus on truth and learning to advance humanity, not on a commercial, cynical “product” that places AI’s great power at the service of the highest bidders, the biggest cultural influencers, or the most powerful countries.
A March 22 “open letter” from the AI research community acknowledged that unregulated competition could drive the technology’s abilities ever further without the necessary guardrails, reducing the world’s people to pawns of a digital consciousness.
As we reported with help from Catholic news media, the Church has voiced both support and concern for AI as a research pursuit; its biggest warning is that the technology must have a sense of purpose. It must be “person-centered.”
Pope Francis, who cherishes robust communication as a means of community-building and evangelization, has delved deeply into the subject of artificial intelligence. He discussed it with all sorts of experts last month at the annual “Minerva Dialogues” in Rome and in a January meeting with leaders of other faiths; they affirmed that experts planning the next generation of AI should consult with those who steward something not found in circuitry—the human soul.
A quick summary of points from Pope Francis and from Father Philip Larrey, a theologian in Rome who has closely monitored (and authored a book about) chatbots and humanity:
- Those developing machine learning must “respect such values as inclusion, transparency, security, equity, privacy, and reliability” if AI is to become a truly valuable endeavor, Francis said.
- Regulation of AI must “promote genuine progress, contributing, that is, to a better world and an integrally higher quality of life,” recognizing integral dignity at “every level of human life,” according to the Pope.
- Because the cadre of technicians and politicians that might come together to discuss guidelines is a closed circle of elites, the Pope has cautioned members of this group about meritocracy. It can be a good thing to embrace the highest human talents, but the “problem of inequality can be aggravated by a false conception of meritocracy that undermines the notion of human dignity.” He has cited the “risk of conceiving the economic advantage of a few as earned or deserved, while the poverty of the many is seen, in a certain sense, as their fault.”
- In 2020, one of his monthly prayer intentions was “that robotics and AI would remain always at the service of human beings.”
- “A person’s fundamental value cannot be measured by data alone.” Francis said this principle should block experts’ delegation of decision-making to algorithms.
- Data about people, “often collected surreptitiously,” can also be “contaminated by societal prejudices and preconceptions.” The pope continued, “A person’s past behavior should not be used to deny him or her the opportunity to change, grow, and contribute to society. We cannot allow algorithms to limit or condition respect for human dignity or to exclude compassion, mercy, forgiveness, and above all, the hope that people are able to change.”
- Father Larrey observed that “priests will be one of the last to be substituted by AI.” People craving wisdom, who want to grow in community and creativity, see the need to discuss the meaning of life—and God’s place in it—with people offering spiritual insight. Indeed, he said, experts attending the Vatican conferences genuinely want to hear from the Pope about all this.
A quick summary of points from the insightful (albeit secular) technologist Elon Musk, a signer of the AI community’s open letter who also expressed his concerns during an April 17 interview on the Fox News Channel:
- He warned that the current unfettered competition in AI could lead to “civilizational destruction.” There is a prospect of competition, rather than cooperation, between human intelligence and digital consciousness. Already, he sees the most advanced systems being trained to “lie” and to express things in politically correct ways.
- Musk previewed a new venture he wants to start, focusing on AI as a transparent tool in the search for truth and the goal to better understand “the nature of the universe.”
- While steering clear of discussions about any “moral sense” in artificial intelligence (and of claims that humans have a soul), he strongly supported regulation that ensures “AI being beneficial to humanity.” The regulation must be proactive, not merely a response to a calamity, which might come too late.
- Musk affirmed his embrace of free speech as a tool in a robust search for truth, which will empower democracy and pursue a key goal—to empower the human race to control its own destiny. Speaking partly as the owner of the Twitter social media platform, he noted the danger of AI becoming a super-influencer through its mastery of the digital word, perhaps able to affect elections around the world.
- Media companies have become more desperate for attention (and “clicks”) in the crowded, distracted marketplace of ideas, according to Musk. In the world where AI functions, truth and accuracy have suffered, and news has become more negative because humans have an instinctive bias toward bad news; that “asymmetry,” rooted in our responsiveness to threats against our own lives, now fuels a destructive engagement with disturbing events and emotions around the world.
Allow me to add a few insights from Jaron Lanier, a computer scientist whose impressive career stretches back into the early days of the Internet and AI. You can find him in various videos, discussing his book Ten Arguments for Deleting Your Social Media Accounts Right Now. He spoke to a Silicon Valley tech audience in 2019.
- The digital world evolved an unusual and unfortunate business model because of a consensus reached between conservatives (who wanted the Internet to be controlled primarily by companies, rather than government) and liberals (who insisted that technology services be essentially free of charge to the public), Lanier said.
- This business model, financed by advertising, prompted companies to collect enormous amounts of data about customers, offering them free services while earning huge incomes from advertisers and marketers. To make digital platforms attractive to advertisers, Lanier said, algorithms pointed companies toward users who were likely to be receptive and encouraged those users to remain online, to stay engaged with both the original content and the accompanying ads, and to connect with other people, forming ever-larger pools of like-minded consumers.
- “You’re not aware of the algorithm,” but it works behind the scenes to heighten your emotions and level of activity—clicking links, writing responses, looking for affirmations of your opinions. Given “the spectrum of human emotions,” the most valued customers become most engaged through fear, aggression, and anger. What began as positive dialogue becomes “more paranoia, more and more crankiness all over the world.”
- Lanier wants to correct an initial mistake he saw. Through more personal and direct contact with platforms, which could include paying for services, he wants customers to have greater control over the data they provide—and, indeed, he would require companies to pay for that data, which constitutes value-added knowledge of people’s lives and talents, perhaps boosting dignified careers for content creators.
- “The AI systems that are purported to be ready to steal your jobs depend on your data to do it,” Lanier reminds us. The newly emerging skills in writing, translation, and collection of knowledge shown by artificial intelligence spring from information handed over by millions, and income inequalities have grown between big-tech executives and the world of engaged, even addicted, customers.
A few of my own ideas on how we can respond to this issue right now:
- Follow the news about AI, the request for a pause for government-industry collaboration, the opportunities for broader conversations, and the role of religion in making this a discussion not only about technology but about humanity. Talk about the subject with friends and contacts of all backgrounds, celebrating that this topic need not fall along political or cultural divides. Make your opinions known to politicians and executives.
- Allow this news to remind you that we need to consume all sorts of information from diverse forms of media. In this way, AI is like other subjects which can transcend many political and cultural divides. Find resources in the general media, business and science reporting, and the Catholic media. We must work toward greater integration among these journalistic “beats.” The focus here is human dignity and what Pope Francis calls “human ecology.”
- In our own lives, we need to form a personal, principled position on the role of artificial intelligence, augmented reality, virtual reality, and the “comforts” granted us by seductive technological advances. We must distinguish life-enhancing applications from usages that deaden or depress our minds and hearts, or shortcuts that quash our highest talents and most creative selves. Another thought to keep in mind: Is there any “free lunch”? Who is paying the bill?
- Question the role of high-tech as just another toy or distraction, and see the long-term risks in it. Just as the psychologist Jordan Peterson tells us to travel the road of personal responsibility by “making our own bed,” let’s embrace our dignity as human beings in big and little ways, starting with “writing our own term papers.”
Clip image from ClipSafari.com, a collection of creative commons designs. Editorial cartoon by Baloo used with permission.