Artificial Intelligence: Theological Perspectives (from Regis Friends Quarterly, 3.3)
Cynthia Cameron is worried about human relationships. Ryan Khurana worries about the truth. Pope Francis has concerns about human dignity. For all three, it’s the software-driven revolution called artificial intelligence that fuels their questions and apprehension.
“Human dignity must never be violated for the sake of efficiency,” the Pope said in a message dated Jan. 14 but made public at the 2025 World Economic Forum in Davos, Switzerland, on Jan. 23. (Source: Antiqua et Nova)
“Everyone has noticed that AI doesn’t care about the truth,” Khurana said at a Jan. 21 Regis College panel discussion on theology and artificial intelligence.
How we relate to something that isn’t human but seems intelligent raises serious questions, Regis-St. Michael’s professor of religious education Cynthia Cameron told this reporter as the Jan. 21 event broke for lunch.
“We’re at the question-generating phase of this,” she said. “There are no answers to these questions yet.”
It turns out that at least one of the people who make and sell artificial intelligence systems shares the concerns of Cameron, Khurana and the Pope. Sheldon Fernandez, founder and CEO of DarwinAI, is asking the same questions.
“Do we have machines that are capable of this quintessentially human capability (consciousness)?” he asked. “Moreover, can we distinguish between something that is conscious and something that simulates consciousness?”
A Regis graduate, Fernandez went on to a stint of aid and humanitarian work in Kenya after that country’s 2007 election crisis and then trained at the Montreal Institute for Genocide and Human Rights Studies at Concordia University. He is the right sort of coding-genius tech bro to ask these questions. His DarwinAI designs and sells AI systems for autonomous vehicles, consumer electronics and the aerospace industry.
At a panel with University of Toronto engineering professor Michael Gruninger, who specializes in the ontology of AI systems, and Université Laval Catholic bioethicist Cory Labrecque, who has written about AI and palliative care, Fernandez was open about his own AI anxieties.
“The line between consciousness and its imitation is becoming increasingly blurred,” he told the audience.
Other members of the panel chimed in.
“We need a structured way to approach the complex questions AI raises,” said Gruninger.
“More research into AI ethics is urgently needed,” said Labrecque.
The problem, the Québécois ethicist said, is that our ethical debates move far more slowly than the technology does. A week after the panel discussion in Toronto, Chinese start-up DeepSeek rocked financial markets by revealing a new, cheaper AI capability.
Cheap or expensive, all AI systems rest upon a mountain of data about our lives and our environment. This “datafication” of our lives is going to force us to think about what it really means to make ethical, human choices, Labrecque said.
We can choose to have AI avatars made of our deceased relatives and then carry on daily conversations with our dead spouses, parents or children. That will force us to ask, “What exactly is grieving?” Labrecque said.
The ethical questions AI poses aren’t just about intimate choices we make for ourselves. There are also questions of justice, Fernandez pointed out.
“Technology is in the hands of very few wealthy and powerful people,” he said.
Digital relationships “do not demand the slow, gradual cultivation of friendship, stable interaction, or the building of consensus that matures over time,” Labrecque pointed out. Substituting AI-driven digital relationships for slow, awkward and imperfect human relationships will have a cost to our experience of being human, he said.
Gruninger pointed out that a subtle, mature understanding of truth is different from a machine-generated measure of accuracy that depends entirely on a defined pool of data. Now that AI programs are writing and refining their own programming based on large language data pools (essentially, the Internet), it can be impossible to know how new programs determine what information matters, or what is relevant to the questions they are asked.
“It is essentially a black box without any kind of explainability,” Gruninger said. “It’s very difficult to try to extract the bias, because the way that this learning software is developed is not by requirements and design, etc.”
“We have faith, you know. Truth is a person. It’s an objective reality outside of us,” said Khurana, who works as AI lead for Maple Leaf Sports & Entertainment and also consults for MagisteriumAI (magisterium.com). “If we can somehow encourage or direct the research to be more truth-oriented and then build better systems out of that…”
Regis College president and panel moderator Fr. Gordon Rixon promised the conversations and collaborations about AI would continue at Regis.