VATICAN CITY (CNS) — Science, technology, religion and the humanities need to work together for their mutual benefit and to safeguard humanity, Pope Francis said.
A broader, more holistic approach is needed to “overcome the tragic division between ‘two cultures’ — the humanistic-literary-theological and the scientific, which leads to mutual impoverishment, and to encourage greater dialogue,” including between the church and the scientific community, he said.
The pope spoke Nov. 18 during an audience with participants of the Pontifical Council for Culture’s plenary assembly. The gathering, held Nov. 15-18, discussed the challenges facing humanity given the rapid advancements in neuroscience, genetics in medicine and artificial intelligence.
Pope Francis praised those working in the field of science for their efforts to help humanity.
Recognizing that the church has not always shown its full appreciation for science, the pope said scientific study and endeavours are rooted in the duty to care for and safeguard creation, which must be driven by love and service, not “control and arrogance.”
While science and technology have deepened human knowledge and understanding about creation, “alone, they are not enough to give all the answers,” he said.
It is increasingly evident that religion, theology, philosophy, literature, the arts and folk wisdom offer “gems of knowledge” that are necessary when dealing with “the mystery of human existence,” he said.
The church, he said, can contribute to this dialogue between science and the humanities by upholding the following principles:
— The centrality of the human person, who is never the means, but the end, and who is in harmony with creation, not as a despot, but a loving guardian.
— The universal destination of earthly goods, which include scientific knowledge and technology.
— Not everything that is technically possible or feasible is ethically acceptable.
Dominican Father Eric Salobir, one of the plenary speakers, said when it comes to responding to the need for an ethics advocate or watchdog in the world of technology and artificial intelligence, “I would say that right now I’m not sure there’s a pilot in the plane.”
Each company acts on its own when it comes to ethical questions, without any real co-ordinated, global effort, said the priest, who is the founder and president of OPTIC, a global network dedicated to the ethics of “disruptive technologies” so they may better serve the common good.
Given its presence all over the world, the church is in an excellent position to initiate dialogue between the humanities and technology and “at least to frame the questions,” he added.
Mustafa Suleyman, co-founder of DeepMind Technologies, agreed that “there’s a vacuum of a strong coherent ethical voice in the world that’s providing appropriate direction with respect to values to a whole series of engineering, scientific and technological industries which are advancing at an incredible rate.”
That’s why he worked to get his company and Google, Amazon, Apple, IBM, Microsoft and Facebook together to create “Partnership on AI” (artificial intelligence) and find ways they could meet with non-profit foundations to discuss and seek guidance on the impact of artificial intelligence on society.
The partnership’s job, he said, “is to convene this very difficult and messy conversation about which values should be implemented in the kinds of systems that we build and how can we figure out how technology companies can collectively adhere to those values and implement them,” particularly when it comes to surveillance data and privacy, and the role of government.
“The danger, which doesn’t come from the technology itself, but from society, is that the human being could step back more and more from his responsibility” to be part of the decision-making process and from maintaining control in how the tools are used, Salobir said.
For example, he said, the ease and increased safety seen with autopilot systems for aircraft are now being offered with self-driving cars.
“But the less you do, the less you are in the loop, and the less you are able to jump in when there is a problem,” he said, a risk that grows as AI is used for decision making or is increasingly relied upon for medical diagnoses and prognoses.
Machine-learning computers can perform better than human beings in many tasks, said Suleyman, whose company’s AlphaGo program beat human professional players at Go, a complex, strategy board game.
But those machines are controllable, and people have to decide how far they will let technology make decisions without human input, he said, citing lethal autonomous weapons systems as an example.
He is part of efforts urging the United Nations to ban the use of such systems and, he said, that debate will help inform other debates, such as those over self-driving cars and machine-led medical diagnoses in health care.
“It’s really important that we assert the primacy of human control in these systems because they are growing in their capabilities and they’re improving extremely quickly,” he said.
Machines can be controlled to serve people and build a fairer, more just society, he said; the world only needs the right forum and procedures to do so.
“Do we have in place the collective capacity to steward the direction” technology is moving in or is the technology determining the values and the speed at which things change, he asked the assembly.