"Ex Machina, Ex Cathedra: The Catholic Church's Role in Shaping the Future of AI"
I want to continue with my exploration of the dangers faced by the Church brought about by the rapid technological advances in our society.
Arthur C. Clarke, the author of “2001: A Space Odyssey,” once wrote, “It may be that our role on this planet is not to worship God, but to create him.” To me, that encapsulates the danger the Catholic Church faces in today’s society.
Today, in reflecting on the technological dangers facing the Church, I want to discuss artificial intelligence. I want to focus on the current state and future prospects of large language models (LLMs), artificial general intelligence (AGI), and the societal implications of these technologies. I will spend more than a few podcasts discussing my thoughts on the Catholic Church and technology.
As I have said before, I am an optimist with regard to AI going forward. Yes, there will be a lot of good, bad, and ugly that we will experience. To me, the good will eventually far outweigh the bad and the ugly. Any technology that begins working at scale in our society has its detractors and doomsayers. I believe we will get past it as AI is integrated into our Church and society much the same way that computers, networking, the Internet, the web, and mobile devices have been. The evolution of these technologies provides an existing framework within which to set expectations for the evolution of technologies like AI.
The current explosion of AI capabilities has shown that these systems can begin to simulate human behaviors and processes. I call this the human simulacrum. But we must be clear: they simulate humans; they are not the same as humans. I have expressed skepticism about the ability of current autoregressive LLMs, such as GPT-4, Gemini, and the upcoming Llama 2 and 3 AIs from Meta (Facebook), to lead to actual superhuman intelligence. Claims that AGI is near are just marketing hype. While there are many models of human intelligence, I tend to look at human behavior in four categories, and current models lack four essential components of intelligent behavior: understanding of the physical world, persistent memory, reasoning, and planning. These models are built using text from the Internet, and text is limited in its ability to capture the richness and complexity of the world. We create our own world models within our brains. A world model is an internal representation of our environment that we use to simulate future events within that environment. We begin building our own world model at birth using our five senses. AIs use only text or language to build theirs.
The amount of information a young child receives through all sensory input is approximately 10^15 bytes. By comparison, the text data used to train LLMs amounts to around 2 x 10^13 bytes. AIs cannot use video, audio, smell, taste, and touch as inputs, so they are severely restricted in the kind of world model they can create in the computer. Until AIs can build a world model commensurate with ours, we can expect no more than a Human Simulacrum.
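The gap between those two estimates can be made concrete with a quick back-of-the-envelope calculation. The 10^15 and 2 x 10^13 byte figures are the rough estimates cited above; everything else here is simple arithmetic, not a claim about any particular model.

```python
# Back-of-the-envelope comparison: a child's estimated sensory input
# versus the text data used to train an LLM. Both figures are the
# rough estimates from the essay, not measured quantities.
sensory_bytes = 1e15       # estimated sensory input in early childhood
llm_text_bytes = 2e13      # estimated LLM training text

ratio = sensory_bytes / llm_text_bytes
print(f"A child takes in roughly {ratio:.0f}x more raw data than an LLM sees as text.")
```

Even granting wide error bars on both estimates, the point stands: the sensory channel is orders of magnitude richer than the text channel.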
There are significant efforts underway to close this input gap between humans and machines, and those efforts may yield approaches that lead to more advanced machine intelligence. I disagree with the idea that AGI will emerge as a sudden event, as proposed by many, especially those who believe in the Singularity, where AGI will suddenly emerge by 2030 and manifest transhumanism. Instead, I believe that progress toward the Human Simulacrum will be gradual and iterative.
My optimism prevents me from being too concerned about the potential dangers of AI because I believe the development of AGI will be a gradual process. Every technology in the past matured via a highly iterative process. We did not arrive overnight at jet engines so reliable that two of them can safely power a Boeing 787 Dreamliner on an 8,000-mile flight; it took decades of development and refinement, with a lot of good, bad, and ugly along the way. We should expect the same for AI: a gradual and iterative process that allows for the implementation of safeguards and ways to counter rogue AI systems. I am also of the opinion, as are many others, that the idea so freely discussed in public that AI systems will inherently seek to dominate or eliminate humans is flawed. The desire for dominance is not an inherent property of intelligence but rather a characteristic of social species. AI will do bad things because people use it to do bad things; it will not be bad out of the box.
To me, the potential impact of AI will be similar to the impact of the printing press. AI will ultimately make humanity smarter and more capable, similar to the impact that the printing press had in distributing the Bible to societies and transforming the Catholic intellectual tradition, enabling the Enlightenment, rationalism, and democracy.
I believe our current industrial revolution started with the development of the transistor and then the integrated circuit. Computers, software, PCs, networking, the Internet, telecommunications, wireless, mobile, and now AI followed over the ensuing decades. Our world is a far better place now, with billions no longer in poverty and life expectancies doubled in a short time. The reaction during past industrial revolutions was that electricity was going to kill everyone at some point, or that the train was going to be a horrible thing because it was settled science that no one could breathe while moving at 60 miles an hour or faster. The detractors and doomsayers are as wrong today as the settled science was 150 years ago.
AIs still struggle with the simplest things and excel at the most complicated things. We taught our children to wash clothes, clean the dinner table, load the dishwasher, clean their rooms, and put their toys away from an early age. My son was not the only member of his 10U travel baseball team who could wash his own uniform. AIs and robots (AIs with actuators) excel at chess and Go but cannot load a dishwasher or fold clothes. This is Moravec's paradox: reasoning requires very little computation, but combining motor skills and perception requires enormous computational resources. These are really hard engineering problems. Chess is not.
Another reason I am less concerned about the danger of LLMs is how they are trained: words are removed from a text, and a neural network is taught to predict the missing words. This process, called autoregressive prediction, results in a system that generates text by predicting the next word based on the previous words. This approach is fundamentally different from how humans think and communicate, as we often plan our responses independently of the language we use to express them. There is also the problem of an exponential increase in the probability of generating nonsensical answers as the number of tokens in an LLM's output grows. Fine-tuning LLMs on specific tasks can help mitigate this issue, but we cannot fine-tune LLMs to cover the entire space of possible prompts during training. As a result, LLMs will always be susceptible to generating nonsensical outputs when presented with prompts that fall outside their training data. How, then, can we create more than a Human Simulacrum?
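The compounding-error argument above can be illustrated with a toy calculation. Suppose each generated token has some small, independent probability eps of derailing the output; then the chance that an n-token answer stays sensible is roughly (1 - eps)^n, which shrinks exponentially with length. The independence assumption and the value of eps are simplifications for illustration, not properties of any real model.

```python
# Toy model of autoregressive error compounding: if each token
# independently derails the output with probability eps, the chance
# that all n tokens stay on track is (1 - eps) ** n.
def p_sensible(n_tokens: int, eps: float = 0.01) -> float:
    """Probability an n-token output stays sensible under this toy model."""
    return (1 - eps) ** n_tokens

for n in (10, 100, 1000):
    print(f"{n:5d} tokens -> P(still sensible) = {p_sensible(n):.4f}")
```

With eps = 0.01, a 10-token answer is fine over 90% of the time, but a 1,000-token answer almost never survives intact, which is the intuition behind the exponential-growth claim.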
What about the ethical considerations surrounding the development of advanced AI systems? It is impossible to create an AI system that is unbiased and acceptable to everyone, as bias is inherent in the training data and the perspectives of those designing the systems. Instead, we must accept that all AI systems will be biased, and indeed they need to be if they are to be used successfully around the globe. The importance of open-source AI platforms and the need for diversity in the development of these technologies align well with the Catholic principle of the common good. By ensuring that AI is not controlled by a small number of companies or individuals, we can work towards a future in which the benefits of these technologies are more widely distributed and not used to exploit or harm the most vulnerable members of society.
The Church still treats AI like an edge case, a rare or unexpected situation that occurs outside of normal evangelical processes and exists only at the boundaries of human experience. That stopped being the case with AI and newer technologies some time ago. The markets tell us through valuations and investments what technologies are key and fundamental to our society. We have seen this through the few major industrial revolutions. The current one we are living in is a fully technological revolution playing out over the past decades and continuing unabated into the foreseeable future. There are no longer edge cases in our technological lives.
The Church should support the work and importance of open-source AI platforms to the extent possible. It will not be possible for all AI platforms to be open source, so the Church should strive to maximize the number of AI platforms and their constituent parts that are open. This will ensure diversity and prevent the concentration of power in the hands of a few companies. Bias will always be present in AI systems; it is inherent in the training data and in the beliefs of those designing and implementing them. Therefore, the solution lies in fostering a diverse ecosystem of AIs appropriate for different cultures, languages, and value systems. This is why we need a Catholic AI that can hold its own in a Reddit AMA.
In Laudato Si’, Pope Francis rightly critiques the "technocratic paradigm," which he describes as a reductionist view that sees technology as the key to solving all of humanity's problems. This paradigm, he argues, fails to recognize the intrinsic value of nature and the interconnectedness of all creation, leading to a misguided sense of human dominance over the environment. The technocratic paradigm warns us against placing too much faith in technology as a panacea for all of our problems. While AI may indeed have the potential to make humanity smarter and more capable, we must not lose sight of the inherent dignity of the human person and the value of human relationships and communities. As Pope Francis writes, "We have to accept that technological products are not neutral, for they create a framework which ends up conditioning lifestyles and shaping social possibilities along the lines dictated by the interests of certain powerful groups."
The pursuit of ever more advanced AI systems raises questions about the nature of intelligence itself and the unique role of human beings as created in the image and likeness of God. While I am less concerned about the existential risk posed by AGI, arguing that the development of such systems will be a gradual process, the Church teaches us to approach these questions with a sense of humility and respect for the mystery of creation. The path to AGI will be highly iterative, and the Church should place itself in the middle of that process, overtly engaging Catholic scientists and technologists as its leaders and participants.
As Catholics, we are called to engage with these developments in a spirit of discernment, seeking to harness the power of technology for the common good while remaining grounded in our faith and our commitment to the dignity of every human person. By bringing the insights of Laudato Si and the technocratic paradigm to bear on these conversations, we can work towards a future in which AI serves the authentic development of humanity and the flourishing of all creation. AI should not help us create God; it should advance and enrich the ways we worship him and evangelize our fellow humans.