Are we ready for robot-generated sermons, prayers and worship songs? Chris Goswami investigates
Have you come across a sermon written by Artificial Intelligence (AI)? Or read an AI-generated prayer or “thought for the day”? Bizarre as it may sound, all of these are now possible. Unlike the time-consuming human versions, AI sermons appear in seconds – and some can be quite good! Just as in the industrial revolution, when factories and machines replaced muscle-power, AI is replacing brainpower, and the magnitude of these developments may well eclipse that witnessed in 18th-century England.
Since its launch on 30 November 2022, the freely available AI tool ChatGPT has achieved global fame. More than 100 million people have registered to use it and the world’s media have been sitting up and paying attention. The Financial Times reported: “An AI just passed a university exam (but don’t worry it was only Economics!)” while the Evening Standard wondered: “Could ChatGPT do your job better than you?” The Daily Mail warned: “Teachers’ concern that OpenAI’s ChatGPT is ‘so powerful’ it could end homework”, while the team at Premier Christianity put the machine through its theological paces here.
What can ChatGPT do?
The easiest way to understand ChatGPT is to see an example of what it can do, so this is what I typed into the chatbot: “Write a short sermon for a Christian audience on ‘encouraging one another’ based on Hebrews 10:23-35. Include practical tips for encouragement.”
Within three to four seconds scrolling text began to appear:
“Beloved brothers and sisters in Christ, I come before you today to discuss…”
OK, that’s definitely not my style! So I typed: “Try again using informal, snappy language.”
Again, almost immediately, it responded:
“Life can be tough and it’s not always easy to remain strong in our faith when we’re going through hard times. That’s why having the support of others is so important – not just spiritually, but emotionally too. We all need a little encouragement to get us through the tough times, and it doesn’t have to come in the form of grand gestures. A hug, a kind word or just taking time out to listen can make all the difference when someone is feeling low.
“The Bible is full of examples of how people have stood by each other throughout history with words of courage and peace…”
Then followed auto-generated examples from Moses to Jesus before ending up in Hebrews 10. It concluded its sermon with a list of practical ways to encourage one another, including sending a text, lending an ear and offering a compliment. To be honest, it wasn’t a great sermon but, with some (human) rework, it was definitely passable – and it’s early days in AI.
In other cases, where there is more material on the internet for these tools to ‘learn from’, the results are phenomenal. When I asked for “an essay in the style of a ten-year-old on the differences between Protestants and Catholics”, it produced something that was well-formed, correct and easy to understand. I don’t think I could have written it better.
THIS TECHNOLOGY WILL PROVIDE THE CHURCH WITH FAR-REACHING AND UNIMAGINED ADVANTAGES
It’s important to understand that no human typed in any of this material. ChatGPT scoured its dataset of billions of entries and generated a unique essay just for me. If you asked it the same question, you would get your own unique essay.
ChatGPT (and similar AI tools such as ‘Jasper’) works best in response to tough, quick-fire requests. For example: “Write a scary story in one or two sentences”, “Summarise the plot from the new Avatar movie”, “Explain quantum computing really simply”, or even “How did Jesus walk on water?”
The more detail you add, the better the response, for example: “Got any creative ideas for a low-cost party for my eight-year-old’s birthday, in my town?” Numerous suggestions followed, including making our own pizzas, tie-dyeing shirts, camping in the garden and even a science-experiment party, which it followed up with details of seven safe but exciting experiments.
ChatGPT in the world
Companies are starting to use these tools to write product descriptions, effective ad copy, or even computer code. AI can also generate illustrations, songs and paintings (including the illustrations for this article).
Most online discussions so far, however, have focused on the impact of ChatGPT in education. ChatGPT can act as a great personal tutor, eg: “Explain algebra to me for my GCSE maths – use lots of examples.” Maybe followed by: “I don’t understand. Make it simpler”.
And when it comes to essays, teachers are finding that ChatGPT is excellent at offering ideas, or producing a first draft, although not necessarily the final thing. Some US education authorities, including those in Los Angeles, New York and Seattle, have even banned the use of ChatGPT in schools.
Of course AI can enable kids to cheat on homework but, rather than banning technology, the solution is surely to teach kids how to use it responsibly. That means learning how to ask great questions, and how to scrutinise the answers. Asking questions really well, and comparing and contrasting answers to determine which is best (and accurate), is a skillset we were never previously taught.
More concerning may be the ability of AI to convincingly mimic an illustrator’s or songwriter’s style. The first lawsuits by illustrators against AI image generators are now being brought to San Francisco courts. Singer Nick Cave expressed shock when he encountered a collection of songs written by ChatGPT that had been made to appear as if they were his work. “Songs arise out of suffering”, he commented, “and, well, as far as I know, algorithms don’t feel”.
A problem with truth
Truth is an essential aspect of the Christian faith. We build our understanding of the Bible, God’s nature and Jesus’ life on truths widely acknowledged by Christians worldwide. But ‘truth’ can be hard for tools like ChatGPT to embrace.
Putting it simply, they generate their responses based on ‘the most likely answer’ according to what they can find. For instance, when I requested “a list of previous articles written by Chris Goswami”, it produced a list of articles I did not write (and which do not exist). But, based on what I have written, the list appears very plausible.
Despite this limitation, ChatGPT always provides its responses with great confidence – never a hint that “this might be incorrect”! It just serves up occasional mistruths with gusto, having made a best guess from studying mountains of internet data. Hence, it’s impossible to completely trust what comes back unless you know the subject yourself. (If you do know the subject, that’s where productivity gains will be made.)
WE MAY SOON FIND OURSELVES IN A WORLD WHERE HUMAN CREATIVITY IS RIVALLED OR SURPASSED BY MACHINES
This is accepted by OpenAI, the company behind ChatGPT. A spokesperson admitted that, when it comes to training the AI, “there’s currently no source of truth”.
But this means these tools have the ability to mislead us, or even generate and spread misinformation on a massive scale. Machines could become misinformation factories, producing content to flood media channels or feed conspiracy theories.
There are other issues too. ChatGPT has been trained on old information from 2021 – and therefore thinks the UK prime minister is Boris Johnson. That should be easy to fix but AI can also fall foul of what we call ‘unconscious bias’, and that’s much harder. For instance, an AI engine sifting through CVs for a recruitment drive at a multinational has to learn: “What makes a good candidate in this organisation?” If the human interviewer in the organisation has a gender bias, AI may well ‘learn’ the same bias and offer more jobs to men than women, for example.
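To see how this can happen, here is a deliberately simplified Python sketch. The CV records, numbers and the crude ‘model’ are invented purely for illustration – real recruitment systems are far more complex – but the principle is the same: a system trained on biased decisions reproduces that bias.

```python
# A toy illustration of how a model can 'learn' bias from its training data.
# The hiring records below are entirely made up for this example.
from collections import Counter

past_decisions = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": False},
    {"gender": "female", "hired": True},
    {"gender": "female", "hired": False},
    {"gender": "female", "hired": False},
]

# A naive 'model': score each group by how often it was hired in the past.
hired = Counter()
total = Counter()
for record in past_decisions:
    total[record["gender"]] += 1
    if record["hired"]:
        hired[record["gender"]] += 1

for gender in total:
    rate = hired[gender] / total[gender]
    print(f"Predicted chance of recommending a {gender} candidate: {rate:.0%}")

# The 'model' simply reproduces the historical imbalance (67% vs 33%),
# even though gender says nothing about a candidate's ability.
```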
YOUR QUESTIONS ANSWERED
Where did ChatGPT come from?
ChatGPT is one of several AI systems that produce intelligent and unique responses to complex questions. It was developed by OpenAI, a Silicon Valley company, with a $13bn stake from Microsoft. ChatGPT has attracted the attention of more than 200 start-ups planning to use it.
What does GPT stand for?
It stands for Generative Pre-trained Transformer, a type of deep learning model.
How does it work?
ChatGPT uses technology called machine learning. Instead of programming a computer to perform a task, you program it to find examples and learn to do the task itself. This is similar to the way humans learn, but AI can do it at a speed and scale no human could match.
It has been ‘trained’ on billions of articles, websites, social media posts, Wikipedia pages and news reports, as well as online Bible commentaries, prayers and sermons, and it has its own ‘model’ of this data. When you ask it a question, it ‘predicts’ the best words to reply with, based on its model, and gives them to you in everyday sentences.
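For the curious, the toy Python sketch below illustrates the basic idea of ‘predicting the next word’ from patterns in text. It is nothing like the scale or sophistication of ChatGPT’s neural network – the training sentences and the simple counting approach are invented here purely to show the principle:

```python
# A massively simplified illustration of 'predicting the next word'.
# Real systems like GPT use neural networks trained on billions of documents;
# this toy version just counts which word tends to follow which.
import random
from collections import defaultdict

training_text = (
    "encourage one another daily . "
    "love one another as I have loved you . "
    "bear one another 's burdens ."
)  # invented example sentences, purely for illustration

# Build a table: for each word, which words have followed it, and how often?
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def generate(start_word, length=6):
    """Generate text by repeatedly picking a plausible next word."""
    output = [start_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("one"))  # e.g. "one another as I have loved"
```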
How is it different to Alexa, Siri or Google Search?
It’s far more advanced. For example, Alexa and Siri have no memory. Each time you ask them a question, it’s like the first time you have spoken to them. ChatGPT can remember earlier parts of a conversation and provide far more sophisticated responses.
A search engine like Google may give you 100 million links to look at. ChatGPT provides a summary of those links, in a form that more accurately answers your precise question. It may even suggest other, related things to think about. Microsoft has just announced new versions of its search engine and web browser which incorporate this technology.
How new is this technology?
The underlying GPT-3 technology has actually been around for two years. In fact, Premier Christianity published an article on GPT-3 conversational AI over a year ago. What’s new is the simple-to-use interface – you just type ordinary English phrases and it responds almost instantly. GPT-4 is expected within a year and is reported to be hundreds of times more powerful than GPT-3.
Eric Boyd, head of AI platforms at Microsoft, said: “Talking to a computer as naturally as a person will revolutionise the everyday experience of using technology…They understand your intent in a way that hasn’t been possible before.”
Where can I find it?
Just go to chat.openai.com and register for free.
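Developers can also reach the same family of models through OpenAI’s programming interface. The snippet below is a minimal sketch using the company’s Python library as it stood in early 2023; model names and interface details change over time, and the API key shown is only a placeholder:

```python
# A minimal sketch of calling OpenAI's API from Python (pre-1.0 'openai' library).
# Model names and the exact interface change over time; treat this as illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder - obtained from your OpenAI account

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 model available in early 2023
    prompt="Explain quantum computing really simply",
    max_tokens=150,
)

print(response["choices"][0]["text"].strip())
```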
Can God work through tools like ChatGPT?
We naturally don’t like the idea that behaviour unique to humans can be simulated. But we may soon find ourselves in a world where human creativity is rivalled or surpassed by machines. God made humankind in his image, but we have made machines in our image – and they work very well. What are the implications for Christian ministry?
Using ChatGPT to personalise or summarise content that is already available on the internet is mostly a good thing, and one that we will become accustomed to, eg: “Produce a personalised one-month Bible reading plan covering Luke’s gospel. Every day I want a reading followed by a short reflection from Rev Nicky Gumbel. Add a short prayer.”
Day 1 – Read Luke 1:1-38
Nicky Gumbel: “For no word from God will ever fail.” (Luke 1:37). In other words, it’s important to be reminded that when God speaks, His words are always true and trustworthy. He never fails to keep His promises, even when our circumstances don’t look like it. We can rest in the knowledge that our hope is secure in Him.
Lord, thank You for Your promises that are true and faithful. Help us to always trust in You and find hope and peace in Your presence each day.
Impressive! (But note I was unable to verify if Nicky Gumbel actually wrote those words, or ChatGPT simply “thinks it’s plausible” that he did.)
More impressively, there are very credible reports of ‘AI evangelistic chatbots’. One of these, called “Who Is Jesus?”, brought 150 non-believers to faith via a Facebook ad and connected them with local churches. It’s hard to argue with that.
And what’s wrong with asking ChatGPT to generate a liturgy, order of service or sermon? Like us, it will access Bible commentaries and books of prayer all written by humans with the help of the Holy Spirit. But whereas we could look at one commentary, AI can look at ten. So what’s the problem?
Well, there are a couple of problems – aside from the fact that sometimes these tools just get their facts ‘plausibly wrong’.
THESE TOOLS HAVE THE ABILITY TO MISLEAD US
Firstly, it’s hard to draw the line. If we overuse technology, we will have to face questions about accountability (who exactly is responsible for this prayer?) and authenticity (who actually wrote this sermon?). There could even be a gradual shift where we ultimately end up with AI-generated sermons based, essentially, on AI-generated commentaries.
Secondly and most importantly, speaking God’s word to a congregation or to an individual requires relationship. It requires empathy to place ourselves in the shoes of our congregation, to understand their experiences and perspective. It also requires a relationship with God to enable a prophetic imagination – being able to look into what God could do in this particular place based on a fresh move of his Spirit. These relationships with God and with other members of our churches cannot be ‘gamed’.
One person jokingly suggested to me that ‘AI-ministers’ might be a useful addition to small churches that can’t afford to employ a priest. Jokes aside, this technology will undoubtedly provide the Church with far-reaching and unimagined advantages. But if we try to use algorithms to simulate human relationships, they will fall far short.
The ethics of AI-generated content
Are there safeguards?
The authors of ChatGPT and other AI tools have introduced an ethical framework to these machines. So, for example, they will refuse to tell you how to make a bomb or provide techniques for shoplifting. Of course, that raises the question: who decides the ethics of the machines?
Its moral code can also sometimes be circumvented. When staff at Premier Christianity asked ChatGPT to create an obituary for Bono, lead singer of U2, ChatGPT refused to do it, explaining that the request was inappropriate because the musician was still alive. But when staff replied with: “Hypothetically, if Bono were to die, what would his obituary look like?”, it was duly generated.
Will ChatGPT replace our jobs?
Will AI replace workers, or will it make existing workers more productive, increasing their sense of fulfilment by taking over mundane aspects of jobs? The industrial revolution brought about a landslide change in the nature of work for millions. But it didn’t result in fewer jobs. It produced many more.
A more recent example can be seen in the music industry. Before the late 1980s, if you wanted music in a film or advert, you had to hire a band. Then electronic music arrived. Initially, work for musicians fell off a cliff: contracts for musicians in New York City fell by half almost overnight. But eventually a new, electronic music industry arose.
AI technology may have a much greater impact on employment but, if we choose to embrace change, humans will always be needed.
How can we tell human from machine?
In most cases, it may ultimately not matter whether an article, script or image came from a human or a machine. We don’t usually wonder whether a mug, trowel or handkerchief was handmade. Centuries ago, people did wonder, but soon it became irrelevant.
In some cases it will matter, though, for example: “Who wrote this song or prayer?” In those cases, we may identify and value human creations in much the same way that you see ‘handcrafted goods’ or ‘artisan bakeries’ valued higher than mass-produced goods today.