The following is an article I wrote first for our staff team, though I think it is an important issue for all of us, so I'm including it here for those that might be interested. - Tim Webster
Ministry and the Danger of Artificial Intelligence
"I'm moving to Canada."
A few years ago our campus came unglued when Hillary Clinton lost to Donald Trump in the presidential election. Some classes were canceled. A group of students camped out in a campus building for days, afraid to come out. I spoke with a student who was, at first, only mildly disappointed. She went to a campus listening session for consolation. What she got was quite the opposite: she heard how truly horrible it all was and was upset for days.
What about us? Were we prepared with any perspective from Jesus? Were we able to serve our campus practically in a time of need? We did muster a belated response (helped by a local church), but sadly, we mostly missed the opportunity. It could have been different. We had a pretty good understanding of our campus. With a little forethought, discussion, and prayer we could have anticipated what happened and been ready with something helpful.
How much of our Christian lives do we spend reacting rather than anticipating? How often are we caught up in some social trend without much more insight than anyone else?
I have a similar feeling about artificial intelligence. Something important is happening, and we should put some effort into getting out in front of it. I am likely to look back on anything I write here about AI with embarrassment, but I'll take a stab at a conversation starter anyway.
Artificial intelligence has been with us for a long while. Four decades ago I was writing simple AI algorithms to make my computer games more realistic. I felt pretty smug when I beat a friend's 3D checkers game (once, anyway ‒ my friend had programmed it to always take the best move, so its moves were predictable). Speech-to-text, big-data algorithms that market to us, self-driving cars, robotics systems ‒ aspects of AI have already been incorporated into our lives. If we think it is just about Skynet, HAL, and chess programs that beat grandmasters, then perhaps it is easy to ignore. I've heard things like robotic vacuum cleaners described as a technology in search of a use ‒ in short, an irrelevant novelty.
"Self-driving cars just crash."
Advances made public in the last few months, however, have changed the perception of AI's incompetence. Siri and Google Assistant may have been the butt of jokes, but ChatGPT actually works. It works, for any of us who have dabbled in AI or know a bit about coding, astonishingly well. So well that companies are falling over each other to avoid being left behind. Microsoft's Bing Chat and Snapchat's AI scooped Google, whose stock started dropping until it recently released its own version, called Bard. "Prompt writing" is now a must-have skill for tech job applicants. Developers are rushing out apps and plug-ins that take advantage of AI APIs. IBM is said to be cutting jobs in anticipation that AI will replace them. Just a few weeks ago I set up an OpenAI account, and now hardly a day goes by when I don't make multiple AI queries.
Some are sounding alarms. Vice President Kamala Harris just hosted a meeting of tech CEOs to discuss safeguards. Elon Musk, with a who's-who list of AI signatories, is calling for a pause on development. Time featured an article that makes the annihilation of the human race sound plausible.
"It's OK, Dad, they are just shooting robots."
That's a line I got from my youngest son when I caught him watching a (forbidden) violent cartoon. Violence sells, of course, and the catharsis seems less morally objectionable the more the bad guys are dehumanized and bloodless. That, and bad guys who aren't us are so much easier to deal with. Is AI a dangerous bad guy?
Much of what I have read focuses on the inherent dangers of AI, the ways it could "get out of control." Like other sci-fi buffs, I've read many of those stories about machine intelligence, but I frankly don't know what to make of them. Since we hardly understand our own cognition, it is hard for me to conceive how we could stumble into building something utterly alien and inherently superior. It's far more likely to be something sub-human. From what I've seen, artificial intelligence is still artificial. It is a simulation of intelligence; it is a tool. It is not the inherent danger of the tool that concerns me so much as the danger from those who wield it. In other words, it is the human element. This is an element we deal with every day in much less esoteric contexts. Because of this, the discussion should first be a discussion about morality.
Perhaps it makes sense to break the danger down into two categories, both of which call for the kind of moral wisdom that the scriptures and the wise elders among us are so well suited to provide.
I'd say the first danger is stupidity. AI will be "out of control" when we blindly fail to anticipate the unintended consequences of its use. Personally, I tend to be one of those early adopters who embraces new tech for the fun of it. One of my friends pointed out the Gartner Hype Cycle to me, which is a useful way to describe adoption (moving through overblown hype, then disillusionment, and eventually a more stable integration). While overwrought fear and blithe dismissal won't help us much, those are not our only options. Wouldn't it be nice if our collective will were strong enough to soberly assess whether and how a new tool fits within our goals and values ‒ using it or rejecting it accordingly ‒ rather than becoming slaves to it? My sister's Mennonite neighbors made a regular practice of this, building a strong and God-honoring community with wisdom and restraint. There are so many examples of the negative side effects of our society's rush to embrace new technologies that I hardly need to elaborate. Here are just two questions we can ask ourselves regarding the widespread implementation of any technology or policy: (1) how will this affect the most vulnerable among us? and (2) how might this disrupt the crucial intergenerational link between wise elders and the young? Here I've touched on the 5th Commandment and passages like Psalm 82, but so many other scriptures are relevant.
Stupidity is often arrogant, willful stupidity that rationalizes self-interest and morphs into the second danger: evil. AI is already a tool that people are using for evil. The bots that attempt to manipulate public opinion (or, say, product reviews) are becoming more sophisticated. Unscrupulous students have new ways to cheat. It would be short-sighted, however, to focus solely on the abuses that relatively powerless individuals can make of the tool. The rich and powerful can use AI to manipulate, oppress, and abuse others on a much larger scale. I think the rise of populist politicians like RFK, Jr., and of populist alternative media figures like Russell Brand or many Substack journalists, is in part a reaction against the growing technocratic control and abuses of the powerful in governments and large corporations.
C.S. Lewis was right to point out the essentially moral nature of our abuse of science and technology. Power is intoxicating. In The Abolition of Man he writes, "what we call Man's power over Nature turns out to be a power exercised by some men over other men with Nature as its instrument." If you haven't read The Abolition of Man recently, now might be a good time. Just substitute "transhumanism" for "eugenics" and "surveillance and media manipulation" for "propaganda" in Lewis's text, and his arguments suddenly become fresh and relevant.
What does AI have to do with ministry?
For starters, our proclamation of the gospel should speak to our cultural moment and the various cultural forces that bind us in sin and confusion. Jesus has good news for people stuck in sin and confusion. As I wrote at the beginning, I do think the scriptures give us what we need to get out in front of the dangers of this new trend. We don't have to be AI scientists to understand how humans abuse any new powerful tool they have been given. We don't have to be psychologists to understand how fear distorts our view of God or how it can be used to manipulate us. We need to seek out those whose wisdom brings us back to humility, as well as those whose courage strengthens us to face our fears. Two public figures in science who have reminded me of these things this week are Brian Roemmele and Jay Bhattacharya.
Secondly, I think some of us should learn AI. Not every tool is for everyone, but as some wise people master a powerful tool and use it for good (and AI has a tremendous potential for good) they can have an important role in helping the rest of us. Learning about AI will help us with the enormously consequential task of understanding forces in the world around us. As Kingdom citizens first, we should seek to understand the worldly influences that form and seek to shape us, avoiding their dangers and building counter-cultures where necessary and possible. When we've got our own house in order, we will have the wisdom and credibility to help others.
"Oh no, dad's talking about ChatGPT again."
My kids chide me for my enthusiasm over ‒ well ‒ whatever I happen to be enthusiastic about that month. I like to talk about ideas, and I like to share what I'm discovering. As much as I would have liked to talk about effective prompts or my latest translation tool, however, I needed to pause and think about the topic of this little article first ‒ to orient myself properly and consider a spiritual perspective.
It is in that spirit that I offer this reflection. AI has potential for good or ill. We should pay attention. The Spirit will equip us. Let's talk more.
This image is mostly generated by AI (Bing Image Creator in this case) as an experiment. The prompt I wrote for it was (a somewhat longer version of) "an angel and a demon are back-to-back in the pose of Rodin's 'The Thinker' sculpture with a circuit board background." It was an idea I had to visually depict both the threat and promise of AI.