In every practical sense our society is more diverse and more full of discourse than ever, but debate around AI has a Hollywoodesque tendency to frame it as a binary: (a) it will save us, or (b) it will lead to humanity’s downfall. That means we miss out on the real conversations. Spoiler alert: the answer is (c) neither. Like everything else, it’s a bit more complicated than that.
The wrong debate
Having created limitless social media and rolling news, we have to fill it, and a bit of debate is the easiest way to do that: you get someone in to explain that, for example, climate change is a real and proven threat, then to stretch that into something more meaty you get someone else to say climate change isn’t a thing and you should stop scare-mongering. It fills the space with very little effort. People start shouting at the screen more, or commenting IN ALL CAPS, and apparently that’s audience engagement. Content generated. Job done.
The first problem is that it gives the impression opinion is split 50/50. With a few notable exceptions, it generally isn’t. Consensus abounds. Climate change is a good example: the evidence is robust, clear and abundant, with only marginal debate about how bad it will be. It’s just a more interesting article/media post/TV segment if there’s a bit of contention, so you search the recesses of the internet and give half the space to someone prepared to say otherwise. If you’re irresponsible enough.
But that’s not the only problem.
Pick a team
Scale things up to the entirety of the culture wars and it’s as if everyone already picked a team (there are only two, did you know?) and you missed the big start-of-term dinner where they got out the shouty-opinions sorting hat. If you’re pro-Trump you also have to be a climate change denier and anti-abortion. If you’re pro-Harris, you have to be a Marxist snowflake enemy of family values. Or something. Honestly, it’s hard to keep up.
I’m getting to why this matters with AI. Stay with me.
In reality, this isn’t how the world works: you don’t have to oppose a person because they don’t share your opinion on one red meat topic. For a start, they probably still share a lot of your values and opinions. Your parents voted Brexit, you didn’t, and you can still have a nice cup of tea and agree that the extreme weather events are a worry. One of you might even (gasp) change your mind.
And there are some things on which, honestly, you don’t have an opinion at all. I have no opinion on the music of Coldplay. No strong view. I haven’t listened to anything other than the big hits, so I don’t think I’m qualified to judge. Good luck to them. Let people who’ve listened, read or thought about it be the judges. Leave it to the experts.
Heroes and villains
So whatever Marvel want you to think, you already know the world doesn’t really divide into goodies and baddies. Everything is more complicated than that.
Except I’ve seen the movies, and AI was cast as the villain almost from the moment the concept emerged. I made a list with two columns. On the ‘AI is going to kill us all unless a plucky human hero steps up’ side we have all the Terminator movies, the Matrix trilogy of four movies*, 2001: A Space Odyssey, I, Robot, Blade Runner, WarGames… (I ran out of space for more). In the ‘AI could really help us out’ column I’ve got WALL-E, Short Circuit and Her. And the last two are, let’s be honest, problematic in other ways.
So which is it?
AI isn’t good or evil. Of course. It just is. It’s a tool, and like any tool, the impact depends on how we use it. AI can help us solve complex problems, make things quicker and easier, and make things more fun. But it can also perpetuate biases, invade privacy, and be used to cause harm. We’re a little way off the cinematic doomsday scenarios, but deepfakes, non-consensual porn, disinformation and fraud are far from victimless crimes.
AI is a reflection of us, and we’re complicated. It learns from the data we give it, and it acts on our instructions. If we feed it biased data, it will produce biased outcomes. If we design it with harmful intentions, it can cause harm.
But our survival instinct is powerful: NATO’s AI strategy gives you a pretty good idea of how alert everyone is to the risks, and how they’re managed. Our advice is similar to theirs: let AI analyse and inform and help create things, but don’t let it make decisions that you are accountable for.
Don’t be Skynet
Have an AI policy that suits your principles and values. You can do that this week. Ask us if you want help. Model the approach you want. Champion it. Teach yourself, then show people how to enjoy AI and keep data secure. Understand how your AI is connected and ring-fence it if you need to. Give your AI clear, defined tasks that make jobs easier. Don’t bet the farm on it. Don’t create single points of failure: you still need to know how to do your job.
You might find there are jobs your AI can replace, and that’s scary, so think about how you talk to teams about it. You can also create new jobs, take on more work, upskill your teams and make them happier and more likely to stay.
Skynet is not inevitable; it’s just easier to pitch a movie about that than one about using AI to help social workers help kids.
* Probably the closest to a convincing AI doomsday scenario, where the AI realises that the water and oxygen needed to run all those data centres make humans their main competitor in the fight for survival. But as long as androids still walk funny and megalomaniacs don’t want to cede control of their big guns, I’m not worried.
We build private AI tools that let you keep things simple; private GPTs trained on content you curate, giving you answers your teams can rely on, and starting you on an AI change programme you can manage. We charge a one-off, affordable build cost and a single, fixed monthly cost for your whole organisation. Email us at info@engine-ai.co.uk.