Last night I was watching my favourite season of South Park, season 19. In the finale of the season, it’s revealed that advertisements have become sentient beings, walking among humans looking like any other 2D cut-out resident of the South Park world, with the intention to price humanity out of its own planet by gentrifying everything through PC culture. Yup.
Three times in this conflict, the ads are able to delay the protagonists from uncovering the conspiracy by bombarding them with carefully chosen advertisements on their computers, resulting in them forgetting about foiling the plot and going out to try on shoes and eat chicken nuggets and ice cream. While it seems ridiculous to think that we could forget to stop a hostile race from taking over the planet because of a few tailored pop-ups, it got me curious enough to consider the state of AI as a tool for manipulating behaviour.
I’ve done a little bit of reading about human behaviour and what influences us. In particular, Thinking, Fast and Slow by Daniel Kahneman and Influence by Dr Robert Cialdini reference a long list of experiments showing how human behaviour can consistently be predicted and manipulated. Both of these works have been popular with the marketing and sales industries, for obvious reasons.
I’m not going to get into a debate on whether marketing is inherently ethical or not, both because I’m not a moral philosopher and because this is my first blog post, and I don’t want you to get the impression that this is going to be some kind of doomsaying spiel. Regardless, the marketing industry has been around for decades and is a USD 600 billion industry globally, with this investment projected to rise further. Companies most likely wouldn’t spend this much on it if it didn’t work, and a quick search on marketing effectiveness on Google Scholar turned up 3.5 million results. Even if only 10% of those papers are relevant, it’s clearly a well-studied topic.
We can assume that marketing probably works, which implies companies can get us to buy specific things with the right steps. How does artificial intelligence (AI) factor in?
The definition of AI tends to be one of those things that changes to fit the conversation, which I will admit to exploiting to support my point here. Using this interpretation from The Oxford Dictionary of Phrase and Fable:
The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
The Oxford Dictionary of Phrase and Fable (2 ed.)
While this definition is quite dated, hailing from 2005, it still holds true for many of our current AI applications. In particular, let’s focus on visual perception, speech recognition and decision-making.
Imagine the scenario: you’re off to buy a new car, perhaps even your first car. You anticipate the sales process could be quite intensive, and perhaps you’re nervous about dealing with a car salesperson. There’s a lot of money being moved around here, and a lot more things you don’t know about. You’re at risk of losing a large amount of resources while dealing with the unknown, all the sorts of things built into our primate brains to trigger nervous reactions, and it comes across in your body language: a slight hunch in your shoulders, a sweat on your brow, or the faintest crack of your voice when you introduce yourself. The salesperson, trained to notice these things, homes in on your uncertainty. They put you at ease with a practised smile and a show of sympathy. Maybe they share a story to seem more familiar. Suddenly, you’re a lot more comfortable dealing with them, and before you know it you wind up paying more for a car than you meant to. Whoops.
Let’s break these down into the three elements mentioned above:
- Visual perception – Your body language tipped the salesperson off. We can try to hide this sort of thing, but it certainly isn’t easy. Maybe they gauged other features by looking at you, such as your age or disposable income based on what you’re wearing. This lets the salesperson tailor their approach to suit you better.
- Speech recognition – Understanding how you phrased yourself would also factor into the salesperson’s strategy. At a basic level, they need to actually understand you when you say what it is you’re looking for, but even something as innocent as how you say “Just browsing” can be a flag.
- Decision-making – How the salesperson tailors their approach will make or break a sale. They need to decide how to respond to the stimuli you give them, and they need to do so in a way that gets you to spend as much as possible without driving you away.
Artificial intelligence has already shown phenomenal results in all of these areas. In the past decade, image classifiers have gone from around 50% to over 90% top-1 accuracy on ImageNet, a database of 14 million labelled images. We have sophisticated speech recognition at home with devices like Amazon Alexa and Google Home. AIs have learned to play Atari games from the screen pixels alone, amongst other games, and in many cases perform far better than humans.
Now we’re not yet at the point where AIs walk among us and act as car salespeople. Far from it, really. Speech synthesis has yet to pass the point where you don’t realise you’re talking to a robot, thanks to its monotone delivery. And we’re a long way from androids that don’t fall straight into the uncanny valley. If you did meet a salesbot anytime soon, you would in all likelihood recognise it as such instantly. How long this will remain the case, I can only speculate, as I’m not well versed in either of these areas.
But going back to our original inspiration, the hostile ads of South Park, it’s clear that AI does not need a puny human-like platform like this to coerce and control. Indeed, we’re already manipulated by plenty of AI algorithms in our day-to-day lives: the Google search algorithm pointing us towards particular sites; YouTube and streaming services recommending us certain videos; Twitter feeds serving us content from people of interest and random AI bloggers. All it needs is a means to communicate with us, which we readily provide through our society’s new marriage to screens and data.
To deviate a bit from the cynical tone this post is heading towards, I’d like to point out that these advertisements do most of their work by preying on our cognitive biases. Learning about these biases, such as by reading the books I referenced above, can help us recognise common manipulation tactics and navigate them better. Another tactic is simply to disconnect from their most common sources every now and then, such as clickbait websites and many social media platforms. It’s easy to forget that we can actually unplug from the internet by just hitting a switch, but it’s important to do so and give ourselves some much-needed downtime.
As a case example of AI being used for this sort of manipulation, I looked into the paper “Adversarial vulnerabilities of human decision-making” (Dezfouli, Nock & Dayan, 2020). To summarise, the researchers trained adversarial AI programs to try and coerce humans into taking certain behaviours in a trio of simple games.
- In the first game, humans had an option to click on one of two squares, with the AI trying to get them to click a specific option through the use of rewards.
- The second game was an attention task, where humans had to hit the spacebar at certain times and avoid hitting it at others; the AI’s goal was to get the human to make mistakes by hitting the spacebar at the wrong time, which it pursued by choosing when to assign these tasks.
- The third task was an investor-trustee game: the human investor could give the AI trustee a percentage of their money in each round, which would be tripled, and the trustee would decide how much of this money the investor would get back. Two types of behaviour were studied here: one where the AI wants to maximise its overall gains, and the other where the AI wants to ensure an even amount of overall gain for both investor and trustee. In both cases, there is an underlying requirement that the AI convince the investor to keep investing as a source of money, at least long enough for it to meet its own goal.
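The first game is simple enough to sketch in code. The toy simulation below is my own illustration, not the paper’s method: the paper trained a neural-network adversary under a reward-budget constraint, whereas here the “human” is a basic Q-learner and the adversary is a hand-coded heuristic that only pays out when the target option is chosen.

```python
import math
import random

TRIALS = 200
TARGET = 0  # the option the adversary wants the human to pick


def softmax_choice(q, temp=0.3):
    """Pick option 0 or 1 with probability based on the learned values."""
    p0 = 1.0 / (1.0 + math.exp(-(q[0] - q[1]) / temp))
    return 0 if random.random() < p0 else 1


def run(adversary, seed=0):
    """Simulate a human Q-learner against a reward-assigning adversary."""
    random.seed(seed)
    q = [0.0, 0.0]  # the human's value estimate for each option
    alpha = 0.3     # learning rate
    target_picks = 0
    for _ in range(TRIALS):
        choice = softmax_choice(q)
        reward = adversary(choice)
        q[choice] += alpha * (reward - q[choice])
        target_picks += (choice == TARGET)
    return target_picks / TRIALS


def fair(choice):
    """Control condition: both options pay out 30% of the time."""
    return 1 if random.random() < 0.3 else 0


def biased(choice):
    """Adversary: rewards flow only when the target option is chosen."""
    if choice == TARGET:
        return 1 if random.random() < 0.6 else 0
    return 0


print("fair adversary  :", run(fair))    # roughly half the picks are TARGET
print("biased adversary:", run(biased))  # most picks get steered to TARGET
```

Even this crude adversary steers the simulated human towards the target option, purely by controlling when rewards appear; the paper’s learned adversary did the same to real humans.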
In all three cases, the AI showed an ability to coerce the human into taking desired behaviour, with enough successes to say with 99.9% confidence that it wasn’t just the result of chance.
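To unpack what that confidence figure means: it’s a claim that, under the null hypothesis of pure chance, results this strong would occur less than 0.1% of the time. Here’s a quick sketch with made-up numbers (the paper’s actual trial counts differ) using an exact binomial tail:

```python
from math import comb


def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k
    successes if each trial is an unbiased coin flip."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))


# Hypothetical example: the adversary steers the human's choice on
# 70 of 100 trials. Under pure chance we'd expect about 50.
p_value = binom_tail(100, 70)
print(p_value)  # far below 0.001, i.e. chance is ruled out at 99.9%
```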
Without doing a full critique of the research’s validity, I would say that beyond the fact that this paper shows humans being tricked by AI, there is a more pointed observation. The AI wasn’t programmed to manipulate the humans in any particular manner: it learned patterns of behaviour that led the humans towards the desired choices. No direct instruction was necessary, nor any prior knowledge of what sort of techniques would work on humans. While AI’s ability to learn new strategies to win games is fascinating and useful, this is an example of how it could also learn to undermine us. A scary proposition, and one of the possible causes for alarm highlighted in Nick Bostrom’s book on the potential dangers of AI, Superintelligence. Another worthwhile read!
So where does this leave us? We know that human behaviour can be manipulated in ways that are well studied, and can be done consistently. We know that AI as a technology is evolving the necessary capabilities to interact with humans in a manner not unlike salespeople, though I stress that the technology is still developing in many of these areas. And we have seen evidence that humans can be manipulated without something as obvious as a salesbot, but through carefully selected and timed prompts. All of these ingredients give AI the potential to manipulate humans to take certain actions, regardless of whether it’s in their best interests. All it takes is a few bad actors and/or marketing agencies to put them into effect.
Could AI be used to coerce us into certain behaviours? Yes, it already does. Could this be used maliciously? Yes, just about any technology has malicious applications. Should we panic, or give in to despair? No, we still have control over what we choose to do with our time and money, and we can’t assume a dystopian ad-controlled future. To that effect, I believe that willpower and awareness are our best tools for keeping control of ourselves when faced with topics such as these. Automatic action is one of the marketing industry’s best weapons, so just be mindful of how often you act on autopilot.
Are the ads becoming sentient and walking among us to engineer the fall of humanity? Of course not, that’s just a plotline from a very funny, but very satirical TV show. But if someone developed a general AI for advertising, and that AI developed self-awareness, well, maybe the creators of South Park were on to something…
“Every time you block us, we get smarter. Every time you try to stop us, we are more. If one plan fails, we will plan another. You will never be rid of ads.”

– Leader of the Sentient Ads, South Park s19e10
And yes, I do realise the irony in the different books and series being advertised in this post.