How to spot — and resist — digital misdirection
What did you feel when you heard about this (1)?
Or when you saw this (2)?
I can tell you how I felt: outraged. And, in all these instances, I felt compelled to share my outrage on social media. And I did.
Never has the world felt like a more complex, less calm place. Many of us are riddled with pandemic-induced anxiety. The world feels more tribal than ever before. A virus has us all in a stranglehold, but the grip of political polarisation is even stronger. Many people appear to exist within a vortex of perpetual fury, panic or fear as a result of things we read or see online. And I’m undoubtedly one of them.
But what it’s taken me many years to realise is that, very often, this is a deliberately constructed state of affairs. Because things we see on social media no longer reflect human behaviour and emotion. Instead, they drive and manipulate human behaviour and emotion.
But let’s rewind a bit first. I’m no social media historian, but I can clearly recall the first piece of online content that I saw being widely ‘hateshared’ on Twitter back in 2009. It was a hugely controversial Jan Moir column about the tragic death of Stephen Gately. The piece was rightly condemned, but I distinctly remember feeling uneasy about how many clicks MailOnline would be getting as a result of everyone linking to it in disgust. To try to mitigate this web traffic boost I copied and pasted the article into a public Google doc, and had a minor viral moment as a result.
That article — or others like it — would have been a lightbulb moment for publishers like MailOnline: outrage means business. Because they don’t care where the clicks come from. Clicks mean eyes, and eyes mean advertising revenue. Today, certain media organisations almost certainly commission content on the basis that they know it will rile their reverse target market. And that’s why certain commentators and social media figures have made a career out of spreading hateful views that I’m not even convinced are real.
Controversy → Outrage → Amplification → Relevance → Revenue
And it’s not just the news media who’ve spotted this winning formula. Deliberately awful online recipes exist solely to disgust us, generating clicks and getting eyes on the ads embedded within them. And, of course, certain brands have been making deliberately controversial ads for decades.
However, in recent years, this model has moved up a notch. And because the commodification of outrage has been so lucrative, outrage has gone on to be weaponised by those in power.
Because if spaces for public discourse — which are more or less entirely digital these days — are saturated with indignation, any meaningful discussion or questioning is lost in the din.
And the coronavirus crisis has been a prime example of this. Social media has been awash with ‘Stay Alert’ memes, disbelief about Dominic Cummings’s drive to Barnard Castle and general lockdown messaging confusion (4).
And the question I have about all of this is… are we being deliberately trolled?
Let’s dig a little deeper into one of the examples (1) I linked to at the very top of this post. In 2019, the Conservative Party changed its press office’s Twitter branding to apparently disguise itself as an official “fact-checker” during one of the televised leaders’ debates.
Liberal Twitter exploded with ire — so much so that I imagine most of us stopped listening to what was actually being said in the debate itself. And this was exactly the intended consequence.
In other words:
Controversy → Outrage → Noise → Saturation → Misdirection
That night, those of us foaming at the mouth about the misleading nature of the rebrand (indeed, I spent most of the debate itself reporting the account to Twitter for impersonation) were actually the very people being misled. Misdirected away from the issues and the content of the debate as a result of a carefully constructed digital communications plan — the goal of which had nothing to do with fooling the internet into believing they were actual fact-checkers, and everything to do with angering the Left and exploiting that anger to spread their messages.
And, since COVID-19, this tactic has been used repeatedly.
Because if we’re all too busy debating painful kerning (5), extra limbs and rogue apostrophes (6), we won’t have any time left to properly scrutinise the government’s policies. And, by sharing them — even with eyebrows firmly raised — we’re actually sharing their messages, too.
So, what do examples 1, 2 and 3 (and probably 4, 5 and 6, though I can’t prove this) all have in common? They were created by the same digital and creative agency that was brought on board by the Tories in autumn 2019. They appear to be specifically tasked with spreading digital content designed to ‘unappeal’ to the left. As this article states so clearly, “they’re doing this badly on purpose” — saturating the digital airwaves with pointless debate to distract from real issues, and using ‘shitposting’ to spread their messages further.
And it works an absolute treat.
So what can we actually do to stop this?
The most obvious solution is to leave social media. After all, digital platforms can’t be abused if no one’s there to listen. At the time of writing I’ve not been active on Twitter for a couple of months. I’ve stopped short of permanently deactivating my account, but I’m enjoying my time away and have no plans to return. But is everyone else going to do this? Nah.
So here’s what I propose. We need to start exercising radical refrain when we feel the urge to share content which deliberately intends to rile, outrage, divide and distract. This means we might see such content, and are very likely to feel outrage as a result of it. But by practising radical refrain we understand that the most powerful thing we can do in that moment is to resist our now-instinctive temptation to “hateshare” it on social media.
Instead, we should work harder to pause, take stock and internally interrogate that piece of content’s origins and motivations. And, if we suspect that its intentions are malicious, we should ignore it into irrelevance and move on — crucially, without telling anyone.
We have every right to feel outraged about the world and its many flaws. But we should direct our energy towards exposing the true motivations behind rage-inducing content, rather than meaninglessly sharing the content itself. If we refuse to share, amplify and saturate spaces with manufactured noise, then we refuse to be manipulated by those who have created such content for ill-gain.
The huge irony of this very essay is that the time (months!) I’ve invested in researching it means that I, too, am guilty of misdirecting my efforts. So this piece of writing is me drawing a line. If I see people sharing content which I believe has been designed to be hateshared, I’ll point them to this post. Beyond that, I’ll save my efforts for trying to fix the world before my children notice it’s broken.