In 1996 a maverick called John Perry Barlow published ‘A Declaration of the Independence of Cyberspace’. In it he set out a vision of cyberspace that was depoliticised and egalitarian:
“We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.”
A founder of the Electronic Frontier Foundation, and later a fellow at Harvard University’s Berkman Klein Center for Internet & Society, Barlow was driven by concern about government over-regulation and the prospect of the web being turned into a tool for nefarious ends.
Those were early days for the internet. Most sites were run by universities, scientific institutions and hobbyists. Jeff Bezos had just quit his big-money job at a hedge fund and founded an online bookstore out of his garage. Google didn’t exist, but you could use AltaVista, and service providers such as AOL ran popular closed communities. To access these delights all you needed was a dial-up modem and plenty of patience.
How far we have come in the decades since. Tools such as Sora 2 enable the most novice of users to produce cinema-quality visuals. Startups such as Mantic produce AI forecasts that are well calibrated. Innovators like Artificial Societies provide detailed simulations of public campaigns before they’re unleashed on the world. The tech-heavy Nasdaq currently owes well over 40% of its value to the so-called Magnificent Seven.
When The Economist penned its obituary of Barlow it acknowledged the obvious point that the web never achieved the lofty visions set out in the Declaration. But this rather misses the point. A man as smart as Barlow wasn’t foolish enough to be making a prediction about the future of the internet; rather, he was issuing an early defensive rallying cry for online liberty.
Had the visions laid out in the Declaration come true, we could perhaps look to today’s online communities as a worthy source of authoritative and verifiable information. Unfortunately, nothing could be further from the state of affairs that exists today.
Across the world governments are controlling what their citizens say and do online with remarkable efficiency. There is a sliding scale of control: from state-imposed firewalls, to government controlled ‘everything apps’, to misleading foreign influence campaigns, to implicit political control of internet companies.
Internet users are showing an ever-greater capability and propensity to develop and seed fake news. This is being fuelled by an increasingly interconnected world and access to powerful new tools like the ones described above.
The Covid pandemic saw the worst excesses of fake news, from people being encouraged to try questionable treatments to outlandish claims about the virus’s origins. There are many reasons why the disease was a seminal moment for mankind; one of the less discussed is that it represented the moment when online fakery went mainstream.
Of course, lies and conspiracies have always been spread in the real and virtual world, but when lockdown hit this was supercharged. People stayed home, bunkered down behind their keyboards and in many cases became out of touch with reality. This is understandable: it was an anxious time when many were eager to get to grips with what was going on. But this created a breeding ground for a maelstrom of questionable advice and information.
Generally fake news is created by one of two sorts of people: those who know the information to be untrue, in which case it is disinformation, and those who believe it to be true, in which case it is misinformation.
Spreaders of disinformation are generally the disaffected. “Some people just want to watch the world burn,” said Michael Caine’s Alfred Pennyworth in the Batman film ‘The Dark Knight’.
But they can also be pursuing a more defined objective, such as a geopolitical actor seeking to unsettle a foreign adversary, or an organisation seeking to undermine a competitor.
The objectives of the spreaders of misinformation matter less: they believe, however mistakenly, that they might be doing others a favour.
One study from Israeli and American academics showed that approximately 2,000 registered U.S. voters spread 80% of the fake news during the 2020 election. The effect of sharing this content on social media has been compared to a murmuration of starlings: one bird forces a change in direction, and the rest of the flock follows unwittingly.
American neuroscientist and AI expert Gary Marcus has talked about the potential for AI misinformation to one day trigger nuclear war. That’s not far-fetched.
So, what can we do to mitigate some of the worst excesses of fake news online?
Firstly, there’s the personal responsibility to always check the authenticity of anything you see online. We should reprogram our brains – which are hardwired to trust others after thousands of years of evolution – to approach the online world with total scepticism.
Instead of “could this information interest me?”, or “is this information relevant to the work I am doing?”, the first question needs to be more foundational: “can I trust what I am seeing?” This requires an underappreciated but totally fundamental mental shift.
Secondly, for organisations it’s about acknowledging that we’re operating in a post-truth world where your reputation is carefully built and easily destroyed.
Double down on cross-checking the veracity of your output. Show the public that you embrace content provenance standards such as C2PA. Remain on the cutting edge of the latest tools, such as audio deepfake detection methods. Make sure you are using third-party platforms to understand what your customers are seeing.
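The core idea behind provenance standards like C2PA is that a publisher cryptographically binds a signature to its content, so anyone can later check the content hasn’t been altered. The sketch below is a deliberately simplified illustration of that idea, not the real C2PA format: the symmetric signing key and helper functions are invented for this example, where real systems use asymmetric keys and signed manifests embedded in the media file.

```python
import hashlib
import hmac

# Illustrative only: real provenance systems use asymmetric (public/private) keys.
SECRET_KEY = b"publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Produce a tamper-evident signature over a hash of the content."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Check that the content still matches the signature it shipped with."""
    return hmac.compare_digest(sign_content(content), signature)

article = b"Our official statement on the incident."
sig = sign_content(article)

print(verify_content(article, sig))                 # True: unaltered content verifies
print(verify_content(article + b" [edited]", sig))  # False: tampering is detected
```

Even this toy version shows why provenance helps: a reader no longer has to judge content on its face, but can check whether it still carries a valid signature from the claimed source.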
LLMs are not, at least for the moment, a panacea for fake news. While continually improving, GPT-5’s main model still has a factual hallucination rate of 9.6% when internet browsing is enabled. Even when browsing is disabled the world’s leading chatbot is not immune from hallucination, which shows the underlying difficulty of finding 100% accurate training data. The AI Incident Database includes hundreds of examples of AI-enabled fake news, deepfakes and phishing scams.
ForgeFront recently undertook an analysis of which future technologies could impact a client’s ability to convey accurate information to their customers. This enabled them to get ahead of the game by putting digital watermarks on their output. We put in place an early warning system to help identify when fake news was being spread about them online. Working with their other agencies we constructed a worst-case scenario management plan.
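An early-warning system of the sort described can be sketched very simply: monitor mentions of the organisation and flag any time window where distrust-laden language suddenly spikes. Everything below is a hypothetical simplification — the keyword list, the threshold and the sample mention feed are invented for illustration, and a production system would use trained classifiers rather than keyword matching.

```python
from collections import Counter

# Hypothetical trigger terms; a real system would use trained classifiers.
SUSPECT_TERMS = {"hoax", "cover-up", "exposed", "fake", "scandal"}
SPIKE_THRESHOLD = 3  # flag a window when suspect-term hits exceed this count

def flag_windows(mention_windows):
    """Return indices of time windows whose suspect-term count spikes."""
    flagged = []
    for i, window in enumerate(mention_windows):
        counts = Counter(
            term
            for post in window
            for term in SUSPECT_TERMS
            if term in post.lower()
        )
        if sum(counts.values()) > SPIKE_THRESHOLD:
            flagged.append(i)
    return flagged

windows = [
    ["Great product launch today", "Loved the event"],
    ["This is a hoax", "Total cover-up exposed", "fake results", "scandal brewing"],
]
print(flag_windows(windows))  # → [1]: the second window trips the alarm
```

The value of even a crude monitor like this is speed: it turns fake news from something discovered weeks later into something flagged while a response is still possible.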
But the private sector can only do so much, and it is incumbent on governments around the world to turn the tide. In a report for Washington DC-based New Lines Institute I described some of the steps we believe are needed in the USA, but these have wide applicability to other countries too.
Firstly, apolitical, expert-led and well-funded fact-checking panels should be established by the public sector. These should be tasked not with checking opinions but with ensuring the accuracy of prominent news stories and viral social content. They should be given ample resources and free rein to investigate unencumbered by outside interests.
Secondly, public awareness campaigns should encourage people to do their own research after seeing something online. These campaigns should embed the sceptic’s mindset described above. They should be prioritised by governments in the same way as historic drives to stop smoking, encourage seatbelt use, prevent online scams or attract tourists.
Finally, support should be given to new technology and social media platforms that aim to minimise the spread of fake news. The balance is currently in favour of funding those popular generative tools that create new content, or the social media algorithms that keep users hooked to their feeds.
However, there needs to be a rebalancing to provide commensurate support to the disparate communities of academics, programmers and companies that are working to tackle the problem through defensive technologies that combat fake news.
We can’t know for sure how Barlow would feel about the internet in the post-Covid age, but we might hazard a plausible guess. Not content with focusing his maverick outputs on the future of cyberspace, he was also a prominent lyricist for US band The Grateful Dead.
In ‘Throwing Stones’ he writes of the planet we call home:
“A peaceful place, or so it looks from space; Closer look reveals the human race; Full of hope, full of grace is the human face; But afraid we may lay our home to waste”
We live in a time when the existential risk to humanity from geopolitical rivalry, war and accelerating technology developments feels greater than it has done for generations.
The problem with fake news is that it acts as a compounding influence on all these issues by adding fuel to the fire.
However, unlike those issues, there are obvious and immediate actions that governments, companies and internet users can take today to defend against the threat.