Nexus – A Brief History of Information Networks from the Stone Age to AI, by Yuval Noah Harari, is for anyone concerned about the risks to democracy from disinformation and totalitarian control.
First there was oral storytelling, then books, then the internet and, with it, algorithms that monetise and weaponise information. But what is information?
Our brains process a lot of inputs, and while we struggle to recall, say, lists or dry facts in a textbook, studies show we’re very good at absorbing stories.
What is a nexus? Harari argues information connects us into networks, which become a social nexus. Information need not faithfully represent reality; it may even misrepresent it.
Either way, information, as story, provides a nexus that connects humans to others. “Human groups are shaped by stories because [their] identities are shaped by stories.”
With the advent of the printing press came the means to disseminate, and therefore control, information. Harari cites Biblical editorial decisions which favoured misogyny over tolerance. In another example, the mad medieval misogynist, Heinrich Kramer, wrote the infamous Malleus Maleficarum, sexualising witchcraft and making it an instant bestseller. This elevated Kramer to high status and resulted in tens of thousands of people being persecuted and killed as ‘witches’.
In the current era, studies confirm that lies spread six times further and faster than truth on social media.
“Outrage drives engagement, while moderation tends not to.” This is why creating outrage is a business model, why conspiracy theories prosper, and why social media algorithms are fuelling the global rise of the Far Right.
“In 2016, an internal Facebook report discovered that ‘64% of all extremist group joins are due to our recommendation tools… [Facebook] grow[s] the problem.’”
Socially destructive and dishonest content wins ‘user engagement’ plus increased ad revenue.
Harari outlines the early history of ‘self-correcting mechanisms’ within power structures, and argues there is an urgent need for more of them now. Science is the most powerful ‘correcting mechanism’ for truth to displace lies, which is why the political Far Right prefer strategies like those used by former Trump advisor, Steve Bannon, who advised “flood the zone with shit…this is not about persuasion, this is about disorientation.”
People crave certainty, and want simple explanations. Religion and conspiracy theories provide easy answers.
In social media, algorithms preference content that gets more clicks, because more clicks earn more cash. While you engage with your screen to Like, Share or Comment, tech companies take some of your data and some ad revenue. If conspiracy posts get the widest reach and returns, who cares? No one, until…
The most-used social media in Myanmar is Facebook, and when the company’s auto-play algorithm promoted violent anti-Rohingya propaganda, Myanmar citizens then raped, killed or drove out three-quarters of a million Rohingya.
Hate-mongers made the original posts, but it was the algorithm – dumb AI – that caused the slaughter.
This is how democracy dies. When conversations are hijacked by AI algorithms designed for commercial gain or political power, citizens are disempowered. In a world ‘flooded with shit’, the confused turn to ‘strong leaders’ and simple stories to explain it all.
Worse, when data mining and AI combine to form Social Credit systems such as that evolving in China, citizens have no outlet for dissent lest their score drop, costing them a loan, a pay rise, or a flight.
Harari also asks “what might happen when the new technology of social credit systems meet traditional religion.” Imagine the totalitarian theocrats of Afghanistan or Texas wielding control of a social credit system. People would have to constantly second-guess their every thought or act, fearing an all-seeing, all-knowing digital God. It’s a perfect regime when citizens police themselves 24/7.
Harari also warns that our finance markets are increasingly run by still-evolving algorithms that we barely understand and that operate outside governmental control. All of this will tend to concentrate power and global finances in fewer and fewer hands.
We’ve seen, at the local scale, what happens when financial decisions are subject to algorithms not overseen by humans – Robodebt. Even bank loans can be automatically refused because an algorithm determined, based on data you have no means of accessing, that you’re 0.03% more likely to default on your loan.
What should we do? Harari suggests banning any form of AI impersonating a human – an AI doctor should never pretend to be human. He also suggests banning algorithms from curating public debates and from spreading disinformation, and requiring companies to be transparent about their algorithms’ business models. If we don’t control corporate AI, AI will control us.
As AI evolves greater power and authority, leaving regulation in the self-interested hands of billionaires and ‘the market’ will see lies privileged over truth. Harari argues we need to be “building institutions with strong self-correcting mechanisms.”
If AI ever develops actual independent intelligence, it’ll likely be, Harari warns, like nothing we can readily comprehend.
On the other hand, perhaps we can ask it for help.
Nexus – A Brief History of Information Networks from the Stone Age to AI, by Yuval Noah Harari 2024, Penguin, RRP $39.99, ISBN: 9781911717089, 404pp
B.P. Marshall is a scriptwriter and author.