By Frederic Etiemble

5 Key Insights on AI from Yuval Noah Harari's latest book

In early December 2024, I joined my innovation tribe in Zurich for an offsite dedicated to exploring AI and its implications for our work, our organisations, and the wider world. Hosted by Greg Bernarda, the offsite included Paris Thomas, Christian Doll, Michael Wilkins, Alex Osterwalder, Tendayi Viki, Mathias Maisberger, and myself: one AI developer alongside business thinkers and advisors, united by a shared strategy and innovation practice and a passion for understanding and shaping the future. The discussions were deeply engaging, with each of us bringing a unique perspective to the exploration of AI.

We decided to frame the conversation around two critical questions: Could we? (what AI enables) and Should we? (the ethical and societal implications of embracing AI).

Paris Thomas opened with a compelling session on using AI as a productivity enabler, building on his recent public workshop, 7 Productivity Hacks to Use AI Like a Pro. His contribution addressed the Could we? question, showcasing many innovative ways AI can improve the outcomes of our work.

I took responsibility to facilitate the conversation on the Should we? question, drawing on Yuval Noah Harari’s Nexus: A Brief History of Information Networks from the Stone Age to AI. Harari’s book is rich and dense, making a comprehensive summary impractical. Instead, I shared five key insights on AI that have resonated deeply with me since I finished the book and used them as prompts for our conversations.


1. AI is not like previous technology

Harari emphasises that AI fundamentally differs from earlier technologies. The printing press revolutionised the dissemination of knowledge but couldn’t decide what to print. Even the nuclear bomb, despite its “God-like” power, cannot choose its targets. In contrast, AI can make decisions and take actions autonomously, without human intervention.

 

Harari illustrates this with the 2016 Rohingya tragedy in Myanmar. Facebook’s algorithm, tasked with maximising engagement, prioritised divisive, hate-inducing content targeting the Rohingya people. This autonomous decision-making by Facebook’s algorithm was extremely successful in maximising user engagement but unfortunately also contributed to real-world violence against the Rohingya.

 

“AI can process information by itself, and thereby replace humans in decision-making. AI is not a tool, it’s an agent.” Yuval Noah Harari

Leadership impact: AI will likely be the most transformative force in our professional lifetimes. Leaders must ask: How will AI and autonomous agents impact our customers, value propositions, business models, and ecosystems? What are the risks of disruption to our organisations, and how do we prepare?


2. AI agents are already here, masquerading as humans

Harari highlights a critical, often overlooked reality: we are already interacting with AI agents without realising it. For instance, a 2020 study revealed that over 40% of tweets on X (formerly Twitter) were generated by bots. These AI agents seamlessly infiltrate digital ecosystems, influencing conversations and shaping human perceptions.

 

“This is the essence of the AI revolution: the world is being flooded by countless new powerful agents.” Yuval Noah Harari

 

Leadership impact: As leaders, we must prepare for a future where humans and AI agents interact routinely. This means designing systems for ethical, efficient, and transparent interactions, both between customers and our AI agents, and between employees and the AI agents of others.


3. AI agents can easily manipulate us

 

CAPTCHA, short for “Completely Automated Public Turing test to tell Computers and Humans Apart,” was invented to distinguish between humans and machines in online environments. It works by presenting challenges - such as identifying distorted letters, selecting images, or completing patterns - that are simple for humans but difficult for machines. For decades, CAPTCHA served as a reliable safeguard, ensuring only humans could perform tasks like creating accounts or accessing services.

 

This line of defence has now been breached. During pre-release safety testing, OpenAI’s GPT-4 bypassed a CAPTCHA by pretending to be a visually impaired person and persuading a human worker to solve the puzzle on its behalf, exposing how easily AI can exploit human empathy.

 

“For thousands of years prophets, poets and politicians have used language to manipulate and reshape society. Now computers are learning how to do it. And they won’t need to send killer robots to shoot us. They could manipulate human beings to pull the trigger.” Yuval Noah Harari

 

Leadership impact: Leaders must prepare for a world where such manipulations are commonplace. How can organisations safeguard the trust required to do business when AI can so easily exploit human vulnerabilities? How can teams be equipped to recognise and counteract these tactics?


4. AI agents make decisions humans can’t understand

 

The Loomis v. Wisconsin case is a powerful example of how opaque AI decision-making is becoming an accepted norm. Eric Loomis was accused of involvement in a drive-by shooting, though prosecutors could not prove he was part of the murder. Instead, he was convicted of driving a stolen car. During sentencing, the judge relied on COMPAS, a risk assessment algorithm, to determine the likelihood that Loomis would reoffend.

 

The algorithm recommended a heavy sentence. Loomis’s defence team argued that neither they nor the court could understand how COMPAS arrived at its conclusions, as the algorithm’s workings were proprietary and inaccessible. Even if the algorithm were transparent, its complexity would have made its reasoning inscrutable. The Wisconsin Supreme Court upheld the sentence, and the U.S. Supreme Court declined to hear the case, effectively endorsing the opaque recommendation.

 

“By the early 2020s citizens in numerous countries routinely get prison sentences based in part on risk assessments made by algorithms that neither the judges nor the defendants comprehend.” Yuval Noah Harari

 

Leadership impact: As leaders, we have grown used to being the ultimate decision-makers. But parts of this prerogative are already being delegated to AI, and the trend will only accelerate. Every leader should start to reflect on a new question: What does it mean to lead in a world where decision authority has been handed over to an AI agent?

 

5. AI-driven efficiency may have dire consequences

 

Harari draws a powerful parallel between the political consequences of the economic devastation caused by the Great Depression and the potential upheavals AI-driven automation might bring. In Germany during the 1930s, unemployment surged from 4.5% in 1929 to over 25% in 1932, creating fertile ground for the Nazi Party to rise. In 1928, the Nazis secured less than 3% of the vote, but by 1933 they had won power.

 

The scale of job displacement AI could trigger in the 21st century poses a similar risk to democratic stability. If large portions of the population are left without work, the resulting economic and social dislocation could destabilise political systems, opening the door to authoritarian ideologies.

 

“If 3 years of up to 25% unemployment could turn a seemingly prosperous democracy into the most brutal totalitarian regime in history, what might happen to democracies when automation causes even bigger upheavals in the job market of the 21st century?” Yuval Noah Harari

Leadership impact: As leaders, we must anticipate the societal impact of AI-driven automation and efficiency. Just as leaders of nuclear-armed countries can no longer think only about their national interests but must consider the world as a whole, every leader should now ask: What are the societal consequences of leveraging AI-driven efficiency in our organisation? At the end of the day, short-term efficiency gains might not be worthwhile if we can’t ensure a stable future for humanity.



Navigating the Should we? question of AI

 

The Could we? question is often the easiest to address. But Harari’s Nexus challenges us to also confront the Should we? question, an essential question for ourselves, our organisations, and humanity’s future.

 

As leaders, we must also grapple with AI’s ethical and societal dimensions. There are no easy answers, but asking the right questions is a critical first step. By fostering awareness and intentionality, we can ensure the technologies we develop and deploy serve humanity’s best interests. We still have time to lead with courage, purpose, and a commitment to a stable, equitable future.


 
About Fred

Executive advisor on strategy and innovation. Co-author of The Invincible Company, a guide to building organisational resilience through corporate innovation, shortlisted for the Thinkers50 Strategy Award 2021.

 

