Questioning Tomorrow: a review of Harari’s Homo Deus

Sierra Duffey
7 min read · Feb 5, 2018

One of my personal goals for 2018 is to read at least 30 books by the end of the year. Although there are no set rules about what books I read, I decided to start strong with the thought-provoking Homo Deus by Yuval Noah Harari.

Homo Deus is a true work of art, and it’s one that has contributed to my ever-growing existential crisis for the 21st century. And I’m not alone. So many other great thinkers of our time have been deeply affected by the potential reality Harari throws at us. Others who are involved in the world of technology (as most of us are) are realizing that we need to start asking deeper questions than which cat video is funnier or what song Alexa should play next.

Our world is changing faster than we can process, and eventually, if we don’t have answers to these bigger existential questions, the answers may be made up for us.

Much like his first bestseller, Sapiens, Harari uses his extensive knowledge of history and marries it with other disciplines to create a brilliant reflection on our purpose as human beings. While Sapiens helped us understand how we got to today, Homo Deus asks tough questions about our future.

Black Mirror. Season 1, episode 2. Fifteen Million Merits.

Homo Deus is like Black Mirror* in book form: it's scary not because of some imagined future where robots take over, but because it outlines the potential consequences of technology that amplifies existing human flaws.

* If you really want to worry about our future, go watch Black Mirror, the anthology series merging sci-fi and current-day horrors.

Harari starts his book by noting that we are at a turning point in history. Plague, famine, and war are, broadly speaking, no longer societies' greatest threats. In the 21st century you are more likely to die from old age or suicide, and the gravest threats countries now face extend beyond any single nation (e.g., climate change). In a world where we've finally stabilized our greatest threats, what comes next?

“In the 21st century the individual is more likely to disintegrate gently from within than to be brutally crushed from without.” (p. 402)

The new agenda Harari proposes is no small feat. He argues that with our energy no longer focused on reducing pain, humans will inevitably strive for immortality, bliss, and divinity.

The big questions here, of course, are why we would strive for these things and, even if we did, how we would get there.

Harari doesn’t pretend to answer these questions fully, but he does offer some clues. For one, in the most recent era, humans have always worked on the assumption that we can achieve better, that our experiences are inherently valuable, and that we should continue to advance because we can.

It makes sense, then, that we will continue to innovate, even if we end up creating machines that are smarter than ourselves and put our entire existence into question.

But what happens when we advance our technology to such a level that it surpasses even our greatest abilities as humans? Do we still value our subjective experiences even if they’re proven to be inaccurate and a source of pain?

The chapter “The Great Uncoupling” was a rough tumble into this reality. I am a firm believer in humanism as Harari describes it, and most of us are probably still living under the same assumptions. We like to think we have free will, that we have some control over our lives. In reality, our lives are determined by many factors outside our control, and that will only intensify as our technology learns more about us.

In essence, all of our thoughts and decisions are due to purely biological functions within the brain. If we have technology that can break this complex, but objective, process into small chunks of data, do subjective experiences really matter?

In this chapter, Harari also notes the harsh truth that intelligence and consciousness don't have to go together, even though they have in the past. Our machines are becoming ever more intelligent without needing conscious or emotional thoughts about the world. Robots and algorithms will eventually know us, objectively, far better than we know ourselves, and this may lead us to take their advice on a regular basis.

Smart watches already capture things like heart rate and sleep patterns. Other devices can monitor hormone levels, brainwaves, and more, and begin to decipher our emotions. PHOTO: IAIN MASTERTON/CORBIS

Harari makes an important distinction here: the technology doesn't have to be perfect. Our current system will fall apart as soon as we trust machines more than ourselves.

Where is my free will when I trust a machine to make better decisions than I do? What is my purpose if I can go through life making virtually no mistakes? Consciousness will no longer be necessary to make smart decisions or to bring about joy.

In the third part of the book, there are two main problems that Harari brings attention to:

First, Harari suggests that eventually we will all fall prey to a new sort of religion called Dataism. Under Dataism, we will all recognize that the data we can provide to our machine-connected world is our greatest asset. As humans, we are no longer valuable because we can think for ourselves, but because we can feed more data into the system, making it work more effectively.

In this new religion, then, are human experiences inherently valuable? Harari would argue that our lives may become valuable only insofar as the data we openly share.

Second, Harari warns that technology will continue to widen the gap between rich and poor. In the current era of humanism, we hold the assumption that all human experience has value, even if these experiences differ. Harari says, “in the future, however, we may see real gaps in physical and cognitive abilities opening between an upgraded upper class and the rest of society.” In a new world where the rich may actually be physically superior to the poor, and where data is power, inequalities will only grow in extremely damaging ways.

So in the future, are we all really as doomed as Harari claims? Well, that depends. I personally think that Harari brings up these extreme consequences so that we realize the potential of the technology we create. But if we take time to actually reflect on what we’re building and how we’re building it, we may be able to choose an outcome that serves humankind.

We can build technology in a way that improves human experiences rather than overrides them, but this is something we have to consciously choose.

If we keep going at breakneck speed — improving robots and algorithms to do amazing things without really knowing why — then we will be surprised by whatever outcome emerges. Without consciously choosing what values our technology holds, it is entirely possible that the technology will determine our values for us.

That’s why, in my mind, understanding psychology, design, ethics, and human interaction is more important than ever. We need to purposefully design every product we make. Every time we create something that has the potential to be smarter than we are — we need to question our ethics, our biases and ask whether this product is going to improve the human experience or diminish it.

I don’t know about you, but 50 years from now I still want my subjective experiences to matter.

In the future, will we continue to do activities because we inherently enjoy them, or because one of our devices says it’s the best thing for us?

Even if there are multiple robots and programs that can perform almost all cognitive tasks better than I can, I don’t want to become a being that just spends my life doing as I’m told. I want technology to make my life better rather than cheapen my thoughts and actions into nothing more than quantifiable bits of data. I want consciousness to have a purpose even if it makes our lives a bit messy and unpredictable.

But we can’t just sit here and hope that technology and the people who create it will always value the human experience. We have to actively become the designers, the thinkers, and the voices that influence how technology serves us rather than the other way around. We need to ask questions that are far more complicated than we may be comfortable with, and we need to start asking these questions now.

If you haven’t already, go read Homo Deus. Our future might just depend on it.


Like what you read? Let me know in the comments below which books I should read and review next!

