Homo Deus: A Brief History of Tomorrow, Part 9

Evolution discovered this trick aeons before the paediatricians. Given the unbearable torments women undergo at childbirth, you might think that after going through it once, no sane woman would ever agree to do it again. However, at the end of labour and in the following days the hormonal system secretes cortisol and beta-endorphins, which reduce the pain and create a feeling of relief and sometimes even of elation. Moreover, the growing love towards the baby, and the acclaim from friends, family members, religious dogmas and nationalist propaganda, conspire to turn childbirth from a terrible trauma into a positive memory.

An iconic image of the Virgin Mary holding baby Jesus. In most cultures, childbirth is narrated as a wonderful experience rather than as a trauma.

Virgin and Child, Sassoferrato, Il (Giovanni Battista Salvi) (1609–85), Musée Bonnat, Bayonne, France / Bridgeman Images.

One study conducted at the Rabin Medical Center in Tel Aviv showed that the memory of labour reflected mainly the peak and end points, while the overall duration had almost no impact at all.16 In another research project, 2,428 Swedish women were asked to recount their memories of labour two months after giving birth. Ninety per cent reported that the experience was either positive or very positive. They didn't necessarily forget the pain – 28.5 per cent described it as the worst pain imaginable – yet it did not prevent them from evaluating the experience as positive. The narrating self goes over our experiences with a sharp pair of scissors and a thick black marker. It censors at least some moments of horror, and files in the archive a story with a happy ending.17

Most of our critical life choices – of partners, careers, residences and holidays – are taken by our narrating self. Suppose you can choose between two potential holidays. You can go to Jamestown, Virginia, and visit the historic colonial town where the first English settlement on mainland North America was founded in 1607. Alternatively, you can realise your number one dream vacation, whether it is trekking in Alaska, sunbathing in Florida or having an unbridled bacchanalia of sex, drugs and gambling in Las Vegas. But there is a caveat: if you choose your dream vacation, then just before you board the plane home, you must take a pill which will wipe out all your memories of that vacation. What happened in Vegas will forever remain in Vegas. Which holiday would you choose? Most people would opt for colonial Jamestown, because most people give their credit card to the narrating self, which cares only about stories and has zero interest in even the most mind-blowing experiences if it cannot remember them.
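This pattern – remembered evaluations tracking the most intense moment and the final moment while largely ignoring duration – is often summarised as the peak-end rule. The toy calculation below (all numbers invented; this is not the model used in either study) shows how a short ordeal and a much longer one can leave the same memory behind.

```python
# Toy illustration of the 'peak-end' heuristic: remembered pain tracks the
# worst moment and the final moment, while total duration barely matters.
# The hourly pain scores below are invented for illustration only.

def peak_end_memory(pain_by_hour):
    """Approximate remembered intensity as the mean of peak and end pain."""
    return (max(pain_by_hour) + pain_by_hour[-1]) / 2

short_labour = [3, 7, 9, 2]              # four hours, ends on a mild note
long_labour  = [3, 5, 6, 7, 8, 9, 9, 2]  # twice as long, same peak and ending

print(peak_end_memory(short_labour))  # 5.5
print(peak_end_memory(long_labour))   # 5.5 -- the extra hours are ignored
```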

Truth be told, the experiencing self and the narrating self are not completely separate entities but are closely intertwined. The narrating self uses our experiences as important (but not exclusive) raw materials for its stories. These stories, in turn, shape what the experiencing self actually feels. We experience hunger differently when we fast on Ramadan, when we fast in preparation for a medical examination, and when we don't eat because we have no money. The different meanings ascribed to our hunger by the narrating self create very different actual experiences.

Furthermore, the experiencing self is often strong enough to sabotage the best-laid plans of the narrating self. For example, I can make a New Year resolution to start a diet and go to the gym every day. Such grand decisions are the monopoly of the narrating self. But the following week when it's gym time, the experiencing self takes over. I don't feel like going to the gym, and instead I order pizza, sit on the sofa and turn on the TV.

Nevertheless, most people identify with their narrating self. When they say 'I', they mean the story in their head, not the stream of experiences they undergo. We identify with the inner system that takes the crazy chaos of life and spins out of it seemingly logical and consistent yarns. It doesn't matter that the plot is full of lies and lacunas, and that it is rewritten again and again, so that today's story flatly contradicts yesterday's; the important thing is that we always retain the feeling that we have a single unchanging identity from birth to death (and perhaps even beyond the grave). This gives rise to the questionable liberal belief that I am an individual, and that I possess a consistent and clear inner voice, which provides meaning for the entire universe.18

The Meaning of Life

The narrating self is the star of Jorge Luis Borges's story 'A Problem'.19 The story deals with Don Quixote, the eponymous hero of Miguel Cervantes's famous novel. Don Quixote creates for himself an imaginary world in which he is a legendary champion going forth to fight giants and save Lady Dulcinea del Toboso. In reality, Don Quixote is Alonso Quixano, an elderly country gentleman; the noble Dulcinea is an uncouth farm girl from a nearby village; and the giants are windmills. What would happen, wonders Borges, if out of his belief in these fantasies, Don Quixote attacks and kills a real person? Borges asks a fundamental question about the human condition: what happens when the yarns spun by our narrating self cause great harm to ourselves or those around us? There are three main possibilities, says Borges.

One option is that nothing much happens. Don Quixote will not be bothered at all by killing a real man. His delusions are so overpowering that he could not tell the difference between this incident and his imaginary duel with the windmill giants. Another option is that once he takes a real life, Don Quixote will be so horrified that he will be shaken out of his delusions. This is akin to a young recruit who goes to war believing that it is good to die for one's country, only to be completely disillusioned by the realities of warfare.

And there is a third option, much more complex and profound. As long as he fought imaginary giants, Don Quixote was just play-acting, but once he actually kills somebody, he will cling to his fantasies for all he is worth, because they are the only thing giving meaning to his terrible crime. Paradoxically, the more sacrifices we make for an imaginary story, the stronger the story becomes, because we desperately want to give meaning to these sacrifices and to the suffering we have caused.

In politics this is known as the 'Our Boys Didn't Die in Vain' syndrome. In 1915 Italy entered the First World War on the side of the Entente powers. Italy's declared aim was to 'liberate' Trento and Trieste – two 'Italian' territories that the Austro-Hungarian Empire held 'unjustly'. Italian politicians gave fiery speeches in parliament, vowing historical redress and promising a return to the glories of ancient Rome. Hundreds of thousands of Italian recruits went to the front shouting, 'For Trento and Trieste!' They thought it would be a walkover.

It was anything but. The Austro-Hungarian army held a strong defensive line along the Isonzo River. The Italians hurled themselves against the line in eleven gory battles, gaining a few kilometres at most, and never securing a breakthrough. In the first battle they lost 15,000 men. In the second battle they lost 40,000 men. In the third battle they lost 60,000. So it continued for more than two dreadful years until the eleventh engagement, when the Austrians finally counter-attacked, and in the Battle of Caporetto soundly defeated the Italians and pushed them back almost to the gates of Venice. The glorious adventure became a bloodbath. By the end of the war, almost 700,000 Italian soldiers were killed, and more than a million were wounded.20

After losing the first Isonzo battle, Italian politicians had two choices. They could admit their mistake and sign a peace treaty. Austria-Hungary had no claims against Italy, and would have been delighted to sign a peace treaty because it was busy fighting for survival against the much stronger Russians. Yet how could the politicians go to the parents, wives and children of 15,000 dead Italian soldiers, and tell them: 'Sorry, there has been a mistake. We hope you don't take it too hard, but your Giovanni died in vain, and so did your Marco.' Alternatively they could say: 'Giovanni and Marco were heroes! They died so that Trieste would be Italian, and we will make sure they didn't die in vain. We will go on fighting until victory is ours!' Not surprisingly, the politicians preferred the second option. So they fought a second battle, and lost another 40,000 men. The politicians again decided it would be best to keep on fighting, because 'our boys didn't die in vain'.

A few of the victims of the Isonzo battles. Was their sacrifice in vain?

Bettmann/Corbis.

Yet you cannot blame only the politicians. The masses also kept supporting the war. And when after the war Italy did not get all the territories it demanded, Italian democracy placed at its head Benito Mussolini and his fascists, who promised they would gain for Italy a proper compensation for all the sacrifices it had made. While it's hard for a politician to tell parents that their son died for no good reason, it is far more difficult for parents to say this to themselves – and it is even harder for the victims. A crippled soldier who lost his legs would rather tell himself, 'I sacrificed myself for the glory of the eternal Italian nation!' than 'I lost my legs because I was stupid enough to believe self-serving politicians.' It is much easier to live with the fantasy, because the fantasy gives meaning to the suffering.

Priests discovered this principle thousands of years ago. It underlies numerous religious ceremonies and commandments. If you want to make people believe in imaginary entities such as gods and nations, you should make them sacrifice something valuable. The more painful the sacrifice, the more convinced people are of the existence of the imaginary recipient. A poor peasant sacrificing a priceless bull to Jupiter will become convinced that Jupiter really exists, otherwise how can he excuse his stupidity? The peasant will sacrifice another bull, and another, and another, just so he won't have to admit that all the previous bulls were wasted. For exactly the same reason, if I have sacrificed a child to the glory of the Italian nation, or my legs to the communist revolution, it's enough to turn me into a zealous Italian nationalist or an enthusiastic communist. For if Italian national myths or communist propaganda are a lie, then I will be forced to admit that my child's death or my own paralysis have been completely pointless. Few people have the stomach to admit such a thing.

The same logic is at work in the economic sphere too. In 1999 the government of Scotland decided to erect a new parliament building. According to the original plan, the construction was supposed to take two years and cost £40 million. In fact, it took five years and cost £400 million. Every time the contractors encountered unexpected difficulties and expenses, they went to the Scottish government and asked for more time and money. Every time this happened, the government told itself: 'Well, we've already sunk £40 million into this and we'll be completely discredited if we stop now and end up with a half-built skeleton. Let's authorise another £40 million.' Six months later the same thing happened, by which time the pressure to avoid ending up with an unfinished building was even greater; and six months after that the story repeated itself, and so on until the actual cost was ten times the original estimate.
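Reduced to bare arithmetic, the trap looks like this: at each decision point the forward-looking question is whether finishing is worth one more increment, while the narrating self keeps weighing everything already spent. A minimal sketch, using the £40 million increments above but with otherwise invented details:

```python
# Toy model of the 'our money didn't die in vain' escalation.
# Only the initial 40m estimate and the 400m final cost come from the text.

budget_step = 40      # millions of pounds per extra authorisation
total_spent = 40      # the original estimate, already committed
final_cost = 400      # what the building actually ended up costing

round_number = 0
while total_spent < final_cost:
    round_number += 1
    # Rational question: is finishing worth another 40m from this point on?
    # Narrating-self question: can we admit that total_spent was wasted?
    print(f"Round {round_number}: {total_spent}m already sunk, authorising another {budget_step}m")
    total_spent += budget_step

print(f"Completed after {round_number} extra authorisations, at {total_spent}m "
      f"-- ten times the original estimate")
```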

Not only governments fall into this trap. Business corporations often sink millions into failed enterprises, while private individuals cling to dysfunctional marriages and dead-end jobs. For the narrating self would much prefer to go on suffering in the future, just so it won't have to admit that our past suffering was devoid of all meaning. Eventually, if we want to come clean about past mistakes, our narrating self must invent some twist in the plot that will infuse these mistakes with meaning. For example, a pacifist war veteran may tell himself, 'Yes, I've lost my legs because of a mistake. But thanks to this mistake, I understand that war is hell, and from now onwards I will dedicate my life to fight for peace. So my injury did have some positive meaning: it taught me to value peace.'

The Scottish Parliament building. Our sterling did not die in vain.

Jeremy Sutton-Hibbert/Getty Images.

We see, then, that the self too is an imaginary story, just like nations, gods and money. Each of us has a sophisticated system that throws away most of our experiences, keeps only a few choice samples, mixes them up with bits from movies we saw, novels we read, speeches we heard, and from our own daydreams, and weaves out of all that jumble a seemingly coherent story about who I am, where I came from and where I am going. This story tells me what to love, whom to hate and what to do with myself. This story may even cause me to sacrifice my life, if that's what the plot requires. We all have our genre. Some people live a tragedy, others inhabit a never-ending religious drama, some approach life as if it were an action film, and not a few act as if in a comedy. But in the end, they are all just stories.

What, then, is the meaning of life? Liberalism maintains that we shouldn't expect an external entity to provide us with some ready-made meaning. Rather, each individual voter, customer and viewer ought to use his or her free will in order to create meaning not just for his or her life, but for the entire universe.

The life sciences undermine liberalism, arguing that the free individual is just a fictional tale concocted by an assembly of biochemical algorithms. Every moment, the biochemical mechanisms of the brain create a flash of experience, which immediately disappears. Then more flashes appear and fade, appear and fade, in quick succession. These momentary experiences do not add up to any enduring essence. The narrating self tries to impose order on this chaos by spinning a never-ending story, in which every such experience has its place, and hence every experience has some lasting meaning. But, as convincing and tempting as it may be, this story is a fiction. Medieval crusaders believed that God and heaven provided their lives with meaning. Modern liberals believe that individual free choices provide life with meaning. They are all equally delusional.

Doubts about the existence of free will and individuals are nothing new, of course. Thinkers in India, China and Greece argued that 'the individual self is an illusion' more than 2,000 years ago. Yet such doubts don't really change history unless they have a practical impact on economics, politics and day-to-day life. Humans are masters of cognitive dissonance, and we allow ourselves to believe one thing in the laboratory and an altogether different thing in the courthouse or in parliament. Just as Christianity didn't disappear the day Darwin published On the Origin of Species, so liberalism won't vanish just because scientists have reached the conclusion that there are no free individuals.

Indeed, even Richard Dawkins, Steven Pinker and the other champions of the new scientific world view refuse to abandon liberalism. After dedicating hundreds of erudite pages to deconstructing the self and the freedom of will, they perform breathtaking intellectual somersaults that miraculously land them back in the eighteenth century, as if all the amazing discoveries of evolutionary biology and brain science have absolutely no bearing on the ethical and political ideas of Locke, Rousseau and Thomas Jefferson.

However, once the heretical scientific insights are translated into everyday technology, routine activities and economic structures, it will become increasingly difficult to sustain this double-game, and we – or our heirs – will probably require a brand-new package of religious beliefs and political institutions. At the beginning of the third millennium, liberalism is threatened not by the philosophical idea that 'there are no free individuals' but rather by concrete technologies. We are about to face a flood of extremely useful devices, tools and structures that make no allowance for the free will of individual humans. Can democracy, the free market and human rights survive this flood?

9.

The Great Decoupling

The preceding pages took us on a brief tour of recent scientific discoveries that undermine the liberal philosophy. It's time to examine the practical implications of these scientific discoveries. Liberals uphold free markets and democratic elections because they believe that every human is a uniquely valuable individual, whose free choices are the ultimate source of authority. In the twenty-first century three practical developments might make this belief obsolete:

1. Humans will lose their economic and military usefulness, hence the economic and political system will stop attaching much value to them.

2. The system will still find value in humans collectively, but not in unique individuals.

3. The system will still find value in some unique individuals, but these will be a new elite of upgraded superhumans rather than the mass of the population.

Let's examine all three threats in detail. The first – that technological developments will make humans economically and militarily useless – will not prove that liberalism is wrong on a philosophical level, but in practice it is hard to see how democracy, free markets and other liberal institutions can survive such a blow. After all, liberalism did not become the dominant ideology simply because its philosophical arguments were the most accurate. Rather, liberalism succeeded because there was much political, economic and military sense in ascribing value to every human being. On the mass battlefields of modern industrial wars, and in the mass production lines of modern industrial economies, every human counted. There was value to every pair of hands that could hold a rifle or pull a lever.

In 1793 the royal houses of Europe sent their armies to strangle the French Revolution in its cradle. The firebrands in Paris reacted by proclaiming the levée en masse and unleashing the first total war. On 23 August, the National Convention decreed that 'From this moment until such time as its enemies shall have been driven from the soil of the Republic, all Frenchmen are in permanent requisition for the services of the armies. The young men shall fight; the married men shall forge arms and transport provisions; the women shall make tents and clothes and shall serve in the hospitals; the children shall turn old lint into linen; and the old men shall betake themselves to the public squares in order to arouse the courage of the warriors and preach hatred of kings and the unity of the Republic.'1

This decree sheds interesting light on the French Revolution's most famous document – the Declaration of the Rights of Man and of the Citizen – which recognised that all citizens have equal value and equal political rights. Is it a coincidence that universal rights were proclaimed at the same historical juncture that universal conscription was decreed? Though scholars may quibble about the exact relations between the two, in the following two centuries a common argument in defence of democracy explained that giving people political rights is good, because the soldiers and workers of democratic countries perform better than those of dictatorships. Allegedly, granting people political rights increases their motivation and their initiative, which is useful both on the battlefield and in the factory.

Thus Charles W. Eliot, president of Harvard from 1869 to 1909, wrote on 5 August 1917 in the New York Times that 'democratic armies fight better than armies aristocratically organised and autocratically governed' and that 'the armies of nations in which the mass of the people determine legislation, elect their public servants, and settle questions of peace and war, fight better than the armies of an autocrat who rules by right of birth and by commission from the Almighty'.2

A similar rationale stood behind the enfranchisement of women in the wake of the First World War. Realising the vital role of women in total industrial wars, countries saw the need to give them political rights in peacetime. Thus in 1918 President Woodrow Wilson became a supporter of women's suffrage, explaining to the US Senate that the First World War 'could not have been fought, either by the other nations engaged or by America, if it had not been for the services of women – services rendered in every sphere – not only in the fields of effort in which we have been accustomed to see them work, but wherever men have worked and upon the very skirts and edges of the battle itself. We shall not only be distrusted but shall deserve to be distrusted if we do not enfranchise them with the fullest possible enfranchisement.'3

However, in the twenty-first century the majority of both men and women might lose their military and economic value. Gone is the mass conscription of the two world wars. The most advanced armies of the twenty-first century rely far more on cutting-edge technology. Instead of limitless cannon fodder, you now need only small numbers of highly trained soldiers, even smaller numbers of special forces super-warriors and a handful of experts who know how to produce and use sophisticated technology. Hi-tech forces 'manned' by pilotless drones and cyber-worms are replacing the mass armies of the twentieth century, and generals delegate more and more critical decisions to algorithms.

Aside from their unpredictability and their susceptibility to fear, hunger and fatigue, flesh-and-blood soldiers think and move on an increasingly irrelevant timescale. From the days of Nebuchadnezzar to those of Saddam Hussein, despite myriad technological improvements, war was waged on an organic timetable. Discussions lasted for hours, battles took days, and wars dragged on for years. Cyber-wars, however, may last just a few minutes. When a lieutenant on shift at cyber-command notices something odd is going on, she picks up the phone to call her superior, who immediately alerts the White House. Alas, by the time the president reaches for the red handset, the war has already been lost. Within seconds, a sufficiently sophisticated cyber strike might shut down the US power grid, wreck US flight control centres, cause numerous industrial accidents in nuclear plants and chemical installations, disrupt the police, army and intelligence communication networks and wipe out financial records so that trillions of dollars simply vanish without trace and nobody knows who owns what. The only thing curbing public hysteria is that with the Internet, television and radio down, people will not be aware of the full magnitude of the disaster.

On a smaller scale, suppose two drones fight each other in the air. One drone cannot fire a shot without first receiving the go-ahead from a human operator in some bunker. The other drone is fully autonomous. Which do you think will prevail? If in 2093 the decrepit European Union sends its drones and cyborgs to snuff out a new French Revolution, the Paris Commune might press into service every available hacker, computer and smartphone, but it will have little use for most humans, except perhaps as human shields. It is telling that already today in many asymmetrical conflicts the majority of citizens are reduced to serving as human shields for advanced armaments.

Left: Soldiers in action at the Battle of the Somme, 1916. Right: A pilotless drone.

Left: Fototeca Gilardi/Getty Images. Right: alxpin/Getty Images.

Even if you care more about justice than victory, you should probably opt to replace your soldiers and pilots with autonomous robots and drones. Human soldiers murder, rape and pillage, and even when they try to behave themselves, they all too often kill civilians by mistake. Computers programmed with ethical algorithms could far more easily conform to the latest rulings of the international criminal court.

In the economic sphere too, the ability to hold a hammer or press a button is becoming less valuable than before. In the past, there were many things only humans could do. But now robots and computers are catching up, and may soon outperform humans in most tasks. True, computers function very differently from humans, and it seems unlikely that computers will become humanlike any time soon. In particular, it doesn't seem that computers are about to gain consciousness, and to start experiencing emotions and sensations. Over the last decades there has been an immense advance in computer intelligence, but there has been exactly zero advance in computer consciousness. As far as we know, computers in 2016 are no more conscious than their prototypes in the 1950s. However, we are on the brink of a momentous revolution. Humans are in danger of losing their value, because intelligence is decoupling from consciousness.

Until today, high intelligence always went hand in hand with a developed consciousness. Only conscious beings could perform tasks that required a lot of intelligence, such as playing chess, driving cars, diagnosing diseases or identifying terrorists. However, we are now developing new types of non-conscious intelligence that can perform such tasks far better than humans. For all these tasks are based on pattern recognition, and non-conscious algorithms may soon excel human consciousness in recognising patterns. This raises a novel question: which of the two is really important, intelligence or consciousness? As long as they went hand in hand, debating their relative value was just a pastime for philosophers. But in the twenty-first century, this is becoming an urgent political and economic issue. And it is sobering to realise that, at least for armies and corporations, the answer is straightforward: intelligence is mandatory but consciousness is optional.

Armies and corporations cannot function without intelligent agents, but they don't need consciousness and subjective experiences. The conscious experiences of a flesh-and-blood taxi driver are infinitely richer than those of a self-driving car, which feels absolutely nothing. The taxi driver can enjoy music while navigating the busy streets of Seoul. His mind may expand in awe as he looks up at the stars and contemplates the mysteries of the universe. His eyes may fill with tears of joy when he sees his baby girl taking her very first step. But the system doesn't need all that from a taxi driver. All it really wants is to bring passengers from point A to point B as quickly, safely and cheaply as possible. And the autonomous car will soon be able to do that far better than a human driver, even though it cannot enjoy music or be awestruck by the magic of existence.

Indeed, if we forbid humans to drive taxis and cars altogether, and give computer algorithms monopoly over traffic, we can then connect all vehicles to a single network, and thereby make car accidents virtually impossible. In August 2015, one of Google's experimental self-driving cars had an accident. As it approached a crossing and detected pedestrians wishing to cross, it applied its brakes. A moment later it was hit from behind by a sedan whose careless human driver was perhaps contemplating the mysteries of the universe instead of watching the road. This could not have happened if both vehicles were steered by interlinked computers. The controlling algorithm would have known the position and intentions of every vehicle on the road, and would not have allowed two of its marionettes to collide. Such a system will save lots of time, money and human lives – but it will also do away with the human experience of driving a car and with tens of millions of human jobs.4

Some economists predict that sooner or later, unenhanced humans will be completely useless. While robots and 3D printers replace workers in manual jobs such as manufacturing shirts, highly intelligent algorithms will do the same to white-collar occupations. Bank clerks and travel agents, who a short time ago were completely secure from automation, have become endangered species. How many travel agents do we need when we can use our smartphones to buy plane tickets from an algorithm?

Stock-exchange traders are also in danger. Most trade today is already being managed by computer algorithms, which can process in a second more data than a human can in a year, and that can react to the data much faster than a human can blink. On 23 April 2013, Syrian hackers broke into Associated Press's official Twitter account. At 13:07 they tweeted that the White House had been attacked and President Obama was hurt. Trade algorithms that constantly monitor newsfeeds reacted in no time, and began selling stocks like mad. The Dow Jones went into free fall, and within sixty seconds lost 150 points, equivalent to a loss of $136 billion! At 13:10 Associated Press clarified that the tweet was a hoax. The algorithms reversed gear, and by 13:13 the Dow Jones had recuperated almost all the losses.
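The mechanics are easy to sketch: a newsfeed-monitoring strategy needs little more than a keyword filter wired to a sell order. The snippet below is a deliberately crude toy – the keywords, threshold and decision rule are invented, and no real trading system is this simple – but it shows why a hoax tweet can move markets faster than any human can read it.

```python
# Crude toy of a newsfeed-driven trading rule, of the kind implicated in the
# 2013 AP-tweet sell-off. Keywords, threshold and decision rule are invented.

PANIC_WORDS = {"attacked", "explosion", "hurt", "war", "default"}

def panic_score(headline: str) -> int:
    """Count how many panic keywords appear in a headline."""
    text = headline.lower()
    return sum(1 for word in PANIC_WORDS if word in text)

def react(headline: str) -> str:
    # Two or more panic words: dump positions within milliseconds.
    return "SELL" if panic_score(headline) >= 2 else "HOLD"

print(react("White House attacked, President Obama hurt"))  # -> SELL
print(react("White House hurt by budget delays"))            # -> HOLD
# The algorithm cannot tell a hoax from a catastrophe; it only matches patterns.
```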

Three years previously, on 6 May 2010, the New York stock exchange underwent an even sharper shock. Within five minutes – from 14:42 to 14:47 – the Dow Jones dropped by 1,000 points, wiping out $1 trillion. It then bounced back, returning to its pre-crash level in a little over three minutes. That's what happens when super-fast computer programs are in charge of our money. Experts have been trying ever since to understand what happened in this so-called 'Flash Crash'. We know algorithms were to blame, but we are still not sure exactly what went wrong. Some traders in the USA have already filed lawsuits against algorithmic trading, arguing that it unfairly discriminates against human beings, who simply cannot react fast enough to compete. Quibbling whether this really constitutes a violation of rights might provide lots of work and lots of fees for lawyers.5

And these lawyers won't necessarily be human. Movies and TV series give the impression that lawyers spend their days in court shouting 'Objection!' and making impassioned speeches. Yet most run-of-the-mill lawyers spend their time going over endless files, looking for precedents, loopholes and tiny pieces of potentially relevant evidence. Some are busy trying to figure out what happened on the night John Doe got killed, or formulating a gargantuan business contract that will protect their client against every conceivable eventuality. What will be the fate of all these lawyers once sophisticated search algorithms can locate more precedents in a day than a human can in a lifetime, and once brain scans can reveal lies and deceptions at the press of a button? Even highly experienced lawyers and detectives cannot easily spot deceptions merely by observing people's facial expressions and tone of voice. However, lying involves different brain areas to those used when we tell the truth. We're not there yet, but it is conceivable that in the not too distant future fMRI scanners could function as almost infallible truth machines. Where will that leave millions of lawyers, judges, cops and detectives? They might need to go back to school and learn a new profession.6

When they get in the classroom, however, they may well discover that the algorithms have got there first. Companies such as Mindojo are developing interactive algorithms that not only teach me maths, physics and history, but also simultaneously study me and get to know exactly who I am. Digital teachers will closely monitor every answer I give, and how long it took me to give it. Over time, they will discern my unique weaknesses as well as my strengths. They will identify what gets me excited, and what makes my eyelids droop. They could teach me thermodynamics or geometry in a way that suits my personality type, even if that particular way doesn't suit 99 per cent of the other pupils. And these digital teachers will never lose their patience, never shout at me, and never go on strike. It is unclear, however, why on earth I would need to know thermodynamics or geometry in a world containing such intelligent computer programs.7

Even doctors are fair game for the algorithms. The first and foremost task of most doctors is to diagnose diseases correctly, and then suggest the best available treatment. If I arrive at the clinic complaining about fever and diarrhoea, I might be suffering from food poisoning. Then again, the same symptoms might result from a stomach virus, cholera, dysentery, malaria, cancer or some unknown new disease. My doctor has only five minutes to make a correct diagnosis, because this is what my health insurance pays for. This allows for no more than a few questions and perhaps a quick medical examination. The doctor then cross-references this meagre information with my medical history, and with the vast world of human maladies. Alas, not even the most diligent doctor can remember all my previous ailments and check-ups. Similarly, no doctor can be familiar with every illness and drug, or read every new article published in every medical journal. To top it all, the doctor is sometimes tired or hungry or perhaps even sick, which affects her judgement. No wonder that doctors often err in their diagnoses, or recommend a less-than-optimal treatment.

Now consider IBM's famous Watson – an artificial intelligence system that won the Jeopardy! television game show in 2011, beating human former champions. Watson is currently being groomed to do more serious work, particularly in diagnosing diseases. An AI such as Watson has enormous potential advantages over human doctors. Firstly, an AI can hold in its databanks information about every known illness and medicine in history. It can then update these databanks every day, not only with the findings of new research, but also with medical statistics gathered from every clinic and hospital in the world.

IBM's Watson defeating its two human opponents in Jeopardy! in 2011.

Sony Pictures Television.

Secondly, Watson can be intimately familiar not only with my entire genome and my day-to-day medical history, but also with the genomes and medical histories of my parents, siblings, cousins, neighbours and friends. Watson will know instantly whether I visited a tropical country recently, whether I have recurring stomach infections, whether there have been cases of intestinal cancer in my family or whether people all over town are complaining this morning about diarrhoea.

Thirdly, Watson will never be tired, hungry or sick, and will have all the time in the world for me. I could sit comfortably on my sofa at home and answer hundreds of questions, telling Watson exactly how I feel. This is good news for most patients (except perhaps hypochondriacs). But if you enter medical school today in the expectation of still being a family doctor in twenty years, maybe you should think again. With such a Watson around, there is not much need for Sherlocks.

This threat hovers over the heads not only of general practitioners, but also of experts. Indeed, it might prove easier to replace doctors specialising in a relatively narrow field such as cancer diagnosis. For example, in a recent experiment a computer algorithm diagnosed correctly 90 per cent of lung cancer cases presented to it, while human doctors had a success rate of only 50 per cent.8 In fact, the future is already here. CT scans and mammography tests are routinely checked by specialised algorithms, which provide doctors with a second opinion, and sometimes detect tumours that the doctors missed.9

A host of tough technical problems still prevent Watson and its ilk from replacing most doctors tomorrow morning. Yet these technical problems – however difficult – need only be solved once. The training of a human doctor is a complicated and expensive process that lasts years. When the process is complete, after ten years of studies and internships, all you get is one doctor. If you want two doctors, you have to repeat the entire process from scratch. In contrast, if and when you solve the technical problems hampering Watson, you will get not one, but an infinite number of doctors, available 24/7 in every corner of the world. So even if it costs $100 billion to make it work, in the long run it would be much cheaper than training human doctors.
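Stripped of its scale, the diagnostic cross-referencing described above is probabilistic pattern matching: how likely is each disease, given the symptoms, the patient's history and population statistics? A toy naive-Bayes-style scorer makes the shape of the computation visible – every disease, symptom and probability below is invented, and systems like Watson are vastly more elaborate.

```python
# Toy symptom-to-disease scorer. All diseases, symptoms and probabilities are
# invented for illustration; real diagnostic systems are far more sophisticated.
import math

# P(symptom | disease) for a handful of made-up conditions.
LIKELIHOODS = {
    "food poisoning": {"fever": 0.4, "diarrhoea": 0.8, "rash": 0.05},
    "stomach virus":  {"fever": 0.7, "diarrhoea": 0.7, "rash": 0.05},
    "malaria":        {"fever": 0.9, "diarrhoea": 0.3, "rash": 0.10},
}
PRIORS = {"food poisoning": 0.5, "stomach virus": 0.4, "malaria": 0.1}

def rank_diagnoses(observed_symptoms):
    """Return diseases ranked by log-posterior score (naive Bayes)."""
    scores = {}
    for disease, prior in PRIORS.items():
        score = math.log(prior)
        for symptom in observed_symptoms:
            # Unlisted symptoms get a small default likelihood.
            score += math.log(LIKELIHOODS[disease].get(symptom, 0.01))
        scores[disease] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_diagnoses({"fever", "diarrhoea"}))
# With these made-up numbers, 'stomach virus' ranks highest. A real system would
# simply add more factors: travel history, family genome, local outbreak data.
```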

And what's true of doctors is doubly true of pharmacists. In 2011 a pharmacy opened in San Francisco manned by a single robot. When a human comes to the pharmacy, within seconds the robot receives all of the customer's prescriptions, as well as detailed information about other medicines taken by them, and their suspected allergies. The robot makes sure the new prescriptions don't combine adversely with any other medicine or allergy, and then provides the customer with the required drug. In its first year of operation the robotic pharmacist provided 2 million prescriptions, without making a single mistake. On average, flesh-and-blood pharmacists get wrong 1.7 per cent of prescriptions. In the United States alone this amounts to more than 50 million prescription errors every year!10

Some people argue that even if an algorithm could outperform doctors and pharmacists in the technical aspects of their professions, it could never replace their human touch. If your CT indicates you have cancer, would you like to receive the news from a caring and empathetic human doctor, or from a machine? Well, how about receiving the news from a caring and empathetic machine that tailors its words to your personality type? Remember that organisms are algorithms, and Watson could detect your emotional state with the same accuracy that it detects your tumours.

This idea has already been implemented by some customer-services departments, such as those pioneered by the Chicago-based Mattersight Corporation. Mattersight publishes its wares with the following advert: 'Have you ever spoken with someone and felt as though you just clicked? The magical feeling you get is the result of a personality connection. Mattersight creates that feeling every day, in call centers around the world.'11

When you call customer services with a request or complaint, it usually takes a few seconds to route your call to a representative. In Mattersight systems, your call is routed by a clever algorithm. You first state the reason for your call. The algorithm listens to your request, analyses the words you have chosen and your tone of voice, and deduces not only your present emotional state but also your personality type – whether you are introverted, extroverted, rebellious or dependent. Based on this information, the algorithm links you to the representative that best matches your mood and personality. The algorithm knows whether you need an empathetic person to patiently listen to your complaints, or you prefer a no-nonsense rational type who will give you the quickest technical solution. A good match means both happier customers and less time and money wasted by the customer-services department.12

The most important question in twenty-first-century economics may well be what to do with all the superfluous people. What will conscious humans do, once we have highly intelligent non-conscious algorithms that can do almost everything better?

Throughout history the job market was divided into three main sectors: agriculture, industry and services. Until about 1800, the vast majority of people worked in agriculture, and only a small minority worked in industry and services. During the Industrial Revolution people in developed countries left the fields and herds. Most began working in industry, but growing numbers also took up jobs in the services sector. In recent decades developed countries underwent another revolution, as industrial jobs vanished, whereas the services sector expanded. In 2010 only 2 per cent of Americans worked in agriculture, 20 per cent worked in industry, and 78 per cent worked as teachers, doctors, webpage designers and so forth. When mindless algorithms are able to teach, diagnose and design better than humans, what will we do?

This is not an entirely new question. Ever since the Industrial Revolution erupted, people feared that mechanisation might cause mass unemployment. This never happened, because as old professions became obsolete, new professions evolved, and there was always something humans could do better than machines. Yet this is not a law of nature, and nothing guarantees it will continue to be like that in the future. Humans have two basic types of abilities: physical abilities and cognitive abilities. As long as machines competed with us merely in physical abilities, you could always find cognitive tasks that humans do better. So machines took over purely manual jobs, while humans focused on jobs requiring at least some cognitive skills. Yet what will happen once algorithms outperform us in remembering, analysing and recognising patterns?

The idea that humans will always have a unique ability beyond the reach of non-conscious algorithms is just wishful thinking. The current scientific answer to this pipe dream can be summarised in three simple principles:

1. Organisms are algorithms. Every animal – including Homo sapiens – is an assemblage of organic algorithms shaped by natural selection over millions of years of evolution.

2. Algorithmic calculations are not affected by the materials from which you build the calculator. Whether you build an abacus from wood, iron or plastic, two beads plus two beads equals four beads.

3. Hence there is no reason to think that organic algorithms can do things that non-organic algorithms will never be able to replicate or surpass. As long as the calculations remain valid, what does it matter whether the algorithms are manifested in carbon or silicon?
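The second and third principles amount to a claim of substrate independence: the outcome of a calculation does not depend on what performs it. A trivial sketch of the abacus point – the same addition carried out with 'beads' and with machine integers, giving the same answer:

```python
# Substrate independence in miniature: the same counting algorithm gives the
# same answer whether the 'beads' are list items or machine integers.

def add_with_beads(a_beads, b_beads):
    """Addition performed by pooling physical tokens and counting them."""
    return len(a_beads + b_beads)

def add_with_integers(a, b):
    """The same operation performed on silicon."""
    return a + b

wooden = add_with_beads(["bead", "bead"], ["bead", "bead"])
silicon = add_with_integers(2, 2)
assert wooden == silicon == 4  # two beads plus two beads equals four beads
```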

True, at present there are numerous things that organic algorithms do better than non-organic ones, and experts have repeatedly declared that something will 'for ever' remain beyond the reach of non-organic algorithms. But it turns out that 'for ever' often means no more than a decade or two. Until a short time ago, facial recognition was a favourite example of something which even babies accomplish easily but which escaped even the most powerful computers on earth. Today facial-recognition programs are able to recognise people far more efficiently and quickly than humans can. Police forces and intelligence services now use such programs to scan countless hours of video footage from surveillance cameras, tracking down suspects and criminals.

In the 1980s when people discussed the unique nature of humanity, they habitually used chess as primary proof of human superiority. They believed that computers would never beat humans at chess. On 10 February 1996, IBM's Deep Blue defeated world chess champion Garry Kasparov, laying to rest that particular claim for human pre-eminence.

Deep Blue was given a head start by its creators, who preprogrammed it not only with the basic rules of chess, but also with detailed instructions regarding chess strategies. A new generation of AI uses machine learning to do even more remarkable and elegant things. In February 2015 a program developed by Google DeepMind learned by itself how to play forty-nine classic Atari games. One of the developers, Dr Demis Hassabis, explained that 'the only information we gave the system was the raw pixels on the screen and the idea that it had to get a high score. And everything else it had to figure out by itself.' The program managed to learn the rules of all the games it was presented with, from Pac-Man and Space Invaders to car racing and tennis games. It then played most of them as well as or better than humans, sometimes coming up with strategies that never occur to human players.13

Deep Blue defeating Garry Kasparov.

STAN HONDA/AFP/Getty Images.
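The DeepMind result described above follows the standard reinforcement-learning loop: the agent sees raw pixels, receives the score as its only reward, and gradually favours actions that raised the score. The skeleton below is only a sketch of that loop – it uses a simple lookup table where DeepMind used a deep neural network, and the environment hook is hypothetical.

```python
# Sketch of the learn-from-score loop behind game-playing agents. The tabular
# value store stands in for DeepMind's deep Q-network; the environment is hypothetical.
import random
from collections import defaultdict

ACTIONS = ["left", "right", "fire", "noop"]
q_values = defaultdict(float)              # (state, action) -> estimated value
alpha, gamma, epsilon = 0.1, 0.99, 0.1     # learning rate, discount, exploration

def choose_action(state):
    """Mostly pick the best-known action, occasionally explore at random."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_values[(state, a)])

def learn(state, action, reward, next_state):
    """Nudge the value of (state, action) towards reward plus discounted future value."""
    best_next = max(q_values[(next_state, a)] for a in ACTIONS)
    target = reward + gamma * best_next
    q_values[(state, action)] += alpha * (target - q_values[(state, action)])

# In a real Atari wrapper, 'state' would be a stack of screen pixels and
# 'reward' the change in score returned by the emulator after each action.
```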

Computer algorithms have recently proven their worth in ball games, too. For many decades, baseball teams used the wisdom, experience and gut instincts of professional scouts and managers to pick players. The best players fetched millions of dollars, and naturally enough the rich teams got the cream of the market, whereas poorer teams had to settle for the scraps. In 2002 Billy Beane, the manager of the low-budget Oakland Athletics, decided to beat the system. He relied on an arcane computer algorithm developed by economists and computer geeks to create a winning team from players that human scouts overlooked or undervalued. The old-timers were incensed by Beane's algorithm transgressing into the hallowed halls of baseball. They said that picking baseball players is an art, and that only humans with an intimate and long-standing experience of the game can master it. A computer program could never do it, because it could never decipher the secrets and the spirit of baseball.

They soon had to eat their baseball caps. Beane's shoestring-budget algorithmic team ($44 million) not only held its own against baseball giants such as the New York Yankees ($125 million), but became the first team ever in American League baseball to win twenty consecutive games. Not that Beane and Oakland could enjoy their success for long. Soon enough, many other baseball teams adopted the same algorithmic approach, and since the Yankees and Red Sox could pay far more for both baseball players and computer software, low-budget teams such as the Oakland Athletics now had an even smaller chance of beating the system than before.14

In 2004 Professor Frank Levy from MIT and Professor Richard Murnane from Harvard published a thorough study of the job market, listing those professions most likely to undergo automation. Truck drivers were given as an example of a job that could not possibly be automated in the foreseeable future. It is hard to imagine, they wrote, that algorithms could safely drive trucks on a busy road. A mere ten years later, Google and Tesla not only imagine this, but are actually making it happen.15

In fact, as time goes by, it becomes easier and easier to replace humans with computer algorithms, not merely because the algorithms are getting smarter, but also because humans are professionalising. Ancient hunter-gatherers mastered a very wide variety of skills in order to survive, which is why it would be immensely difficult to design a robotic hunter-gatherer. Such a robot would have to know how to prepare spear points from flint stones, how to find edible mushrooms in a forest, how to use medicinal herbs to bandage a wound, how to track down a mammoth and how to coordinate a charge with a dozen other hunters. However, over the last few thousand years we humans have been specialising. A taxi driver or a cardiologist specialises in a much narrower niche than a hunter-gatherer, which makes it easier to replace them with AI.

Even the managers in charge of all these activities can be replaced. Thanks to its powerful algorithms, Uber can manage millions of taxi drivers with only a handful of humans. Most of the commands are given by the algorithms without any need of human supervision.16

In May 2014 Deep Knowledge Ventures – a Hong Kong venture-capital firm specialising in regenerative medicine – broke new ground by appointing an algorithm called VITAL to its board. VITAL makes investment recommendations by analysing huge amounts of data on the financial situation, clinical trials and intellectual property of prospective companies. Like the other five board members, the algorithm gets to vote on whether the firm makes an investment in a specific company or not.

Examining VITAL's record so far, it seems that it has already picked up one managerial vice: nepotism. It has recommended investing in companies that grant algorithms more authority. With VITAL's blessing, Deep Knowledge Ventures has recently invested in Silico Medicine, which develops computer-assisted methods for drug research, and in Pathway Pharmaceuticals, which employs a platform called OncoFinder to select and rate personalised cancer therapies.17

As algorithms push humans out of the job market, wealth might become concentrated in the hands of the tiny elite that owns the all-powerful algorithms, creating unprecedented social inequality. Alternatively, the algorithms might not only manage businesses, but actually come to own them. At present, human law already recognises intersubjective entities like corporations and nations as 'legal persons'. Though Toyota or Argentina has neither a body nor a mind, they are subject to international laws, they can own land and money, and they can sue and be sued in court. We might soon grant similar status to algorithms. An algorithm could then own a venture-capital fund without having to obey the wishes of any human master.

If the algorithm makes the right decisions, it could accumulate a fortune, which it could then invest as it sees fit, perhaps buying your house and becoming your landlord. If you infringe on the algorithm's legal rights – say, by not paying rent – the algorithm could hire lawyers and sue you in court. If such algorithms consistently outperform human fund managers, we might end up with an algorithmic upper class owning most of our planet. This may sound impossible, but before dismissing the idea, remember that most of our planet is already legally owned by non-human inter-subjective entities, namely nations and corporations. Indeed, 5,000 years ago much of Sumer was owned by imaginary gods such as Enki and Inanna. If gods can possess land and employ people, why not algorithms?

So what will people do? Art is often said to provide us with our ultimate (and uniquely human) sanctuary. In a world where computers replace doctors, drivers, teachers and even landlords, everyone would become an artist. Yet it is hard to see why artistic creation will be safe from the algorithms. Why are we so sure computers will be unable to better us in the composition of music? According to the life sciences, art is not the product of some enchanted spirit or metaphysical soul, but rather of organic algorithms recognising mathematical patterns. If so, there is no reason why non-organic algorithms couldn't master it.

David Cope is a musicology professor at the University of California in Santa Cruz. He is also one of the more controversial figures in the world of classical music. Cope has written programs that compose concertos, chorales, symphonies and operas. His first creation was named EMI (Experiments in Musical Intelligence), which specialised in imitating the style of Johann Sebastian Bach. It took seven years to create the program, but once the work was done, EMI composed 5,000 chorales à la Bach in a single day. Cope arranged a performance of a few select chorales in a music festival at Santa Cruz. Enthusiastic members of the audience praised the wonderful performance, and explained excitedly how the music touched their innermost being. They didn't know it was composed by EMI rather than Bach, and when the truth was revealed, some reacted with glum silence, while others shouted in anger.

EMI continued to improve, and learned to imitate Beethoven, Chopin, Rachmaninov and Stravinsky. Cope got EMI a contract, and its first album – Classical Music Composed by Computer – sold surprisingly well. Publicity brought increasing hostility from classical-music buffs. Professor Steve Larson from the University of Oregon sent Cope a challenge for a musical showdown. Larson suggested that professional pianists play three pieces one after the other: one by Bach, one by EMI, and one by Larson himself. The audience would then be asked to vote who composed which piece. Larson was convinced people would easily tell the difference between soulful human compositions, and the lifeless artefact of a machine. Cope accepted the challenge. On the appointed date, hundreds of lecturers, students and music fans assembled in the University of Oregon's concert hall. At the end of the performance, a vote was taken. The result? The audience thought that EMI's piece was genuine Bach, that Bach's piece was composed by Larson, and that Larson's piece was produced by a computer.

Critics continued to argue that EMI's music is technically excellent, but that it lacks something. It is too accurate. It has no depth. It has no soul. Yet when people heard EMI's compositions without being informed of their provenance, they frequently praised them precisely for their soulfulness and emotional resonance.
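Cope has never published EMI as a few lines of code, and the sketch below is emphatically not his method. But the general family of techniques – learning which note tends to follow which in a composer's scores, then sampling new sequences from those statistics – can be shown with a first-order Markov chain trained on an invented melody.

```python
# Not EMI: a toy first-order Markov model of melody, trained on an invented
# note sequence, to illustrate statistical style imitation in general.
import random
from collections import defaultdict

training_melody = ["C", "E", "G", "E", "C", "E", "G", "C", "D", "E", "C"]

# Learn which notes follow which in the 'training corpus'.
transitions = defaultdict(list)
for current_note, next_note in zip(training_melody, training_melody[1:]):
    transitions[current_note].append(next_note)

def compose(length=8, start="C"):
    """Sample a new melody from the learned note-to-note statistics."""
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1]) or training_melody
        melody.append(random.choice(choices))
    return melody

print(compose())   # e.g. ['C', 'E', 'G', 'E', 'C', 'D', 'E', 'C']
```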

Following EMI's successes, Cope created newer and even more sophisticated programs. His crowning achievement was Annie. Whereas EMI composed music according to predetermined rules, Annie is based on machine learning. Its musical style constantly changes and develops in reaction to new inputs from the outside world. Cope has no idea what Annie is going to compose next. Indeed, Annie does not restrict itself to music composition but also explores other art forms such as haiku poetry. In 2011 Cope published Comes the Fiery Night: 2,000 Haiku by Man and Machine. Of the 2,000 haikus in the book, some are written by Annie, and the rest by organic poets. The book does not disclose which are which. If you think you can tell the difference between human creativity and machine output, you are welcome to test your claim.18

In the nineteenth century the Industrial Revolution created a huge new class of urban proletariats, and socialism spread because no one else managed to answer their unprecedented needs, hopes and fears. Liberalism eventually defeated socialism only by adopting the best parts of the socialist programme. In the twenty-first century we might witness the creation of a new massive class: people devoid of any economic, political or even artistic value, who contribute nothing to the prosperity, power and glory of society.

In September 2013 two Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, published 'The Future of Employment', in which they surveyed the likelihood of different professions being taken over by computer algorithms within the next twenty years. The algorithm developed by Frey and Osborne to do the calculations estimated that 47 per cent of US jobs are at high risk. For example, there is a 99 per cent probability that by 2033 human telemarketers and insurance underwriters will lose their jobs to algorithms. There is a 98 per cent probability that the same will happen to sports referees, 97 per cent that it will happen to cashiers and 96 per cent to chefs. Waiters – 94 per cent. Paralegal assistants – 94 per cent. Tour guides – 91 per cent. Bakers – 89 per cent. Bus drivers – 89 per cent. Construction labourers – 88 per cent. Veterinary assistants – 86 per cent. Security guards – 84 per cent. Sailors – 83 per cent. Bartenders – 77 per cent. Archivists – 76 per cent. Carpenters – 72 per cent. Lifeguards – 67 per cent. And so forth. There are of course some safe jobs. The likelihood that computer algorithms will displace archaeologists by 2033 is only 0.7 per cent, because their job requires highly sophisticated types of pattern recognition, and doesn't produce huge profits. Hence it is improbable that corporations or government will make the necessary investment to automate archaeology within the next twenty years.19

Of course, by 2033 many new professions are likely to appear, for example, virtual-world designers. But such professions will probably require much more creativity and flexibility than your run-of-the-mill job, and it is unclear whether forty-year-old cashiers or insurance agents will be able to reinvent themselves as virtual-world designers (just try to imagine a virtual world created by an insurance agent!). And even if they do so, the pace of progress is such that within another decade they might have to reinvent themselves yet again. After all, algorithms might well outperform humans in designing virtual worlds too. The crucial problem isn't creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms.20

The technological bonanza will probably make it feasible to feed and support the useless masses even without any effort on their side. But what will keep them occupied and content? People must do something, or they will go crazy. What will they do all day? One solution might be offered by drugs and computer games. Unnecessary people might spend increasing amounts of time within 3D virtual-reality worlds, which would provide them with far more excitement and emotional engagement than the drab reality outside. Yet such a development would deal a mortal blow to the liberal belief in the sacredness of human life and of human experiences. What's so sacred in useless bums who pass their days devouring artificial experiences in La La Land?

Some experts and thinkers, such as Nick Bostrom, warn that humankind is unlikely to suffer this degradation, because once artificial intelligence surpasses human intelligence, it might simply exterminate humankind. The AI is likely to do so either for fear that humankind would turn against it and try to pull its plug, or in pursuit of some unfathomable goal of its own. For it would be extremely difficult for humans to control the motivation of a system smarter than themselves.

Even preprogramming the system with seemingly benign goals might backfire horribly. One popular scenario imagines a corporation designing the first artificial super-intelligence, and giving it an innocent test such as calculating pi. Before anyone realises what is happening, the AI takes over the planet, eliminates the human race, launches a conquest campaign to the ends of the galaxy, and transforms the entire known universe into a giant super-computer that for billions upon billions of years calculates pi ever more accurately. After all, this is the divine mission its Creator gave it.21

A Probability of 87 Per Cent

At the beginning of this chapter we identified several practical threats to liberalism. The first is that humans might become militarily and economically useless. This is just a possibility, of course, not a prophecy. Technical difficulties or political objections might slow down the algorithmic invasion of the job market. Alternatively, since much of the human mind is still uncharted territory, we don't really know what hidden talents humans might discover, and what novel jobs they might create to replace the losses. That, however, may not be enough to save liberalism. For liberalism believes not just in the value of human beings; it also believes in individualism. The second threat facing liberalism is that in the future, while the system might still need humans, it will not need individuals. Humans will continue to compose music, to teach physics and to invest money, but the system will understand these humans better than they understand themselves, and will make most of the important decisions for them. The system will thereby deprive individuals of their authority and freedom.

The liberal belief in individualism is founded on the three important assumptions that we discussed earlier in the book:

1. I am an in-dividual, i.e. I have a single essence which cannot be divided into any parts or subsystems. True, this inner core is wrapped in many outer layers. But if I make the effort to peel these external crusts, I will find deep within myself a clear and single inner voice, which is my authentic self.

2. My authentic self is completely free.

3. It follows from the first two assumptions that I can know things about myself nobody else can discover. For only I have access to my inner space of freedom, and only I can hear the whispers of my authentic self. This is why liberalism grants the individual so much authority. I cannot trust anyone else to make choices for me, because no one else can know who I really am, how I feel and what I want. This is why the voter knows best, why the customer is always right and why beauty is in the eye of the beholder.

However, the life sciences challenge all three assumptions. According to the life sciences:

1. Organisms are algorithms, and humans are not individuals; they are 'dividuals', i.e. humans are an assemblage of many different algorithms lacking a single inner voice or a single self.

2. The algorithms constituting a human are not free. They are shaped by genes and environmental pressures, and take decisions either deterministically or randomly, but not freely.

3. It follows that an external algorithm could theoretically know me much better than I can ever know myself. An algorithm that monitors each of the systems that comprise my body and my brain could know exactly who I am, how I feel and what I want. Once developed, such an algorithm could replace the voter, the customer and the beholder. Then the algorithm will know best, the algorithm will always be right, and beauty will be in the calculations of the algorithm.

During the nineteenth and twentieth centuries, the belief in individualism nevertheless made good practical sense, because there were no external algorithms that could actually monitor me effectively. States and markets may have wished to do exactly that, but they lacked the necessary technology. The KGB and FBI had only a vague understanding of my biochemistry, genome and brain, and even if agents bugged every phone call I made and recorded every chance encounter on the street, they did not have the computing power to analyse all this data. Consequently, given twentieth-century technological conditions, liberals were right to argue that nobody can know me better than I know myself. Humans therefore had a very good reason to regard themselves as an autonomous system, and to follow their own inner voices rather than the commands of Big Brother.

However, twenty-first-century technology may enable external algorithms to know me far better than I know myself, and once this happens, the belief in individualism will collapse and authority will shift from individual humans to networked algorithms. People will no longer see themselves as autonomous beings running their lives according to their wishes, and instead become accustomed to seeing themselves as a collection of biochemical mechanisms that is constantly monitored and guided by a network of electronic algorithms. For this to happen, there is no need of an external algorithm that knows me perfectly, and that never makes any mistakes; it is enough that an external algorithm will know me better than I know myself, and will make fewer mistakes than me. It will then make sense to trust this algorithm with more and more of my decisions and life choices.

We have already crossed this line as far as medicine is concerned. In the hospital, we are no longer individuals. Who do you think will make the most momentous decisions about your body and your health during your lifetime? It is highly likely that many of these decisions will be taken by computer algorithms such as IBM's Watson. And this is not necessarily bad news. Diabetics already carry sensors that automatically check their sugar level several times a day, alerting them whenever it crosses a dangerous threshold. In 2014 researchers at Yale University announced the first successful trial of an 'artificial pancreas' controlled by an iPhone. Fifty-two diabetics took part in the experiment. Each patient had a tiny sensor and a tiny pump implanted in his or her stomach. The pump was connected to small tubes of insulin and glucagon, two hormones that together regulate sugar levels in the blood. The sensor constantly measured the sugar level, transmitting the data to an iPhone. The iPhone hosted an application that analysed the information, and whenever necessary gave orders to the pump, which injected measured amounts of either insulin or glucagon without any need of human intervention.22

Many other people who suffer from no serious illnesses have begun to use wearable sensors and computers to monitor their health and activities. The devices, incorporated into anything from smartphones and wristwatches to armbands and underwear, record diverse biometric data such as blood pressure. The data is then fed into sophisticated computer programs, which advise you how to change your diet and daily routines so as to enjoy improved health and a longer and more productive life.23 Google, together with the drug giant Novartis, is developing a contact lens that checks glucose levels in the blood every few seconds, by testing tear contents.24 Pixie Scientific sells 'smart diapers' that analyse baby poop for clues about the baby's medical condition. In November 2014 Microsoft launched the Microsoft Band, a smart armband that monitors among other things your heartbeat, the quality of your sleep and the number of steps you take each day. An application called Deadline goes a step further, telling you how many years of life you have left, given your current habits.
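To make the artificial-pancreas loop concrete, here is a minimal sketch of the kind of decision step such an application might perform: a glucose reading arrives from the sensor, the program checks it against a target range, and, if needed, it issues a command to the pump. The thresholds, the dosing formula and names such as decide_dose and PumpCommand are hypothetical illustrations, not the actual logic of the Yale trial or of any commercial device.

    # Hypothetical sketch of a closed-loop glucose controller (illustration only).
    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class PumpCommand:
        hormone: str        # "insulin" lowers blood sugar, "glucagon" raises it
        dose_units: float


    def decide_dose(glucose_mg_dl: float,
                    low: float = 70.0,
                    high: float = 180.0) -> Optional[PumpCommand]:
        """Return a pump command when the reading leaves the target range."""
        if glucose_mg_dl > high:
            # Hypothetical proportional dosing: more insulin the further above range.
            return PumpCommand("insulin", round((glucose_mg_dl - high) / 50, 2))
        if glucose_mg_dl < low:
            return PumpCommand("glucagon", round((low - glucose_mg_dl) / 50, 2))
        return None  # within range: no intervention


    if __name__ == "__main__":
        # Simulated readings standing in for the implanted sensor's data stream.
        for reading in [95.0, 210.0, 62.0]:
            command = decide_dose(reading)
            action = f"{command.hormone} {command.dose_units}u" if command else "no action"
            print(f"glucose {reading} mg/dL -> {action}")

Run as a script, this prints one decision per simulated reading; the point is simply that the human in the loop is replaced by a rule evaluated by software.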

Some people use these apps without thinking too deeply about it, but for others this is already an ideology, if not a religion.
