Don’t get complacent about Amazon’s Robots: be optimistic instead!

In an article for the Technology Liberation Front, Adam Thierer of George Mason University becomes the latest academic to reassure us that AI and robots won’t steal our jobs.i His article relies on three observations: First, Amazon is keen to automate its warehouses, but it is still hiring more humans. Second, ATMs didn’t destroy the jobs of human tellers. Third, automation has not caused widespread lasting unemployment in the past.


Unfortunately, the first of these claims is true but irrelevant, the second is almost certainly false, and the third is both irrelevant and false.

Amazon has automated much of what humans previously did in warehouses, which has undoubtedly reduced the number of humans per dollar of value added, but the penetration of retail by e-commerce is rising very fast, and Amazon is taking share from other retailers, so it is not surprising that it is still hiring. Amazon may never get to the legendary “dark” warehouse staffed only by a human and a dog (where the dog’s job is to keep the human away from the expensive-looking machines), but it will keep pushing as far in that direction as it can. One of the major hurdles is in picking, and that looks like falling fairly soon – in years not decades.ii


ATMs did destroy bank tellers’ jobs, but some of the peak years of their introduction to the US market coincided with a piece of financial deregulation, the Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994, which removed many of the restrictions on opening bank branches across state lines. Most of the growth in the branch network occurred after this Act was passed in 1994, not before it. Teller numbers did not rise in the same way in other countries during the period. In the UK, for instance, retail bank employment just about held steady at around 350,000 between 1997 and 2013, despite significant growth in the country’s population, its wealth, and its demand for sophisticated financial services.iii

As for the third observation, automation certainly has caused widespread lasting unemployment in the past – of horses. Almost all the automation we have seen so far has been mechanisation, the replacement of human and animal muscle power by steam and then electric power. In 1900 around 25 million horses were working on American farms and now there are none. The lives of humans, by contrast, were only temporarily disrupted – albeit severely and painfully – by this mechanisation. The grandchildren of US farm labourers from 1900 now work in cities in offices, shops and factories.

The question is whether it is different this time round, because the new wave of automation – which has hardly begun as yet – is cognitive automation, not mechanisation. My book “The Economic Singularity” presents a carefully-argued case that it is.

Of course no-one knows for certain what will happen in the next few decades. Mr Thierer and others may be right, and the people (like me) that he excoriates as “automation alarmists” and “techno-pessimists” who “suffer from a lack of imagination” may be wrong. Time will tell.

But if we are right, and society fails to prepare for the massively disruptive impact of technological unemployment, the outcome could be grave. If we are wrong, and some modest effort is spent on analysing a problem that never occurs, almost nothing would be lost. The equation is not too hard to figure out.


Finally, I reject the claim that the people who take the prospect of technological unemployment seriously are necessarily pessimistic. It is optimistic, not pessimistic, to believe that our most plausible and most positive route through the economic singularity is to work out how to build or evolve the post-scarcity Star Trek economy, in which the goods and services that we all need for a flourishing life are virtually free. We should aim for a world in which machines do all the boring stuff and humans get on with the important things in life, like playing, exploring, learning, socialising, discovering, and having fun. I refuse to believe that being an Amazon warehouse worker or an actuary is the pinnacle of human fulfilment.

Surely it is those who do think that, and who insist that we all have to stay in jobs forever, who are the true pessimists.

In the future, education may be vacational, not vocational

This post is co-written with Julia Begbie, who develops cutting-edge online courses as a director of a design college in London.

Five classrooms

Some people (including us) think that within a generation or two, many or most people will be unemployable because machines will perform every task that we can do for money better, faster and cheaper than we can.

Other people think that humans will always remain in paid employment because we will have skills to offer which machines never will. These people usually go on to argue that humans will need further education and training to remain in work – and lots of it: we will have to re-train many times over the course of a normal career as the machines keep taking over some of the tasks which comprise our jobs. “Turning truckers into coders” could be a slogan for these people, despite its apparent implausibility.

There are several problems with this policy prescription. First, we do not know what skills to train for. One school of thought says that we will work ever more closely with computers, and uses the metaphor of centaurs, the half-man, half-horse creatures from Greek mythology. This school argues that we should focus education and training on STEM subjects (science, technology, engineering and maths) and downgrade the resources allocated to the humanities and the social sciences. But a rival school of thought argues that the abilities which only humans can offer are our creativity and our empathy, and therefore the opposite approach should be adopted.

Second, the churn in the job market is accelerating, and within a few years, the education process will simply be too slow. It takes years to train a lawyer, or a coder, and if people are to stay ahead of the constantly-improving machines in the job market, we are likely to have to undergo numerous periods of re-training. How long will it be before each period of re-training takes longer than the career it equips us for? And is that sustainable?

Third, reforming education systems is notoriously difficult. Over the years, educational reform has been proposed as the solution to many social and economic problems, and it rarely gets very far. Education has evolved over the last 100 years, and teachers are more professional and better trained than they used to be. But as the pictures above illustrate, most classrooms around the world today look much the same as they did 100 years ago, with serried ranks of children listening to mini-lectures from teachers. The fundamental educational processes and norms developed to build up the labour force required by the industrial revolution have survived numerous attempts to reform them, partly because reforming a vast social enterprise which looks after our children is hard, and partly because the educational establishment, like any establishment, tends to resist change.

It therefore seems unlikely that educational reform will be much assistance in tackling the wave of technological unemployment which may be heading our way.

And oddly, this may not be a problem. If, as we believe, many or most people will be unemployable within a generation or so, the kind of education that will serve us best is one which equips us for a life of leisure: education that is vacational rather than vocational. This means a broad combination of sciences, humanities and social sciences, which will teach us both how the world works (the business of science), and also how we work as humans – from the inside (the business of novelists, artists and philosophers). This is pretty much what our current educational systems attempt to do, and although they come in for a lot of criticism (some of it justified), by and large they don’t do a bad job of it in most countries.

Although educational systems probably won’t be reformed by government diktat in order to help us stay in jobs, they will be reformed in due course anyway, because new technologies and approaches are becoming available which will make education more personalised, more effective and more enjoyable. Some of this will be enabled by artificial intelligence.


New-ish techniques like flipped learning, distance learning, and competency-based learning have been around for years. They have demonstrated their effectiveness in trials, and they have been adopted by some of the more forward-thinking institutions, but they have been slow to replace the older approaches more generally. More recently, in 2013, massive open online courses (MOOCs) were heralded as the death-knell for traditional tertiary education, but they have since gone quiet, because the support technologies they required (such as automated marking) were not ready for prime-time.

MOOCs will return, and the revolution which they and other new approaches promised will happen. We will have AI education assistants which know exactly which lessons and skills we have mastered, and which ones we need to acquire next. These assistants will understand which approach to learning suits us best, which times of day we are most receptive, and which times we are best left to relax or rest. Education will be less regimented, more flexible, and much more closely tailored to our individual preferences and needs. Above all, it will be more fun.


The main contribution of education to technological unemployment will probably be to make it enjoyable rather than to prevent it.

Future Bites 8 – Reputation management and algocracy

The eighth in a series of un-forecasts* – little glimpses of what may lie ahead in the century of two singularities.

This article first appeared on the excellent blog run by Circus Street (here), a digital training provider for marketers.


In the old days, before artificial intelligence started to really work in the mid-2010s, the clients for reputation management services were rich and powerful: companies, government departments, environmental lobbying groups and other non-government organisations, and of course celebrities. The aims were simple: accentuate the good, minimise the bad. Sometimes the task was to squash a potentially damaging story that could grow into a scandal. Sometimes it was to promote a film, a book, or a policy initiative.

Practitioners needed privileged access to journalists in the mainstream media, to politicians and policy makers, and to the senior business people who shaped the critical buying decisions of large companies. They were formidable networkers with enviable contacts in the media and business elite. They usually had very blue-chip educational and early-career backgrounds, and offered patronage in the form of juicy stories and un-attributable briefings to compliant journalists.

Digital democratisation

The information revolution democratised reputation management along with everything else. It made the service available to a vastly wider range of people. If you were a serious candidate for a senior job in business, government, or the third sector, you needed to ensure that no skeletons came tumbling out of your closet at the wrong moment. Successful people needed to be seen as thought leaders and formidable networkers, and this did not happen by accident.

The aims of reputation management were the same as before, but just as the client base was now much wider, so too was the arena in which the service was provided. The mainstream media had lost its exclusive stranglehold on public attention and public opinion. Facebook and Twitter could often be more influential than a national newspaper. The blogosphere, YouTube, Pinterest, and Reddit were now crucial environments, along with many more, and the players were changing almost daily.


The practitioners were different too. No longer just Oxbridge-educated, Savile Row-tailored types, they included T-shirt-clad young men and women whose main skill was being up-to-date with the latest pecking order among online platforms. People with no deep understanding of public policy, but a knack for predicting which memes would go viral on YouTube. Technically adept people who knew how to disseminate an idea economically across hundreds of different digital platforms. Most of all, they included people who knew how to wrangle AI bots.

Reputation bots

Bots scoured the web for good news and bad. They reviewed vast hinterlands of information, looking for subtle seeds of potential scandal sown by jealous rivals. Their remit was the entire internet, an impossibly broad arena for un-augmented humans to cover. Every mention of a client’s name, industry sector, or professional area of interest was tracked and assessed. Reputations were quantified. Indices were established where the reputations of brands and personalities could be tracked – and even traded.
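The quantification described above can be sketched very simply. This is a toy illustration only: the lexicon, the scoring rule, and the 0–100 scale are all invented for this example, not a description of any real reputation-management product.

```python
# Illustrative sketch of a reputation index built from tracked mentions.
# The word lists and scoring scheme are invented for this example.
POSITIVE = {"praised", "award", "innovative", "trusted"}
NEGATIVE = {"scandal", "lawsuit", "fraud", "outrage"}

def mention_score(text: str) -> int:
    """Score a single mention: +1 per positive cue word, -1 per negative."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

def reputation_index(mentions: list[str]) -> float:
    """Average mention score, mapped onto a 0-100 index (50 = neutral)."""
    if not mentions:
        return 50.0
    avg = sum(mention_score(m) for m in mentions) / len(mentions)
    return max(0.0, min(100.0, 50.0 + 25.0 * avg))

mentions = [
    "Acme praised for innovative warehouse robots",
    "Acme faces lawsuit over data scandal",
]
print(reputation_index(mentions))  # → 50.0: one good story, one bad
```

A real bot would of course use machine-learnt sentiment models rather than a hand-written lexicon, but the principle – every mention tracked, scored, and rolled up into a tradeable index – is the same.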

All this meant lots of work for less traditionally qualified people. Clients who weren’t rich couldn’t afford the established consultants’ exorbitant fees, and they didn’t need them anyway. Less mainstream practitioners deploying clever bots could achieve impressive results for far less money. As the number of actual and potential clients for reputation management services grew exponentially, so did the number of practitioners. The same phenomenon was observed in many areas of professional services, and became known as the “iceberg effect”: the old, restricted client base turned out to be just the tip of a vast demand that had previously been hidden and unserved.


But pretty soon, the bots started to learn from the judgement of practitioners and clients, and needed less and less input from humans to weave their magic. And as the bots became more adept, their services became more sophisticated. They practised offence as well as defence: placing stories about their clients’ competitors, and duelling with bots employed by those rivals, twisting each other’s messages into racist, sexist or otherwise offensive versions – tactics that many of their operators were happy to run with and help refine.


Of course, as the bots became increasingly autonomous, the number of real humans doing the job started to shrink again. Clients started to in-source the service. Personal AIs – descendants of Siri and Alexa, evolved under Moore’s Law – offered the service. Users began relying on these AIs to the point where the machines had free access to censor their owners’ emails and other communications. People realised that the AIs’ judgement was better than their own, and surrendered willingly to this oversight. Social commentators railed against the phenomenon, clamouring that humans were diminishing themselves, and warning of the rise of a so-called “algocracy”.

Their warnings were ignored. AI works: how could any sane person choose to make stupid decisions when their AI could make smart ones instead?

* This un-forecast is not a prediction.  Predictions are almost always wrong, so we can be pretty confident that the future will not turn out exactly like this.  It is intended to make the abstract notion of technological unemployment more real, and to contribute to scenario planning.  Failing to plan is planning to fail: if you have a plan, you may not achieve it, but if you have no plan, you most certainly won’t.  In a complex environment, scenario development is a valuable part of the planning process. Thinking through how we would respond to a sufficient number of carefully thought-out scenarios could well help us to react more quickly when we see the beginnings of what we believe to be a dangerous trend.

Don’t just speed up the mess

Guest post by Matt Buskell, head of customer engagement at Rainbird

One day back in 1999, I was sitting in a TV studio with a client. We were being interviewed about something called the world wide web. The interviewer was asking if it would change the world. It seems silly to say that now, but it was all very new back then.

The interviewer asked, “Do you think this technology will transform your business?” The client was Peter Jones of Siemens, who was one of the most impressive transformation leaders I have ever met. He replied, “Yes, but we need to be careful that we don’t just speed up the mess”.

What Peter meant was that, applied without sufficient thought, technology doesn’t create a better process, just a faster one. It’s an observation I’ve thought about often over the years, and it is still as relevant now as it was back then.

We currently have clients in several different industries using robotic process automation (RPA). This is software (a “robot”) which captures and interprets existing applications in order to process transactions: manipulating data, triggering responses, and communicating with other digital systems.

As Andrew Burgess, one of the most experienced consultants in this space, says, “RPA is already delivering huge benefits to businesses. By automating common processes, it reduces the cost of service whilst increasing accuracy and speed. But one of the biggest challenges for any automation project is to embed decision making.”

The next generation of RPA promises to automate even more, by applying Artificial Intelligence (AI). Using natural language processing, robots can read unstructured text and determine workflow; they can make auditable decisions and judgements.

But there is still a danger of simply speeding up the mess.

To check for this, we often ask clients, “Could you improve your efficiency by moving some of the decisions in your process flow further upstream?”

Consider this. A salesperson aided by an AI co-pilot can provide the same value to a customer as a lawyer could. A call centre agent can become an underwriter, and a fitness instructor can become a physiotherapist. The co-pilot provides the human with knowledge and judgements far beyond their innate level of understanding.


Taxi drivers use mapping software as their co-pilots today. If they did not have the software they would have to learn the maps themselves, and not everyone has the patience to spend three years or more “doing the knowledge” which makes London’s black cab drivers so impressive.

An AI cannot mimic all the knowledge and understanding of a lawyer, but that’s not what we are asking it to do. We are asking it to make very specific judgements in very specific situations – and to do that on a massive scale.

Take an insurance help-desk. A customer sends a message saying, “I just had an accident; the other driver’s name was XYZ and their insurance company was ABC”. The RPA reads the text, determines that it’s a claim, creates a case file and adds it into the company’s workflow.

We have potentially cut out a call and automated several steps that the call centre agent would have gone through. We have saved time. So far, so good.

However, the case file still must follow a “First Notice of Loss” (FNOL) process and must still be reviewed by a liability expert to determine which party was at fault. It can take another 40 steps until the case reaches a recovery agent who tries to claim a settlement from the other insurer.

So the AI has been helpful but not yet transformative, because we are still following the same business process as before.

Now imagine you could take the knowledge of the liability expert and the recovery expert and graft it into the claims process system. The AI can make the same decisions that these experts would have made, but by connecting them we can change the process. Whereas liability would traditionally have been determined around step 12 in the process and recovery would have started around step 30, this can now all happen at step 1. There is a ripple effect on the number of follow-on steps, and the whole process becomes much faster.
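The reordering described above can be sketched in a few lines. Everything here is invented for illustration – the function names, the keyword rules standing in for expert judgement, and the case-file shape – and it is not Rainbird’s or any vendor’s actual system; the point is only to show liability and recovery judgements being applied at intake (step 1) rather than dozens of steps downstream.

```python
# Toy sketch of "moving decisions upstream" in an FNOL-style claims process.
# All names and rules are invented; real systems encode far richer knowledge.
def classify(message: str) -> str:
    """Crude intake triage: is this message a claim or a general enquiry?"""
    return "claim" if "accident" in message.lower() else "enquiry"

def assess_liability(message: str) -> str:
    """Stand-in for the liability expert's judgement, encoded as a rule."""
    return "other_party" if "other driver" in message.lower() else "unclear"

def handle_message(message: str) -> dict:
    """Old process: liability decided around step 12, recovery started around
    step 30. Here both judgements are applied at step 1, at intake."""
    case = {"type": classify(message)}
    if case["type"] == "claim":
        case["liability"] = assess_liability(message)
        # Recovery can be kicked off immediately when the other party is at
        # fault, instead of waiting for dozens of human hand-offs.
        case["start_recovery"] = case["liability"] == "other_party"
    return case

msg = "I just had an accident; the other driver's name was XYZ"
print(handle_message(msg))
# → {'type': 'claim', 'liability': 'other_party', 'start_recovery': True}
```

Collapsing the decision points like this is what produces the ripple effect on the follow-on steps: once liability is known at intake, the intervening hand-offs exist only to execute, not to decide.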

Companies like Accredita are doing this for the FNOL process and making the technology available to insurers. Consultancies like Ernst & Young and PwC are evaluating the overall claims process and figuring out other ways that cognitive reasoning could change the business process. Large banks are assessing how cognitive reasoning and RPA can be used to automate and enhance the identification of credit card fraud.

Re-visiting our map analogy, RPA gets you to the right page on the map for your journey, while cognitive reasoning takes the map it has been given and calculates the route for you.

Peter has retired, but if he were still working he would surely agree that there is a fundamental shift in what can be achieved with business processes. Significant limitations have been removed, and with that comes a massive opportunity to do much more than just “speed up the mess”.

The second question asked in the interview back in 1999 was “How do you figure out where the value is in this new technology?”. Peter’s answer to that was “You won’t know until you get out there and try it”. That still holds true today.

Matthew Buskell has been helping companies be more innovative since 1996, when he graduated from Birmingham University with a degree in Cognitive Psychology and Software Engineering. He has helped clients navigate a series of technological waves: the Internet, Mobile, Big Data, and now Artificial Intelligence. He was on the SAP UK board until 2016 when he became head of customer engagement at Rainbird, a leading Cognitive Reasoning engine that is re-defining decision-making with AI in Financial Services and Insurance.

Andrew Burgess is a management consultant, author and speaker with over 25 years’ experience. He is considered an authority on innovative and disruptive technology, artificial intelligence, robotic process automation and impact sourcing. He is a former CTO who has run sourcing advisory firms and built automation practices.  He has been involved in many major change projects, including strategic development, IT transformation and outsourcing, in a wide range of industries across four continents. He is a member of the Advisory Boards of a number of ambitious disruptive companies.

What’s wrong with UBI – responses

Last week I posted an article called “What’s wrong with UBI?” It argued that two of the three component parts of UBI are unhelpful: its universality and its basic-ness.

The article was viewed 100,000 times on LinkedIn and provoked 430-odd comments. This is too many to respond to individually, so this follow-up article is the best I can offer by way of response. Sorry about that.

Fortunately, the responses cluster into five themes, which makes a collective response possible. They mostly said this:

You're an idiot
Expanding a little, they said this:

1. You’re an idiot because UBI is communism and we know that doesn’t work

2. You’re a callous sonofabitch because UBI is a sane and fair response to an unjust world. Oh, and you’re an idiot too

3. You’re an idiot because we know that automation has never caused lasting unemployment, so it won’t in the future

4. That was interesting, thank you

5. Assorted random bat-shit craziness

Clearly UBI provokes strong feelings, which I think is a good thing. For today’s economy, UBI is a pretty terrible prescription, and isn’t making political headway outside the fringes. But it does seem to many smart people (eg Elon Musk) to be a sensible option for tackling the economic singularity, which I explored in detail in my book, called, unsurprisingly, The Economic Singularity.

It is tempting to believe that since my article annoyed both ends of the political spectrum, I must be onto something. But of course that is false logic: traffic jams annoy pretty much everyone, which doesn’t mean they have any merit.

Anyway, here are some brief responses to the objections.

1. You’re an idiot because UBI is communism and we know that doesn’t work

Before becoming a full-time writer and speaker about AI, I spent 30 years in business. I firmly believe that capitalism (plus the scientific method) has made today the best time ever to be human. Previously, life was nasty, brutish and short. Now it isn’t, for most people. In other words, I am not a communist.

Communism is the public ownership of the means of production, distribution and exchange, and UBI does not require that. It does, however, require sharply increased taxation, and this can damage enterprise – unless goods and services can be made far cheaper. I wrote about that in a post called The Star Trek Economy (here).

2. You’re a callous sonofabitch because UBI is a sane and fair response to an unjust world. Oh, and you’re an idiot too

I recently heard a leading proponent of UBI respond to the question “How can we afford it?” with “How can we afford not to have it?” He seemed genuinely to think that was an adequate answer. Wow.

However, it is obvious that if technological automation renders half or more of the population unemployable, then we will need to find a way to give those unemployable people an income. In other words, we will have to de-couple income from jobs. I semi-seriously suggested an alternative to UBI called Progressive Comfortable Income, or PCI, because I see no sense in making payments to the initially large number of people who don’t need it because they are still working, and I don’t believe all the unemployed people will or should be content to live in penury: we want to live in comfort.

A lot of the respondents to my article argued that payments to the wealthy would be recovered in tax. But unless you’re going to set the marginal tax rate at 100% you will only recover part of the payment. You are also engaging in a pointless bureaucratic merry-go-round of payments.
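The arithmetic of the clawback argument is worth making explicit. The figures below are invented for illustration (a round payment and a plausible top marginal rate), but the conclusion holds for any rate below 100%: a meaningful fraction of each payment to a high earner is never recovered.

```python
# Illustrative arithmetic only; the payment and tax rate are invented figures.
ubi = 10_000          # annual UBI paid to every citizen, rich or poor
marginal_rate = 0.45  # top marginal tax rate applied to the payment

recovered = ubi * marginal_rate   # the part clawed back through tax
net_cost = ubi - recovered        # the part that stays with the high earner

print(recovered, net_cost)  # → 4500.0 5500.0
```

So at a 45% top rate, more than half of every payment to a wealthy recipient is a dead loss, before even counting the administrative cost of the merry-go-round itself.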

3. You’re an idiot because we know that automation has never caused lasting unemployment, so it won’t in the future

The most pernicious response, I think, is the claim that automation cannot cause lasting unemployment – because it has not done so in the past. This is not just poor reasoning (past economic performance is no guarantee of future outcomes); it is dangerous. It is also, as far as I can see, the view held by most mainstream economists today. It is the Reverse Luddite Fallacy*.

In the past, automation has mostly been mechanisation – the replacement of muscle power. The mechanisation of farming reduced human employment in agriculture from 80% of the labour force in 1800 to around 1% today. Humans went on to do other jobs, but horses did not, as they had no cognitive skills to offer. The impact on the horse population was catastrophic.

I suspect that economists either refuse or just fail to take Moore’s Law into account. This doubling process (which is not dying, just changing) means that machines in 2027 will be 128 times smarter than today’s, and machines in 2037 will be 8,000 times smarter.

I’ll say that again. If Moore’s law continues, then machines in 2037 will be 8,000 times smarter than today’s.
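The round figures above come from repeated doubling. As a back-of-envelope check, here is the arithmetic with an assumed 18-month doubling period (the period itself is an assumption; the 128× and 8,000× figures imply slightly different periods, but the order of magnitude is the same):

```python
# Back-of-envelope doubling arithmetic for machine capability.
# The 18-month doubling period is an assumption for illustration.
def capability_multiple(years: float, doubling_period_years: float = 1.5) -> float:
    """How many times more capable after the given number of years."""
    return 2 ** (years / doubling_period_years)

print(round(capability_multiple(10)))  # roughly 100x after a decade
print(round(capability_multiple(20)))  # roughly 10,000x after two decades
```

Whether the true multiple in 2037 is 8,000 or 10,000 hardly matters; what matters is that compounding makes it four orders of magnitude, not a modest increment.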

It’s very likely that these machines will be cheaper, better and faster at most of the tasks that humans could do for money.

Because the machines won’t be conscious (artificial general intelligence, or AGI, is probably further away than that), that still leaves humans with the central role of doing all the worthwhile things in life: namely, learning, exploring, socialising, playing, having fun.

That is surely the wonderful world we should be working towards, but there is no reason to think we will arrive there inevitably. Things could go wrong. It is no small task to work out what an economy looks like where income is de-coupled from jobs, and how to get there from here. Just waving a magic wand and saying “UBI will fix it” is not sufficient.

* Thanks to my friends at the Singularity Bros podcast for inventing this handy term.

What’s wrong with UBI?

One out of three ain’t good

Universal Basic Income (UBI) is a fashionable policy idea comprising three elements: it is universal, it is basic, and it is an income. Unfortunately, two of these elements are unhelpful, and to paraphrase Meat Loaf, one out of three ain’t good.

The giant sucking sound

The noted economist John Kay dealt the edifice of UBI a serious blow in May 2016 in an article (here, possibly behind a paywall) for the FT. He returned to his target a year later (here, no paywall) and pretty much demolished it. His argument is slightly technical, and it focuses on UBI as a policy for implementation today, so I won’t dwell on it. But if you are one of the many who think UBI is a great idea, it is well worth reading one or both articles to see how Kay demonstrates that “either the basic income is impossibly low, or the expenditure on it is impossibly high.”

To put it more bluntly than Kay does, if UBI were introduced at an adequate level in any one country (or group of countries) today, there would be a giant sucking sound, as many of the richer people in the jurisdiction would leave to avoid the punitive taxes that would pay for it.

UBI and technological unemployment

But what happens a few decades from now if a large minority – or a majority – of people are unemployable because smart machines have taken all the jobs that they could do? We don’t know for sure that this will happen, of course, but it is at least very plausible, so we would be crazy not to prepare for the eventuality. Kay explicitly ignores this question, but tech-savvy and thoughtful people like Elon Musk and Sam Altman think that UBI may be the answer.

Imagine a society where 40% of the population can no longer find paid employment because machines can do everything they could do for money cheaper, faster and better. Would the 60% who remained in work, including those in government, simply let them starve? I’m pretty sure they wouldn’t, even if only because 40% of a population being angry and desperate presents a serious security threat to the others.

Many people argue that UBI is the solution, and will be affordable because the machines will be so efficient that enormous wealth will be created in the economy which can support the burden of so many people who are not contributing. I describe elsewhere a “Generous Google” scenario in which a handful of tech firms are generating most of the world’s GDP, and in order to avoid social collapse they agree to share their vast wealth by funding a global UBI.

I suspect there are serious problems with the economics of this. Exceptional profits are usually competed away, and companies which manage to avoid that by establishing de facto monopolies sooner or later find themselves the subject of regulatory investigations. But putting that concern to one side, in the event of profound technological unemployment, should we ask the rich companies and individuals of the future to sponsor a UBI for the rest of us?

This is where Meat Loaf comes in. (Yay.)


The first of UBI’s three characteristics is its universality. It is paid to all citizens regardless of their economic circumstances. There are several reasons why its proponents want this. Experience shows that many benefits are only taken up by those they are intended for if everyone receives them. Means-tested benefits can have low uptake among their target recipients because they are too complicated to claim, or the beneficiaries feel uncomfortable about claiming them, or simply never find out about them. Child benefits in the UK are one well-known example. There is also the concern that UBI should not be stigmatised as a sign of failure in any sense.

But in the case of UBI, these considerations are surely outweighed by the massive inefficiency of universality. In our scenario of 40% unemployability, paying UBI to Rupert Murdoch, Bill Gates, and the millions of others who are still earning healthy incomes would be a terrible waste of resources.

Murdoch and cash
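To put rough numbers on that inefficiency, here is a minimal sketch of the arithmetic. All of the figures – the population size, the payment level, and the 40% unemployability rate – are illustrative assumptions taken from the scenario, not real data:

```python
# Illustrative comparison of a universal payment vs a targeted one.
# All figures are assumptions for the 40%-unemployability scenario, not data.

population = 60_000_000        # a roughly UK-sized population (assumption)
payment_per_person = 10_000    # annual payment per recipient (assumption)
share_unemployable = 0.40      # the scenario's 40% who cannot find paid work

# Universal: everyone receives the payment, including high earners.
universal_cost = population * payment_per_person

# Targeted: only the unemployable 40% receive it.
targeted_cost = population * share_unemployable * payment_per_person

saving = universal_cost - targeted_cost
print(f"Universal cost: {universal_cost:,.0f}")
print(f"Targeted cost:  {targeted_cost:,.0f}")
print(f"Saving:         {saving:,.0f} ({saving / universal_cost:.0%})")
```

On these assumptions, universality costs two and a half times as much as targeting. In practice some of the universal payment would be clawed back from high earners through taxation, so the true gap would be narrower, but the headline difference is the point.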

The second characteristic of UBI is that it is Basic, and this is an even worse problem. “Basic” cannot mean anything other than extremely modest, and if we are to have a society in which a very large minority or a majority of people will be unemployable for the remainder of their lives, they are not going to be happy living on extremely modest incomes. Nor would that be a recipe for a stable, happy society.

Many proponents of UBI think that the payment will stop anyone from starving, and that we will supplement our universal basic incomes with activities we enjoy rather than the wage-slave drudgery faced by many people today. But the scenario envisaged here is one in which many or most humans simply cannot get paid for their work, because machines can do it cheaper, better and faster. The humans will still work: they will be painters, athletes, explorers, builders, virtual reality games consultants, and they will derive enormous satisfaction from it. But they won’t get paid for it.

If we are heading for a post-jobs society for many or most people, we will need a form of economy which provides everyone with a comfortable standard of living, and the opportunity to enjoy the many good things in life which do not come free – at least currently.


UBI isn’t all bad. After all, it is in part an attempt to save the unemployable from starving. And the debate about it helps draw attention to the problem that many people hope it will solve – namely, technological unemployment. So UBI isn’t the right answer, but it is at least an attempt to ask the right question.

Perhaps we can salvage the good part of UBI and improve the bad parts. Perhaps what we need instead of UBI is a PCI – a Progressive Comfortable Income. This would be paid to those who need it, rather than wasting resources on those who have no need. It would provide sufficient income to allow a rich and satisfying life.

Now all we have to do is figure out how to pay for it.

Future Bites 7 – The Star Trek Economy

The seventh in a series of un-forecasts* – little glimpses of what may lie ahead in the century of two singularities.

In 2050 Lauren turned sixty. She reflected that in a previous era she would now be thinking about retiring, but this wasn’t necessary for Lauren since she hadn’t had a job for decades. Neither had most of her family and friends.

She was a Millennial, and hers was the lucky generation. It hadn’t seemed like that at the outset. When Lauren was in her teens in what was called the noughties – the early years of the century – it seemed as though the Baby Boomers, the post-WW2 generation, had eaten all the pies. In many countries their education was subsidised, while Lauren’s generation had to pay college fees. The Boomers could afford to buy properties before they reached middle age, even in property hot-spots like London, New York and San Francisco. And they invented sex, for heaven’s sake. (Apparently it hadn’t existed before the Swinging Sixties.)

But later on, when humanity muddled through the Economic Singularity without too much turmoil, it turned out that the Boomers’ luck was eclipsed by that of the Millennials.

During the 2020s, industry after industry succumbed to automation by intelligent machines, and unemployment began to soar. Professional drivers were the first to go, but they were quickly followed by the staff in car insurance companies, call centres, fast food outlets and most other types of retail. At the same time, junior positions in the middle-class professions started thinning out so that there were no trainee jobs for accountants, lawyers, architects and journalists. By 2030 even economists were admitting that lasting widespread unemployability was a thing, although they did so using such obscure language that no-one could tell if they were apologising for having denied it for so long. (They weren’t.)

Economist Oh F..k

People survived thanks to increasingly generous welfare payments, which were raised by desperate governments just fast enough to ward off serious social unrest. The political left screamed for the introduction of a Universal Basic Income (UBI), but pragmatic politicians pointed out there was no point diverting much-needed funds towards the people still working, and also that no-one wanted to live forever on a “basic”, i.e. subsistence level of income.

Instead of UBI, a system of payments called HELP was introduced, which stood for Human Elective Leisure Payment. The name was chosen to avoid the stigma that living on welfare had often carried in the past, and also to acknowledge the fact that many of the people who received it were giving up their jobs voluntarily so that other people, less able than themselves to find meaning outside structured employment, could carry on as employees.


HELP staved off immediate disaster, but those pragmatic politicians were increasingly concerned about its affordability. The demands on the public purse were growing fast, while the tax base of most economies was shrinking. Smart machines were making products and services more efficiently, but the gains didn’t show up in increased profits to the companies that owned the machines. Instead they generated lower and lower prices for consumers. Fortunately, as it turned out, this enabled governments to reduce the level of HELP without squeezing the living standards of their citizens.

The race downhill between the incomes of governments and the costs they needed to cover for their citizens was nerve-wracking for a few years, but by the time Lauren hit middle age it was clear the outcome would be good. Most kinds of products had now been converted into services, so cars, houses, and even clothes were almost universally rented rather than bought: Lauren didn’t know anyone who owned a car. The cost of renting a car for a journey was so close to zero that the renting companies – auto manufacturers, AI giants, and often both – generally didn’t bother to collect the payment. Money was still in use, but was becoming less and less necessary.

As a result, the prices of most asset classes had crashed. Huge fortunes had been wiped out as property prices collapsed, especially in the hot-spot cities, but few people minded all that much, as they could get whatever they needed so easily. Art collections had mostly been donated to public galleries – which were of course free to visit – and most of the people who had previously had the good fortune to occupy the very nicest homes had surrendered their exclusive occupation.

Self-driving RV

The populations of most countries were highly mobile, gradually migrating from one interesting place to another as the fancy took them. This weekend Lauren was “renting” a self-driving mobile home to drive her – at night, while she was asleep – to Portugal, where she would spend a couple of weeks on a walking trip with some college friends. With so much of what was important to people now digital rather than material, no-one wanted piles of material belongings tying them to one location. And with the universal free internet providing so much bandwidth, distance was much less of a barrier to communication and friendship than it used to be.

The means of production, and the server farms which were home to the titanic banks of AI-generating computers, were still in private ownership, as no-one had yet found a way to ensure that state ownership would avoid sliding into inefficiency and corruption. But because it was clear that the owners were not profiteering, this was not seen as a problem. The reason why the owners didn’t exploit their position was partly that they didn’t see any need to, and partly that if they did, somebody else would compete away their margins with equally efficient smart machines. Most people viewed the owners as heroes rather than villains.

There were a few voices warning that the scenario of “the gods and the useless” was still a possibility, because technological innovation was still accelerating, and the owners might have privileged access to tech that would render them qualitatively different to everyone else, and they would effectively become a different species.

But like most people, Lauren thought this was unlikely to happen before the first artificial general intelligence was created, followed soon after by the first superintelligence – an entity smarter than the smartest human. Lauren was very fond of her nephew Alex, a generation younger than her. It was widely assumed that when the first superintelligence appeared, humanity would somehow merge with it, and that Alex’s generation would be the last generation to reach middle age as “natural” humans. It was therefore fitting that they were called generation Z.

* This un-forecast is not a prediction.  Predictions are almost always wrong, so we can be pretty confident that the future will not turn out exactly like this.  It is intended to make the abstract notion of technological unemployment more real, and to contribute to scenario planning.  Failing to plan is planning to fail: if you have a plan, you may not achieve it, but if you have no plan, you most certainly won’t.  In a complex environment, scenario development is a valuable part of the planning process. Thinking through how we would respond to a sufficient number of carefully thought-out scenarios could well help us to react more quickly when we see the beginnings of what we believe to be a dangerous trend.