Future Bites 8 – Reputation management and algocracy

The eighth in a series of un-forecasts* – little glimpses of what may lie ahead in the century of two singularities.

This article first appeared on the excellent blog run by Circus Street (here), a digital training provider for marketers.


In the old days, before artificial intelligence started to really work in the mid-2010s, the clients for reputation management services were rich and powerful: companies, government departments, environmental lobbying groups and other non-government organisations, and of course celebrities. The aims were simple: accentuate the good, minimise the bad. Sometimes the task was to squash a potentially damaging story that could grow into a scandal. Sometimes it was to promote a film, a book, or a policy initiative.

Practitioners needed privileged access to journalists in the mainstream media, to politicians and policy makers, and to the senior business people who shaped the critical buying decisions of large companies. They were formidable networkers with enviable contacts in the media and business elite. They usually had very blue-chip educational and early career backgrounds, offering patronage in the form of juicy stories and un-attributable briefings to compliant journalists.

Digital democratisation

The information revolution democratised reputation management along with everything else. It made the service available to a vastly wider range of people. If you were a serious candidate for a senior job in business, government, or the third sector, you needed to ensure that no skeletons came tumbling out of your closet at the wrong moment. Successful people needed to be seen as thought leaders and formidable networkers, and this did not happen by accident.

The aims of reputation management were the same as before, but just as the client base was now much wider, so too was the arena in which the service was provided. The mainstream media had lost its exclusive stranglehold on public attention and public opinion. Facebook and Twitter could often be more influential than a national newspaper. The blogosphere, YouTube, Pinterest, and Reddit were now crucial environments, along with many more, and the players were changing almost daily.


The practitioners were different too. No longer just Oxbridge-educated, Savile Row-tailored types, they included T-shirt-clad young men and women whose main skill was being up-to-date with the latest pecking order between online platforms. People with no deep understanding of public policy, but a knack for predicting which memes would go viral on YouTube. Technically adept people who knew how to disseminate an idea economically across hundreds of different digital platforms. Most of all, they included people who knew how to wrangle AI bots.

Reputation bots

Bots scoured the web for good news and bad. They reviewed vast hinterlands of information, looking for subtle seeds of potential scandal sown by jealous rivals. Their remit was the entire internet, an impossibly broad arena for un-augmented humans to cover. Every mention of a client’s name, industry sector, or professional area of interest was tracked and assessed. Reputations were quantified. Indices were established where the reputations of brands and personalities could be tracked – and even traded.
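The kind of quantification described here can be sketched as a toy mention index. Everything below is invented for illustration – the source weights, sentiment scores, and client names are hypothetical, and a real index would fold in far more signals:

```python
from collections import defaultdict

# Toy reputation index: each tracked mention carries a source weight and a
# sentiment score in [-1, 1]. All names and weights are invented.
SOURCE_WEIGHT = {"newspaper": 5.0, "blog": 2.0, "social": 1.0}

def reputation_index(mentions):
    """Aggregate weighted sentiment per client into a single score."""
    scores = defaultdict(float)
    for m in mentions:
        scores[m["client"]] += SOURCE_WEIGHT[m["source"]] * m["sentiment"]
    return dict(scores)

mentions = [
    {"client": "Acme", "source": "newspaper", "sentiment": 0.8},
    {"client": "Acme", "source": "social", "sentiment": -0.5},
    {"client": "Bolt", "source": "blog", "sentiment": 0.4},
]
# reputation_index(mentions) -> {"Acme": 3.5, "Bolt": 0.8}
```

A score like this is what makes an index tradeable: once reputation is a number, it can be tracked over time and compared across brands.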

All this meant lots of work for less traditionally qualified people. Clients who weren’t rich couldn’t afford the established consultants’ exorbitant fees, and they didn’t need them anyway. Less mainstream practitioners deploying clever bots could achieve impressive results for far less money. As the number of actual and potential clients for reputation management services grew exponentially, so did the number of practitioners. The same phenomenon was observed in many areas of professional services, and became known as the “iceberg effect”: the previous, restricted client base was revealed to be just the tip of a vast reservoir of previously unknown and inaccessible demand.


But pretty soon, the bots started to learn from the judgement of practitioners and clients, and needed less and less input from humans to weave their magic. And as the bots became more adept, their services became more sophisticated. They practised offence as well as defence: placing stories about their clients’ competitors, and duelling with the bots employed by those rivals, twisting each other’s messages into racist, sexist or otherwise offensive versions – tactics that many of their operators were happy to run with and help refine.

Algocracy

Of course, as the bots became increasingly autonomous, the number of real humans doing the job started to shrink again. Clients started to in-source the service. Personal AIs – descendants of Siri and Alexa, evolved by Moore’s Law – offered the service. Users began relying on these AIs to the point where the machines had free access to censor their owners’ emails and other communications. People realised that the AIs’ judgement was better than their own, and surrendered willingly to this oversight. Social commentators railed against the phenomenon, clamouring that humans were diminishing themselves, and warning of the rise of a so-called “algocracy”.

Their warnings were ignored. AI works: how could any sane person choose to make stupid decisions when their AI could make smart ones instead?

* This un-forecast is not a prediction.  Predictions are almost always wrong, so we can be pretty confident that the future will not turn out exactly like this.  It is intended to make the abstract notion of technological unemployment more real, and to contribute to scenario planning.  Failing to plan is planning to fail: if you have a plan, you may not achieve it, but if you have no plan, you most certainly won’t.  In a complex environment, scenario development is a valuable part of the planning process. Thinking through how we would respond to a sufficient number of carefully thought-out scenarios could well help us to react more quickly when we see the beginnings of what we believe to be a dangerous trend.

Don’t just speed up the mess

Guest post by Matt Buskell, head of customer engagement at Rainbird

One day back in 1999, I was sitting in a TV studio with a client. We were being interviewed about something called the world wide web. The interviewer was asking if it would change the world. It seems silly to say that now, but it was all very new back then.

The interviewer asked, “Do you think this technology will transform your business?” The client was Peter Jones of Siemens, who was one of the most impressive transformation leaders I have ever met. He replied, “Yes, but we need to be careful that we don’t just speed up the mess”.

What Peter meant was that, applied without sufficient thought, technology doesn’t create a better process, just a faster one. It’s an observation I’ve thought about often over the years, and it is still as relevant now as it was back then.

We currently have clients in several different industries using robotic process automation (RPA). This is software (a “robot”) which captures and interprets existing applications to process a transaction; manipulating data, triggering responses, and communicating with other digital systems.

As Andrew Burgess, one of the most experienced consultants in this space, says, “RPA is already delivering huge benefits to businesses. By automating common processes, it reduces the cost of service whilst increasing accuracy and speed. But one of the biggest challenges for any automation project is to embed decision making.”

The next generation of RPA promises to automate even more, by applying Artificial Intelligence (AI). Using natural language processing, robots can read unstructured text and determine workflow; they can make auditable decisions and judgements.

But there is still a danger of simply speeding up the mess.

To check for this, we often ask clients, “Could you improve your efficiency by moving some of the decisions in your process flow further upstream?”

Consider this. A sales person aided by an AI co-pilot can provide the same value to a customer as a lawyer could. A call centre agent can become an underwriter, and a fitness instructor can become a physiotherapist. The co-pilot provides the human with knowledge and judgements far beyond their innate level of understanding.

Botsplaining

Taxi drivers use mapping software as their co-pilots today. If they did not have the software they would have to learn the maps themselves, and not everyone has the patience to spend three years or more “doing the knowledge” which makes London’s black cab drivers so impressive.

An AI cannot mimic all the knowledge and understanding of a lawyer, but that’s not what we are asking it to do. We are asking it to make very specific judgements in very specific situations – and to do that on a massive scale.

Take an insurance help-desk. A customer sends a message saying, “I just had an accident; the other driver’s name was XYZ and their insurance company was ABC”. The RPA reads the text, determines that it’s a claim, creates a case file and adds it into the company’s workflow.

We have potentially cut out a call and automated several steps that the call centre agent would have gone through. We have saved time. So far, so good.
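The intake step described above can be sketched in a few lines. This is a toy illustration, not Rainbird’s actual product: the keyword list, regular expressions and case-file fields are all invented for the example, and a production system would use natural language processing rather than pattern matching.

```python
import re
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CaseFile:
    claim_type: str
    third_party: Optional[str]
    insurer: Optional[str]
    workflow: List[str] = field(default_factory=list)

# Hypothetical trigger words for recognising a claim.
CLAIM_KEYWORDS = ("accident", "collision", "crash", "claim")

def triage_message(text: str) -> Optional[CaseFile]:
    """Toy FNOL intake: decide whether a message is a claim, and pull out
    the other driver's name and insurer if they are stated."""
    lowered = text.lower()
    if not any(word in lowered for word in CLAIM_KEYWORDS):
        return None  # not a claim: route to a human agent instead
    name = re.search(r"driver'?s name was (\w+)", text)
    insurer = re.search(r"insurance company was (\w+)", text)
    return CaseFile(
        claim_type="motor",
        third_party=name.group(1) if name else None,
        insurer=insurer.group(1) if insurer else None,
        workflow=["FNOL", "liability review", "recovery"],
    )

message = ("I just had an accident; the other driver's name was XYZ "
           "and their insurance company was ABC")
case = triage_message(message)
# case.third_party == "XYZ", case.insurer == "ABC"
```

Note that the case file still carries the full downstream workflow: the robot has automated the intake, but not yet changed the process.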

However, the case file still must follow a “First Notice of Loss” (FNOL) process and must still be reviewed by a liability expert to determine which party was at fault. It can take another 40 steps until the case reaches a recovery agent who tries to claim a settlement from the other insurer.

So the AI has been helpful but not yet transformative, because we are still following the same business process as before.

Now imagine you could take the knowledge of the liability expert and the recovery expert and graft it into the claims process system. The AI can make the same decisions that these experts would have made, but by connecting them we can change the process. Whereas liability would traditionally have been determined around step 12 in the process and recovery would have started around step 30, this can now all happen at step 1. There is a ripple effect on the number of follow-on steps, and the whole process becomes much faster.
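A minimal sketch of what “grafting in” the expert’s judgement might look like. The rules below are invented for illustration – a real liability model is far richer – but they show how a decision that used to sit at step 12 can be asked at step 1:

```python
def liability_share(claim: dict) -> float:
    """Toy liability rules of the kind a cognitive reasoning system might
    encode. Returns the fraction of fault assigned to the other driver.
    The rule set is hypothetical, invented for this example."""
    if claim.get("third_party_rear_ended_client"):
        return 1.0   # other driver at fault: full recovery from their insurer
    if claim.get("client_rear_ended_third_party"):
        return 0.0   # our client at fault: no recovery to pursue
    return 0.5       # ambiguous: default to shared fault

def route(claim: dict) -> str:
    """Route at step 1 using the grafted-in judgement, instead of waiting
    for a liability review a dozen steps downstream."""
    share = liability_share(claim)
    if share == 1.0:
        return "recovery"              # straight to the recovery agent
    if share == 0.0:
        return "settlement"            # pay out, skip the recovery attempt
    return "human liability review"    # refer the genuinely hard cases

# route({"third_party_rear_ended_client": True}) -> "recovery"
```

The clear-cut cases skip dozens of intermediate steps entirely; only the ambiguous ones still travel the old path to a human expert.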

Companies like Accredita are doing this for the FNOL process and making the technology available to insurers. Consultancies like Ernst and Young and PwC are evaluating the overall claims process and figuring out other ways that cognitive reasoning could change the business process. Large banks are assessing how cognitive reasoning and RPA can be used to automate and enhance the identification of credit card fraud.

Re-visiting our map analogy, RPA gets you to the right page on the map for your journey, while cognitive reasoning takes the map it has been given and calculates the route for you.

Peter has retired, but if he were still working he would surely agree that there is a fundamental shift in what can be achieved with business processes. Significant limitations have been removed and with that comes a massive opportunity to do much more than just “speed up the mess”.

The second question asked in the interview back in 1999 was “How do you figure out where the value is in this new technology?” Peter’s answer to that was “You won’t know until you get out there and try it”. That still holds true today.

Matthew Buskell has been helping companies be more innovative since 1996, when he graduated from Birmingham University with a degree in Cognitive Psychology and Software Engineering. He has helped clients navigate a series of technological waves: the Internet, Mobile, Big Data, and now Artificial Intelligence. He was on the SAP UK board until 2016 when he became head of customer engagement at Rainbird, a leading Cognitive Reasoning engine that is re-defining decision-making with AI in Financial Services and Insurance.

Andrew Burgess is a management consultant, author and speaker with over 25 years’ experience. He is considered an authority on innovative and disruptive technology, artificial intelligence, robotic process automation and impact sourcing. He is a former CTO who has run sourcing advisory firms and built automation practices.  He has been involved in many major change projects, including strategic development, IT transformation and outsourcing, in a wide range of industries across four continents. He is a member of the Advisory Boards of a number of ambitious disruptive companies.

What’s wrong with UBI – responses

Last week I posted an article called “What’s wrong with UBI?” It argued that two of the three component parts of UBI are unhelpful: its universality and its basic-ness.

The article was viewed 100,000 times on LinkedIn and provoked 430-odd comments. This is too many to respond to individually, so this follow-up article is the best I can offer by way of response. Sorry about that.

Fortunately, the responses cluster into five themes, which makes a collective response possible. They mostly said this:

You're an idiot
Expanding a little, they said this:

1. You’re an idiot because UBI is communism and we know that doesn’t work

2. You’re a callous sonofabitch because UBI is a sane and fair response to an unjust world. Oh, and you’re an idiot too

3. You’re an idiot because we know that automation has never caused lasting unemployment, so it won’t in the future

4. That was interesting, thank you

5. Assorted random bat-shit craziness

Clearly UBI provokes strong feelings, which I think is a good thing. For today’s economy, UBI is a pretty terrible prescription, and isn’t making political headway outside the fringes. But it does seem to many smart people (eg Elon Musk) to be a sensible option for tackling the economic singularity, which I explored in detail in my book, called, unsurprisingly, The Economic Singularity.

It is tempting to believe that since my article annoyed both ends of the political spectrum, I must be onto something. But of course that is false logic: traffic jams annoy pretty much everyone, which doesn’t mean they have any merit.

Anyway, here are some brief responses to the objections.

1. You’re an idiot because UBI is communism and we know that doesn’t work

Before becoming a full-time writer and speaker about AI, I spent 30 years in business. I firmly believe that capitalism (plus the scientific method) has made today the best time ever to be human. Previously, life was nasty, brutish and short. Now it isn’t, for most people. In other words, I am not a communist.

Communism is the public ownership of the means of production, distribution and exchange, and UBI does not require that. It does, however, require sharply increased taxation, and this can damage enterprise – unless goods and services can be made far cheaper. I wrote about that in a post called The Star Trek Economy (here).

2. You’re a callous sonofabitch because UBI is a sane and fair response to an unjust world. Oh, and you’re an idiot too

I recently heard a leading proponent of UBI respond to the question “How can we afford it?” with “How can we afford not to have it?” He seemed genuinely to think that was an adequate answer. Wow.

However, it is obvious that if technological automation renders half or more of the population unemployable, then we will need to find a way to give those unemployable people an income. In other words, we will have to de-couple income from jobs. I semi-seriously suggested an alternative to UBI called Progressive Comfortable Income, or PCI, because I see no sense in making payments to the initially large number of people who don’t need it because they are still working, and I don’t believe all the unemployed people will or should be content to live in penury: we want to live in comfort.

A lot of the respondents to my article argued that payments to the wealthy would be recovered in tax. But unless you’re going to set the marginal tax rate at 100% you will only recover part of the payment. You are also engaging in a pointless bureaucratic merry-go-round of payments.

3. You’re an idiot because we know that automation has never caused lasting unemployment, so it won’t in the future

The most pernicious response, I think, is the claim that automation cannot cause lasting unemployment – because it has not done so in the past. This is not just poor reasoning (past economic performance is no guarantee of future outcome); it is dangerous. It is also the view, as far as I can see, held by most mainstream economists today.  It is the Reverse Luddite Fallacy*.

In the past, automation has mostly been mechanisation – the replacement of muscle power. The mechanisation of farming reduced human employment in agriculture from 80% of the labour force in 1800 to around 1% today. Humans went on to do other jobs, but horses did not, as they had no cognitive skills to offer. The impact on the horse population was catastrophic.

I suspect that economists either refuse or just fail to take Moore’s Law into account. This doubling process (which is not dying, just changing) means that machines in 2027 will be 128 times smarter than today’s, and machines in 2037 will be 8,000 times smarter.

I’ll say that again. If Moore’s law continues, then machines in 2037 will be 8,000 times smarter than today’s.
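The arithmetic behind these figures is simple compounding. As a rough sketch (the 18-month doubling period is the classic Moore’s Law cadence, assumed here for illustration):

```python
def growth_factor(years: float, doubling_months: float = 18) -> float:
    """Compound improvement after `years` if capability doubles every
    `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

# With an 18-month doubling period:
#   growth_factor(10) -> about 100x
#   growth_factor(20) -> about 10,000x
# The rounder figures above, 128x (2**7 over ten years) and 8,000x
# (roughly 2**13 over twenty), imply a doubling period of 17 to 18.5 months.
```

Whatever the exact doubling period, the point survives: compounding over two decades yields a factor in the thousands, not a modest multiple.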

It’s very likely that these machines will be cheaper, better and faster at most of the tasks that humans could do for money.

Because the machines won’t be conscious (artificial general intelligence, or AGI, is probably further away than that), that still leaves the central role for humans of doing all the worthwhile things in life: namely, learning, exploring, socialising, playing, having fun.

That is surely the wonderful world we should be working towards, but there is no reason to think we will arrive there inevitably. Things could go wrong. It is no small task to work out what an economy looks like where income is de-coupled from jobs, and how to get there from here. Just waving a magic wand and saying “UBI will fix it” is not sufficient.

* Thanks to my friends at the Singularity Bros podcast for inventing this handy term.

What’s wrong with UBI?


Universal Basic Income (UBI) is a fashionable policy idea comprising three elements: it is universal, it is basic, and it is an income. Unfortunately, two of these elements are unhelpful, and to paraphrase Meat Loaf, one out of three ain’t good.

The giant sucking sound

The noted economist John Kay dealt the edifice of UBI a serious blow in May 2016 in an article (here, possibly behind a paywall) for the FT. He returned to his target a year later (here, no paywall) and pretty much demolished it. His argument is slightly technical, and it focuses on UBI as a policy for implementation today, so I won’t dwell on it. But if you are one of the many who think UBI is a great idea, it is well worth reading one or both articles to see how Kay demonstrates that “either the basic income is impossibly low, or the expenditure on it is impossibly high.”

To put it more bluntly than Kay does, if UBI was introduced at an adequate level in any one country (or group of countries) today, there would be a giant sucking sound, as many of the richer people in the jurisdiction would leave to avoid the punitive taxes that would pay for it.

UBI and technological unemployment

But what happens a few decades from now if a large minority – or a majority – of people are unemployable because smart machines have taken all the jobs that they could do? We don’t know for sure that this will happen, of course, but it is at least very plausible, so we would be crazy not to prepare for the eventuality. Kay explicitly ignores this question, but tech-savvy and thoughtful people like Elon Musk and Sam Altman think that UBI may be the answer.

Imagine a society where 40% of the population can no longer find paid employment because machines can do everything they could do for money cheaper, faster and better. Would the 60% who remained in work, including those in government, simply let them starve? I’m pretty sure they wouldn’t, even if only because 40% of a population being angry and desperate presents a serious security threat to the others.

Many people argue that UBI is the solution, and will be affordable because the machines will be so efficient that enormous wealth will be created in the economy which can support the burden of so many people who are not contributing. I describe elsewhere a “Generous Google” scenario in which a handful of tech firms are generating most of the world’s GDP, and in order to avoid social collapse they agree to share their vast wealth by funding a global UBI.

I suspect there are serious problems with the economics of this. Exceptional profits are usually competed away, and companies which manage to avoid that by establishing de facto monopolies sooner or later find themselves the subject of regulatory investigations. But putting that concern to one side, in the event of profound technological unemployment, should we ask the rich companies and individuals of the future to sponsor a UBI for the rest of us?

This is where Meat Loaf comes in. (Yay.)

Universality

The first of UBI’s three characteristics is its universality. It is paid to all citizens regardless of their economic circumstances. There are several reasons why its proponents want this. Experience shows that many benefits are only taken up by those they are intended for if everyone receives them. Means-tested benefits can have low uptake among their target recipients because they are too complicated to claim, or the beneficiaries feel uncomfortable about claiming them, or simply never find out about them. Child benefits in the UK are one well-known example. There is also the concern that UBI should not be stigmatised as a sign of failure in any sense.

But in the case of UBI, these considerations are surely outweighed by the massive inefficiency of universality. In our scenario of 40% unemployability, paying UBI to Rupert Murdoch, Bill Gates, and the millions of others who are still earning healthy incomes would be a terrible waste of resources.

Basic

The second characteristic of UBI is that it is Basic, and this is an even worse problem. “Basic” cannot mean anything other than extremely modest, and if we are to have a society in which a very large minority or a majority of people will be unemployable for the remainder of their lives, they are not going to be happy living on extremely modest incomes. Nor would that be a recipe for a stable, happy society.

Many proponents of UBI think that the payment will prevent everyone from starving, and we will supplement our universal basic incomes with activities which we enjoy rather than the wage slave drudgery faced by many people today. But the scenario envisaged here is one in which many or most humans simply cannot get paid for their work, because machines can do it cheaper, better and faster. The humans will still work: they will be painters, athletes, explorers, builders, virtual reality games consultants, and they will derive enormous satisfaction from it. But they won’t get paid for it.

If we are heading for a post-jobs society for many or most people, we will need a form of economy which provides everyone with a comfortable standard of living, and the opportunity to enjoy the many good things in life which do not come free – at least currently.

Income

UBI isn’t all bad. After all, it is in part an attempt to save the unemployable from starving. And the debate about it helps draw attention to the problem that many people hope it will solve – namely, technological unemployment. So UBI isn’t the right answer, but it is at least an attempt to ask the right question.

Perhaps we can salvage the good part of UBI and improve the bad parts. Perhaps what we need instead of UBI is a PCI – a Progressive Comfortable Income. This would be paid to those who need it, rather than wasting resources on those who have no need. It would provide sufficient income to allow a rich and satisfying life.

Now all we have to do is figure out how to pay for it.

Future Bites 7 – The Star Trek Economy

The seventh in a series of un-forecasts* – little glimpses of what may lie ahead in the century of two singularities.

In 2050 Lauren turned sixty. She reflected that in a previous era she would now be thinking about retiring, but this wasn’t necessary for Lauren since she hadn’t had a job for decades. Neither had most of her family and friends.

She was a Millennial, and hers was the lucky generation. It hadn’t seemed like that at the outset. When Lauren was in her teens in what was called the noughties – the early years of the century – it seemed as though the Baby Boomers, the post-WW2 generation, had eaten all the pies. In many countries their education was subsidised, while Lauren’s generation had to pay college fees. The Boomers could afford to buy properties before they reached middle age, even in property hot-spots like London, New York and San Francisco. And they invented sex, for heaven’s sake. (Apparently it hadn’t existed before the Swinging Sixties.)

But later on, when humanity muddled through the Economic Singularity without too much turmoil, it turned out that the Boomers’ luck was eclipsed by that of the Millennials.

During the 2020s, industry after industry succumbed to automation by intelligent machines, and unemployment began to soar. Professional drivers were the first to go, but they were quickly followed by the staff in car insurance companies, call centres, fast food outlets and most other types of retail. At the same time, junior positions in the middle-class professions started thinning out so that there were no trainee jobs for accountants, lawyers, architects and journalists. By 2030 even economists were admitting that lasting widespread unemployability was a thing, although they did so using such obscure language that no-one could tell if they were apologising for having denied it for so long. (They weren’t.)


People survived thanks to increasingly generous welfare payments, which were raised by desperate governments just fast enough to ward off serious social unrest. The political left screamed for the introduction of a Universal Basic Income (UBI), but pragmatic politicians pointed out there was no point diverting much-needed funds towards the people still working, and also that no-one wanted to live forever on a “basic”, i.e. subsistence level of income.

Instead of UBI, a system of payments called HELP was introduced, which stood for Human Elective Leisure Payment. The name was chosen to avoid the stigma that living on welfare had often aroused in the past, and also to acknowledge the fact that many of the people who received it were giving up their jobs voluntarily so that other people, less able than themselves to find meaning outside structured employment, could carry on as employees.


HELP staved off immediate disaster, but those pragmatic politicians were increasingly concerned about its affordability. The demands on the public purse were growing fast, while the tax base of most economies was shrinking. Smart machines were making products and services more efficiently, but the gains didn’t show up in increased profits to the companies that owned the machines. Instead they generated lower and lower prices for consumers. Fortunately, as it turned out, this enabled governments to reduce the level of HELP without squeezing the living standards of their citizens.

The race downhill between the incomes of governments and the costs they needed to cover for their citizens was nerve-wracking for a few years, but by the time Lauren hit middle age it was clear the outcome would be good. Most kinds of products had now been converted into services, so cars, houses, and even clothes were almost universally rented rather than bought: Lauren didn’t know anyone who owned a car. The cost of renting a car for a journey was so close to zero that the renting companies – auto manufacturers or AI giants and often both – generally didn’t bother to collect the payment. Money was still in use, but was becoming less and less necessary.

As a result, the prices of most asset classes had crashed. Huge fortunes had been wiped out as property prices collapsed, especially in the hot-spot cities, but few people minded all that much as they could get whatever they needed so easily. Art collections had mostly been donated to public galleries – which were of course free to visit – and most of the people who had previously had the good fortune to occupy the very nicest homes had surrendered their exclusive occupation.


The populations of most countries were highly mobile, gradually migrating from one interesting place to another as the fancy took them. This weekend Lauren was “renting” a self-driving mobile home to drive her – at night, while she was asleep – to Portugal, where she would spend a couple of weeks on a walking trip with some college friends. With so much of what was important to people now being digital rather than material, no-one was bothered by the impracticality of having piles of material belongings tying them to one location. And with the universal free internet providing so much bandwidth, distance was much less of a barrier to communication and friendship than it used to be.

The means of production, and the server farms which were home to the titanic banks of AI-generating computers, were still in private ownership, as no-one had yet found a way to ensure that state ownership would avoid sliding into inefficiency and corruption. But because it was clear that the owners were not profiteering, this was not seen as a problem. The reason why the owners didn’t exploit their position was partly that they didn’t see any need to, and partly that if they did, somebody else would compete away their margins with equally efficient smart machines. Most people viewed the owners as heroes rather than villains.

There were a few voices warning that the scenario of “the gods and the useless” was still a possibility, because technological innovation was still accelerating, and the owners might have privileged access to tech that would render them qualitatively different to everyone else, and they would effectively become a different species.

But like most people, Lauren thought this was unlikely to happen before the first artificial general intelligence was created, followed soon after by the first superintelligence – an entity smarter than the smartest human. Lauren was very fond of her nephew Alex, a generation younger than her. It was widely assumed that when the first superintelligence appeared, humanity would somehow merge with it, and that Alex’s generation would be the last generation to reach middle age as “natural” humans. It was therefore fitting that they were called generation Z.

* This un-forecast is not a prediction.  Predictions are almost always wrong, so we can be pretty confident that the future will not turn out exactly like this.  It is intended to make the abstract notion of technological unemployment more real, and to contribute to scenario planning.  Failing to plan is planning to fail: if you have a plan, you may not achieve it, but if you have no plan, you most certainly won’t.  In a complex environment, scenario development is a valuable part of the planning process. Thinking through how we would respond to a sufficient number of carefully thought-out scenarios could well help us to react more quickly when we see the beginnings of what we believe to be a dangerous trend.

Future Bites 6 – Generous Google

The sixth in a series of un-forecasts* – little glimpses of what may lie ahead in the century of two singularities.

It is 2044. Around the world, machines have taken over many of the jobs that humans used to do. Professional drivers were the first big group to succumb to what is now commonly referred to as cognitive automation. Many of them struggled to cope, eking out unsatisfactory existences in the gig economy. Call centre staff and retail workers were next, and then, in the early 2030s, most of the professions started to see large reductions in employment levels too.

Unemployment levels in different countries now range from 40% to 75%, depending mainly on the technological sophistication of their economies. Countries with deep expertise in artificial intelligence tend to have relatively low unemployment, as do countries where wages are extremely low, since there the incentive to automate is weaker.

Some countries tried to resist the encroachment of the machines, but the effect on their economies was devastating, as they became woefully uncompetitive. All the countries that tried it have since experienced a change of government, sometimes violent. Argentina is an interesting exception: its people believe themselves and their nation to be unique, and they are willing to tolerate deep poverty as a by-product of their search for a different path. The collapse of the Russian government was especially violent, although fortunately there were no mishaps with its nuclear arsenal. What happened to President Putin is a mystery, although there are persistent rumours of a grisly end.


No economists were harmed in the making of this un-forecast, but their profession is now depleted, as they almost unanimously refused to accept that automation could cause lasting unemployment until well past the time that it was obvious to everyone else.

Overall, the situation is satisfactory because of a Great Accommodation that was reached between the AI giants and everyone else. Thanks to their mastery of advanced AI, eight American firms and half a dozen Chinese ones now generate almost 75% of the world’s GDP. President Michelle Obama chaired a series of seminal meetings in the pivotal years at the end of the 2030s in which these firms agreed to pay extremely high taxes in order to keep everyone else alive by means of so-called Citizen’s Income Payments, or CIP. The result is now known as the “generous Google” scenario. Only one tech giant CEO held out in opposition to the agreement, and as a result his firm was nationalised and transferred to a consortium of the others. He now tours the world in a very fast yacht, complaining bitterly to anyone who will listen to him.

Some countries introduced land taxes in an attempt to supplement their revenues, but since land was no longer contributing much to GDP, the main effect was to severely depress its value.


For a while it looked as if the world faced a serious problem because the AI giants were all based in China and the US. Fortunately the profound wave of isolationism, nationalism and protectionism that broke across the world in the late 2010s had by now reversed. President Obama was able to secure an agreement that the AI giants would be taxed at the point where they delivered their services rather than where they were domiciled.

The payments received by citizens are modest because the profits of the AI giants are constrained by the normal forces of competition. To the surprise of many, the payments are not called universal basic income (UBI), because they are not universal: people who still have jobs do not receive them. The payments are easy to sign up for in most countries, and policing is light.

Almost all unemployed citizens (and many employed ones) spend a good deal of time in virtual reality, which is now highly compelling. Government guidelines recommend that people spend at least four hours a day outside VR, but many people ignore this. There was talk in some countries about adjusting the CIP according to how much time the recipients spent outside VR on the grounds that this would improve health outcomes. But it turns out that many people get significant exercise while in VR, so that proposal has generally been dropped.


People are generally stuck economically, in the sense that they have no way to improve their financial situation. Drug use is widespread and is de-criminalised almost everywhere. The view is widespread that humanity’s goal should be to advance towards what is known as a Star Trek economy of radical abundance, where goods and services are virtually free. No-one knows how long this will take, and its arrival does not look imminent.

* This un-forecast is not a prediction.  Predictions are almost always wrong, so we can be pretty confident that the future will not turn out exactly like this.  It is intended to make the abstract notion of technological unemployment more real, and to contribute to scenario planning.  Failing to plan is planning to fail: if you have a plan, you may not achieve it, but if you have no plan, you most certainly won’t.  In a complex environment, scenario development is a valuable part of the planning process. Thinking through how we would respond to a sufficient number of carefully thought-out scenarios could well help us to react more quickly when we see the beginnings of what we believe to be a dangerous trend.

PwC asks: Will robots steal our jobs?


PwC has released a report (here) called “Will robots steal our jobs?” It’s not the first report on the subject and it certainly won’t be the last. But coming from the world’s second-largest professional services firm, it deserves attention. (Disclosure: PwC is an occasional client of mine.)

As you’d expect, the report offers a thorough and intelligent analysis. It also arrives at some fairly radical conclusions. I have some major disagreements with it, but it is a welcome contribution.

The key points

Significant job losses…

By the mid-2030s, PwC expects automation to cause the loss of around 38% of US jobs. This is lower than the figure in an influential 2013 report by two Oxford academics, Frey and Osborne, who put it at 47%, but higher than other recent reports by mainstream economists. The UK will experience a lower level of job loss, at 30%.

… offset by new jobs …

The report argues that most of this loss will be offset by (a) the creation of totally new jobs in digital technologies, and (b) the creation of more of the jobs that people already have in service industries, which PwC thinks are harder to automate. These latter jobs arise because productivity growth generates wealth and extra spending, which in turn creates jobs.

… but leaving a distribution problem

The radical part of the report is its conclusions about income distribution. It argues that the gains in the new economy won’t be equally shared, and that government policy will have to moderate this effect. It addresses the political left’s current favourite solution, universal basic income (UBI), but concludes that it is too expensive, it is wasteful because it pays people who don’t need it, and it reduces incentives to work.

The report does trot out the tired old pabulum that improving our education and training services can mitigate the problem, but it does so with little conviction. It concludes that “the wider question of how to deal with possible widening income gaps arising from increased automation seems unlikely to go away.” Amen to that.

Questioning the assumptions

So what to make of the report? Its annex shows that much of the authors’ time was spent revisiting the algorithms used by Frey and Osborne, and the calculations derived from their assumptions. The original Frey and Osborne work was famously a curious mix of precise calculation and finger-in-the-air guesswork. In particular, they made very subjective guesses about which tasks (and therefore which jobs) are susceptible to automation.

What can an AI do?

That susceptibility to automation depends heavily on the capabilities of the AI systems that will be available in the next two decades, and this gets surprisingly little attention in the PwC report. Given that the computing power available to the developers of AI systems will go through six doublings between now and 2035, those systems will be very different from the ones we are so impressed with today. (At this point some people will protest that Moore’s Law is dead or dying. This may be true in a narrow sense, but in its broader, underlying meaning, that computer power doubles every eighteen months or so, it has plenty of life in it yet.)
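The compounding behind that claim can be sketched in a few lines. The doubling cadences below are illustrative assumptions, not figures from the PwC report:

```python
# Sketch of how a fixed doubling cadence compounds compute over time.
# Both cadences below are illustrative assumptions, not report figures.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Multiplier on compute after `years` at the given doubling cadence."""
    return 2.0 ** (years / doubling_period_years)

# Six doublings over 18 years (a three-year cadence) give a 64-fold increase:
print(growth_factor(18, 3.0))   # 64.0
# The classic eighteen-month cadence would give twelve doublings, i.e. 4096x:
print(growth_factor(18, 1.5))   # 4096.0
```

Either way, the multiplier is large enough that extrapolating today’s AI capabilities linearly to 2035 is likely to be badly wrong, which is the point at issue here.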

Where’s the exponential?

The failure to take seriously the exponential improvement in AI undermines a great deal of thinking about its impact.

Today’s AI systems can already recognise images (including faces) better than you can. They are overtaking you in speech recognition, and they are catching up with you in natural language processing. By 2035 they will be enormously better than you at all these skills – and these are the very skills which you use at work every day. Of course we don’t know for sure yet, but it is entirely possible that by 2035, the great majority of jobs which people do today will be done cheaper, faster and better by AIs. This includes middle-class white collar jobs in the professions as well as repetitive jobs in warehouses and factories. AI is collar-blind. (I address this in more detail in chapter 3 of my book, The Economic Singularity.)

Legions of new jobs?

These exponentially improved AIs (and their peripherals, the robots) won’t just take our existing jobs: there won’t be much to stop them taking any new jobs we might devise as well. And there is no guarantee that we will devise legions of new jobs. The PwC report observes that “6% of all UK jobs in 2013 were of a kind that didn’t exist in 1990”. That represents significant innovation, but remember, this is the period in which the web was invented and adopted, which changed most aspects of life and work pretty dramatically. Earlier research by Gerald Huff found that 80% of all jobs done by Americans in 2014 existed in 1914.

UBI quibbles

I’m mostly in agreement with the PwC report when it comments on UBI, although the empirical evidence from the trials which have been conducted so far is that it doesn’t turn recipients into lazy couch potatoes. In general the challenge for the automated world is likely to be income, not meaning.

The PwC report omits to mention what is surely the biggest problem with universal basic income, which is that it is basic. We don’t want to spend our futures scraping by on subsistence incomes: we want to live in comfort while the robots do our jobs for us. I believe this is possible, and that it is what we should be aiming for.

Revolutionising education… yet again

Finally, it is wishful thinking to believe that we can give cognitive automation a swerve by revolutionising education. The institutions of education are notoriously hard to fix, and the timescale for fixing them with government policy is far too long. They will be revolutionised in time, thanks to AI, but that will happen in spite of top-down policy, not because of it.

Verdict:

As you’d expect, a thorough and intelligent analysis, with usefully radical conclusions. I disagree with some of the key conclusions, but this is certainly not a bland reassertion of the Reverse Luddite Fallacy. Hooray.
