Putting the AI in retail: How cognitive reasoning systems could make life easier for consumers

Another guest post by Matt Buskell, head of customer engagement at Rainbird.

There was a time when booking a holiday meant a single trip to the high street travel agent. Nowadays, the process of online research seems to take longer than the holiday itself. The difference, back then, was the travel agent – a human being who could look you up and down, talk to you about your preferences, and make a recommendation based on their judgement.

In the world of AI, we like to call this ‘inference’. Travel agents never asked the filter-style questions you find on travel websites today – location, price range, number of stars. Nothing. Instead, they inferred what we would like, basing their judgement on factors such as how we were dressed, what we liked to do, and how we spoke to them.

Where does the time go?

The average time spent researching holidays in 1997 was just 45 minutes. Now, it’s over eight hours.

The pattern is the same with other retail sectors that have moved online – from books, to groceries, music, clothing, and even cars. Hours are whittled away on websites like ASOS, TripAdvisor and Amazon. Imagine walking into a real-life store, asking the assistant for advice, and being handed a mountain of products and reviews to spend the next few hours scouring through. You’d probably just walk straight back out. So why do we settle for it online?

Convenience is one thing: for most of us, the ability to browse during the morning commute or on a lunch break is more appealing than a trip to the high street.

Many of us have also convinced ourselves that spending time looking at different online retailers and social media sites is the best way to ensure we have all the facts we could possibly need to get a ‘good deal’.

But when the choice of online stores and the availability of information was limited, it was a much simpler task. Now, we’re faced with an overload of choice, and the process of doing thorough research can feel laborious.

So why is it that targeted personal advice is lacking in online stores, whilst it is universally expected in the physical stores of our best retailers?

There are three main problems with online retailers today that limit their ability to provide the most suitable recommendations for individual customers:

1) You search for a product using narrow features, e.g. price, size, or category.

2) The system does not explain or offer a rationale for any recommendations it makes.

3) It’s a one-way interaction. You click, the computer displays.

Back to the future

The good news is that ‘conversational commerce’ and cognitive reasoning are going to bring the human element to online retailers. Ironically, the latest trends in AI are actually sending us back in time to the days of personalised shopping.

Imagine an online store in which an artificially intelligent assistant has been trained by your best human retail assistant, your favourite DJ, an experienced travel agent, or a stylist. You ask for advice or a product recommendation, and the system conducts a conversation with you, just like a real-life shop assistant.

Let’s take holiday booking as an example. The cognitive reasoning system, channeled via a chatbot – let’s call it Travel Bot – asks you a range of questions to gauge your priorities and their order of importance. During this interaction, you say that you like the beach, enjoy city breaks, hate long journeys, and indicate that price isn’t your deciding factor. Travel Bot recommends a five-star beach resort in Cannes. You baulk at the price and ask for an explanation, and Travel Bot explains that beach property is fifty percent more expensive than inland. You decide that the beach isn’t that important – to which the Travel Bot responds with an altered recommendation.

You end up with the perfect compromise – a reasonably priced hotel stay in the centre of Nice, ten minutes from the beach.

In this instance, Travel Bot mirrors a human travel agent. It makes inferences, explains its recommendations, and continuously alters its advice to cope with uncertain customer responses.
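
To make the mechanics concrete, here is a minimal, hypothetical sketch of that kind of loop – a toy rule-based recommender that scores options against stated preferences, explains its choice, and revises when a preference changes. The destinations, rules and weights are invented for illustration; this is not Rainbird’s engine.

```python
# Hypothetical sketch of a Travel Bot-style recommender: it weighs stated
# preferences, explains its choice, and revises when a preference changes.

DESTINATIONS = [
    {"name": "Five-star beach resort, Cannes", "beach": True,  "city": True,  "price": 5},
    {"name": "Central hotel, Nice",            "beach": False, "city": True,  "price": 3},
    {"name": "Inland villa, Provence",         "beach": False, "city": False, "price": 2},
]

def recommend(prefs):
    """Score each destination against the customer's stated preferences."""
    def score(d):
        s = 0
        if prefs.get("likes_beach"):
            s += 2 if d["beach"] else 0
        if prefs.get("likes_cities"):
            s += 2 if d["city"] else 0
        if prefs.get("price_sensitive"):
            s -= d["price"]          # cheaper options score higher
        return s
    return max(DESTINATIONS, key=score)

def explain(choice, prefs):
    """Offer a rationale for the recommendation, like a human agent would."""
    reasons = []
    if prefs.get("likes_beach") and choice["beach"]:
        reasons.append("beachfront (beach property is priced higher)")
    if prefs.get("likes_cities") and choice["city"]:
        reasons.append("in or near a city")
    return f"Recommended because it is {', and '.join(reasons) or 'the best overall fit'}."

prefs = {"likes_beach": True, "likes_cities": True, "price_sensitive": False}
first = recommend(prefs)
print(first["name"], "-", explain(first, prefs))

# The customer baulks at the price and drops the beach requirement.
prefs["likes_beach"], prefs["price_sensitive"] = False, True
revised = recommend(prefs)
print(revised["name"], "-", explain(revised, prefs))
```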

A computer cannot completely replace a good human adviser – yet. We are just too complex to model. But by bringing back the human element of customer service and combining it with the retail arena of the future, we can take a lot of the stress out of online shopping.

Rainbird is a leading Cognitive Reasoning engine that is re-defining decision-making with AI in Financial Services and Insurance.


Don’t get complacent about Amazon’s Robots: be optimistic instead!

In an article for the Technology Liberation Front, Adam Thierer of George Mason University becomes the latest academic to reassure us that AI and robots won’t steal our jobs.[i] His article relies on three observations: First, Amazon is keen to automate its warehouses, but it is still hiring more humans. Second, ATMs didn’t destroy the jobs of human tellers. Third, automation has not caused widespread lasting unemployment in the past.


Unfortunately, the first of these claims is true but irrelevant, the second is almost certainly false, and the third is both irrelevant and false.

Amazon has automated much of what humans previously did in warehouses, which has undoubtedly reduced the number of humans per dollar of value added, but the penetration of retail by e-commerce is rising very fast, and Amazon is taking share from other retailers, so it is not surprising that it is still hiring. Amazon may never get to the legendary “dark” warehouse staffed only by a human and a dog (where the dog’s job is to keep the human away from the expensive-looking machines), but it will keep pushing as far in that direction as it can. One of the major hurdles is in picking, and that looks like falling fairly soon – in years, not decades.[ii]


ATMs did destroy bank tellers’ jobs, but some of the peak years of their introduction to the US market coincided with a piece of financial deregulation, the Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994, which removed many of the restrictions on opening bank branches across state lines. Most of the growth in the US branch network occurred after this Act was passed, not before it. Teller numbers did not rise in the same way in other countries during the period. In the UK, for instance, retail bank employment just about held steady at around 350,000 between 1997 and 2013, despite significant growth in the country’s population, its wealth, and its demand for sophisticated financial services.[iii]

As for the third observation, automation certainly has caused widespread lasting unemployment in the past – of horses. Almost all the automation we have seen so far has been mechanisation, the replacement of human and animal muscle power by steam and then electric power. In 1900 around 25 million horses were working on American farms; now there are virtually none. The lives of humans, by contrast, were only temporarily disrupted – albeit severely and painfully – by this mechanisation. The grandchildren of US farm labourers from 1900 now work in cities in offices, shops and factories.

The question is whether it is different this time round, because the new wave of automation – which has hardly begun as yet – is cognitive automation, not mechanisation. My book “The Economic Singularity” presents a carefully-argued case that it is.

Of course no-one knows for certain what will happen in the next few decades. Mr Thierer and others may be right, and the people (like me) that he excoriates as “automation alarmists” and “techno-pessimists” who “suffer from a lack of imagination” may be wrong. Time will tell.

But if we are right, and society fails to prepare for the massively disruptive impact of technological unemployment, the outcome could be grave. If we are wrong, and some modest effort is spent on analysing a problem that never occurs, almost nothing would be lost. The equation is not too hard to figure out.

Optimism

Finally, I reject the claim that the people who take the prospect of technological unemployment seriously are necessarily pessimistic. It is optimistic, not pessimistic, to believe that our most plausible and most positive route through the economic singularity is to work out how to build or evolve the post-scarcity Star Trek economy, in which the goods and services that we all need for a flourishing life are virtually free. We should aim for a world in which machines do all the boring stuff and humans get on with the important things in life, like playing, exploring, learning, socialising, discovering, and having fun. I refuse to believe that being an Amazon warehouse worker or an actuary is the pinnacle of human fulfillment.

Surely it is those who do think that, and who insist that we all have to stay in jobs forever, who are the true pessimists.

In the future, education may be vacational, not vocational

This post is co-written with Julia Begbie, who develops cutting-edge online courses as a director of a design college in London.

Five classrooms

Some people (including us) think that within a generation or two, many or most people will be unemployable because machines will perform every task that we can do for money better, faster and cheaper than we can.

Other people think that humans will always remain in paid employment because we will have skills to offer which machines never will. These people usually go on to argue that humans will need further education and training to remain in work – and lots of it: we will have to re-train many times over the course of a normal career as the machines keep taking over some of the tasks which comprise our jobs. “Turning truckers into coders” could be a slogan for these people, despite its apparent implausibility.

There are several problems with this policy prescription. First, we do not know what skills to train for. One school of thought says that we will work ever more closely with computers, and uses the metaphor of centaurs, the half-man, half-horse creatures from Greek mythology. This school argues that we should focus education and training on STEM subjects (science, technology, engineering and maths) and downgrade the resources allocated to the humanities and the social sciences. But a rival school of thought argues that the abilities which only humans can offer are our creativity and our empathy, and therefore the opposite approach should be adopted.

Second, the churn in the job market is accelerating, and within a few years, the education process will simply be too slow. It takes years to train a lawyer, or a coder, and if people are to stay ahead of the constantly-improving machines in the job market, we are likely to have to undergo numerous periods of re-training. How long will it be before each period of re-training takes longer than the career it equips us for? And is that sustainable?

Third, reforming education systems is notoriously difficult. Over the years, educational reform has been proposed as the solution to many social and economic problems, and it rarely gets very far. Education has evolved over the last 100 years, and teachers are more professional and better trained than they used to be. But as the pictures above illustrate, most classrooms around the world today look much the same as they did 100 years ago, with serried ranks of children listening to mini-lectures from teachers. The fundamental educational processes and norms developed to build up the labour force required by the industrial revolution have survived numerous attempts to reform them, partly because reforming a vast social enterprise which looks after our children is hard, and partly because the educational establishment, like any establishment, tends to resist change.

It therefore seems unlikely that educational reform will be of much assistance in tackling the wave of technological unemployment which may be heading our way.

And oddly, this may not be a problem. If, as we believe, many or most people will be unemployable within a generation or so, the kind of education we will benefit from most is one which will equip us to benefit from a life of leisure: education that is vacational rather than vocational. This means a broad combination of sciences, humanities and social sciences, which will teach us both how the world works (the business of science), and also how we work as humans – from the inside (the business of novelists, artists and philosophers). This is pretty much what our current educational systems attempt to do, and although they come in for a lot of criticism (some of it justified), by and large they don’t do a bad job of it in most places in most countries.

Although educational systems probably won’t be reformed by government diktat in order to help us stay in jobs, they will be reformed in due course anyway, because new technologies and approaches are becoming available which will make education more personalised, more effective and more enjoyable. Some of this will be enabled by artificial intelligence.


New-ish techniques like flipped learning, distance learning, and competency-based learning have been around for years. They have demonstrated their effectiveness in trials, and they have been adopted by some of the more forward-thinking institutions, but they have been slow to replace the older approaches more generally. More recently, in 2013, massive open online courses (MOOCs) were heralded as the death-knell for traditional tertiary education, but they have since gone quiet, because the support technologies they required (such as automated marking) were not ready for prime time.

MOOCs will return, and the revolution which they and other new approaches promised will happen. We will have AI education assistants which know exactly which lessons and skills we have mastered, and which ones we need to acquire next. These assistants will understand which approach to learning suits us best, which times of day we are most receptive, and which times we are best left to relax or rest. Education will be less regimented, more flexible, and much more closely tailored to our individual preferences and needs. Above all, it will be more fun.


The main contribution of education to technological unemployment will probably be to make it enjoyable rather than to prevent it.

Future Bites 8 – Reputation management and algocracy

The eighth in a series of un-forecasts* – little glimpses of what may lie ahead in the century of two singularities.

This article first appeared on the excellent blog run by Circus Street (here), a digital training provider for marketers.


In the old days, before artificial intelligence started to really work in the mid-2010s, the clients for reputation management services were rich and powerful: companies, government departments, environmental lobbying groups and other non-government organisations, and of course celebrities. The aims were simple: accentuate the good, minimise the bad. Sometimes the task was to squash a potentially damaging story that could grow into a scandal. Sometimes it was to promote a film, a book, or a policy initiative.

Practitioners needed privileged access to journalists in the mainstream media, to politicians and policy makers, and to the senior business people who shaped the critical buying decisions of large companies. They were formidable networkers with enviable contacts in the media and business elite. They usually had very blue-chip educational and early career backgrounds, and offered patronage in the form of juicy stories and un-attributable briefings to compliant journalists.

Digital democratisation

The information revolution democratised reputation management along with everything else. It made the service available to a vastly wider range of people. If you were a serious candidate for a senior job in business, government, or the third sector, you needed to ensure that no skeletons came tumbling out of your closet at the wrong moment. Successful people needed to be seen as thought leaders and formidable networkers, and this did not happen by accident.

The aims of reputation management were the same as before, but just as the client base was now much wider, so too was the arena in which the service was provided. The mainstream media had lost its exclusive stranglehold on public attention and public opinion. Facebook and Twitter could often be more influential than a national newspaper. The blogosphere, YouTube, Pinterest, and Reddit were now crucial environments, along with many more, and the players were changing almost daily.


The practitioners were different too. No longer just Oxbridge-educated, Savile Row-tailored types, they included T-shirt-clad young men and women whose main skill was being up-to-date with the latest pecking order between online platforms. People with no deep understanding of public policy, but a knack for predicting which memes would go viral on YouTube. Technically adept people who knew how to disseminate an idea economically across hundreds of different digital platforms. Most of all, they included people who knew how to wrangle AI bots.

Reputation bots

Bots scoured the web for good news and bad. They reviewed vast hinterlands of information, looking for subtle seeds of potential scandal sown by jealous rivals. Their remit was the entire internet, an impossibly broad arena for un-augmented humans to cover. Every mention of a client’s name, industry sector, or professional area of interest was tracked and assessed. Reputations were quantified. Indices were established where the reputations of brands and personalities could be tracked – and even traded.

All this meant lots of work for less traditionally qualified people. Clients who weren’t rich couldn’t afford the established consultants’ exorbitant fees, and they didn’t need them anyway. Less mainstream practitioners deploying clever bots could achieve impressive results for far less money. As the number of actual and potential clients for reputation management services grew exponentially, so did the number of practitioners. The same phenomenon was observed in many areas of professional services, and became known as the “iceberg effect”: the old, restricted client base turned out to be just the tip of a previously hidden and inaccessible mass of demand.


But pretty soon, the bots started to learn from the judgement of practitioners and clients, and needed less and less input from humans to weave their magic. And as the bots became more adept, their services became more sophisticated. They practised offence as well as defence: placing stories about their clients’ competitors, and duelling with the bots employed by those rivals, twisting each other’s messages into racist, sexist or otherwise offensive versions – tactics that many of their operators were happy to run with and help refine.

Algocracy

Of course, as the bots became increasingly autonomous, the number of real humans doing the job started to shrink again. Clients started to in-source the service. Personal AIs – descendants of Siri and Alexa, evolved by Moore’s Law – offered the service. Users began relying on these AIs to the point where the machines had free access to censor their owners’ emails and other communications. People realised that the AIs’ judgement was better than their own, and surrendered willingly to this oversight. Social commentators railed against the phenomenon, clamouring that humans were diminishing themselves, and warning of the rise of a so-called “algocracy”.

Their warnings were ignored. AI works: how could any sane person choose to make stupid decisions when their AI could make smart ones instead?

* This un-forecast is not a prediction.  Predictions are almost always wrong, so we can be pretty confident that the future will not turn out exactly like this.  It is intended to make the abstract notion of technological unemployment more real, and to contribute to scenario planning.  Failing to plan is planning to fail: if you have a plan, you may not achieve it, but if you have no plan, you most certainly won’t.  In a complex environment, scenario development is a valuable part of the planning process. Thinking through how we would respond to a sufficient number of carefully thought-out scenarios could well help us to react more quickly when we see the beginnings of what we believe to be a dangerous trend.

Don’t just speed up the mess

Guest post by Matt Buskell, head of customer engagement at Rainbird

One day back in 1999, I was sitting in a TV studio with a client. We were being interviewed about something called the world wide web. The interviewer was asking if it would change the world. It seems silly to say that now, but it was all very new back then.

The interviewer asked, “do you think this technology will transform your business?” The client was Peter Jones of Siemens, who was one of the most impressive transformation leaders I have ever met. He replied “Yes, but we need to be careful that we don’t just speed up the mess”.

What Peter meant was that applied without sufficient thought, technology doesn’t create a better process, just a faster one. It’s an observation I’ve thought about often over the years, and it is still as relevant now as it was back then.

We currently have clients in several different industries using robotic process automation (RPA). This is software (a “robot”) which captures and interprets existing applications in order to process transactions: manipulating data, triggering responses, and communicating with other digital systems.

As Andrew Burgess, one of the most experienced consultants in this space, says, “RPA is already delivering huge benefits to businesses. By automating common processes, it reduces the cost of service whilst increasing accuracy and speed. But one of the biggest challenges for any automation project is to embed decision making.”

The next generation of RPA promises to automate even more, by applying Artificial Intelligence (AI). Using natural language processing, robots can read unstructured text and determine workflow; they can make auditable decisions and judgements.

But there is still a danger of simply speeding up the mess.

To check for this, we often ask clients, “Could you improve your efficiency by moving some of the decisions in your process flow further upstream?”

Consider this. A sales person aided by an AI co-pilot can provide the same value to a customer as a lawyer could. A call centre agent can become an underwriter, and a fitness instructor can become a physiotherapist. The co-pilot provides the human with knowledge and judgements far beyond their innate level of understanding.


Taxi drivers use mapping software as their co-pilots today. If they did not have the software they would have to learn the maps themselves, and not everyone has the patience to spend three years or more “doing the Knowledge” which makes London’s black cab drivers so impressive.

An AI cannot mimic all the knowledge and understanding of a lawyer, but that’s not what we are asking it to do. We are asking it to make very specific judgements in very specific situations – and to do that on a massive scale.

Take an insurance help-desk. A customer sends a message saying, “I just had an accident; the other driver’s name was XYZ and their insurance company was ABC”. The RPA reads the text, determines that it’s a claim, creates a case file and adds it into the company’s workflow.

We have potentially cut out a call and automated several steps that the call centre agent would have gone through. We have saved time. So far, so good.

However, the case file still must follow a “First Notice of Loss” (FNOL) process and must still be reviewed by a liability expert to determine which party was at fault. It can take another 40 steps until the case reaches a recovery agent who tries to claim a settlement from the other insurer.

So the AI has been helpful but not yet transformative, because we are still following the same business process as before.

Now imagine you could take the knowledge of the liability expert and the recovery expert and graft it into the claims process system. The AI can make the same decisions that these experts would have made, but by connecting them we can change the process. Whereas liability would traditionally have been determined around step 12 in the process and recovery would have started around step 30, this can now all happen at step 1. There is a ripple effect on the number of follow-on steps, and the whole process becomes much faster.
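
As a purely illustrative sketch (the step counts, field names and rules below are invented, not any insurer’s actual FNOL process), here is what grafting those judgements into step 1 might look like:

```python
# Illustrative only: a simplified FNOL flow in which liability and recovery
# decisions are applied at step 1 instead of waiting for later manual steps.

def triage(message):
    """RPA-style step: read the free text and open a case file."""
    return {
        "type": "claim" if "accident" in message.lower() else "query",
        "other_party_named": "insurance company" in message.lower(),
        "text": message,
    }

def assess_liability(case):
    """Hypothetical stand-in for the liability expert's judgement."""
    # In the traditional flow this happens around step 12; here it runs at step 1.
    return "other_party" if case["other_party_named"] else "undetermined"

def start_recovery(case, liability):
    """Hypothetical stand-in for the recovery agent's first action."""
    # Traditionally around step 30; here it is triggered immediately.
    if liability == "other_party":
        return "recovery claim opened against the other insurer"
    return "held for manual review"

message = ("I just had an accident; the other driver's name was XYZ "
           "and their insurance company was ABC")
case = triage(message)
liability = assess_liability(case)
print(case["type"], "|", liability, "|", start_recovery(case, liability))
```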

Companies like Accredita are doing this for the FNOL process and making the technology available to insurers. Consultancies like Ernst & Young and PwC are evaluating the overall claims process and figuring out other ways that cognitive reasoning could change the business process. Large banks are assessing how cognitive reasoning and RPA can be used to automate and enhance the identification of credit card fraud.

Revisiting our map analogy, RPA gets you to the right page on the map for your journey, while cognitive reasoning takes the map it has been given and calculates the route for you.

Peter has retired, but if he were still working he would surely agree that there is a fundamental shift in what can be achieved with business processes. Significant limitations have been removed, and with that comes a massive opportunity to do much more than just “speed up the mess”.

The second question asked in the interview back in 1999 was “How do you figure out where the value is in this new technology?”. Peter’s answer to that was “You won’t know until you get out there and try it”. That still holds true today.

Matthew Buskell has been helping companies be more innovative since 1996, when he graduated from Birmingham University with a degree in Cognitive Psychology and Software Engineering. He has helped clients navigate a series of technological waves: the Internet, Mobile, Big Data, and now Artificial Intelligence. He was on the SAP UK board until 2016 when he became head of customer engagement at Rainbird, a leading Cognitive Reasoning engine that is re-defining decision-making with AI in Financial Services and Insurance.

Andrew Burgess is a management consultant, author and speaker with over 25 years’ experience. He is considered an authority on innovative and disruptive technology, artificial intelligence, robotic process automation and impact sourcing. He is a former CTO who has run sourcing advisory firms and built automation practices.  He has been involved in many major change projects, including strategic development, IT transformation and outsourcing, in a wide range of industries across four continents. He is a member of the Advisory Boards of a number of ambitious disruptive companies.

What’s wrong with UBI – responses

Last week I posted an article called “What’s wrong with UBI?” It argued that two of the three component parts of UBI are unhelpful: its universality and its basic-ness.

The article was viewed 100,000 times on LinkedIn and provoked 430-odd comments. This is too many to respond to individually, so this follow-up article is the best I can offer by way of response. Sorry about that.

Fortunately, the responses cluster into five themes, which makes a collective response possible. They mostly said this:

“You’re an idiot.”
Expanding a little, they said this:

1. You’re an idiot because UBI is communism and we know that doesn’t work

2. You’re a callous sonofabitch because UBI is a sane and fair response to an unjust world. Oh, and you’re an idiot too

3. You’re an idiot because we know that automation has never caused lasting unemployment, so it won’t in the future

4. That was interesting, thank you

5. Assorted random bat-shit craziness

Clearly UBI provokes strong feelings, which I think is a good thing. For today’s economy, UBI is a pretty terrible prescription, and isn’t making political headway outside the fringes. But it does seem to many smart people (eg Elon Musk) to be a sensible option for tackling the economic singularity, which I explored in detail in my book, called, unsurprisingly, The Economic Singularity.

It is tempting to believe that since my article annoyed both ends of the political spectrum, I must be onto something. But of course that is false logic: traffic jams annoy pretty much everyone, which doesn’t mean they have any merit.

Anyway, here are some brief responses to the objections.

1. You’re an idiot because UBI is communism and we know that doesn’t work

Before becoming a full-time writer and speaker about AI, I spent 30 years in business. I firmly believe that capitalism (plus the scientific method) has made today the best time ever to be human. Previously, life was nasty, brutish and short. Now it isn’t, for most people. In other words, I am not a communist.

Communism is the public ownership of the means of production, distribution and exchange, and UBI does not require that. It does, however, require sharply increased taxation, and this can damage enterprise – unless goods and services can be made far cheaper. I wrote about that in a post called The Star Trek Economy (here).

2. You’re a callous sonofabitch because UBI is a sane and fair response to an unjust world. Oh, and you’re an idiot too

I recently heard a leading proponent of UBI respond to the question “How can we afford it?” with “How can we afford not to have it?” He seemed genuinely to think that was an adequate answer. Wow.

However, it is obvious that if technological automation renders half or more of the population unemployable, then we will need to find a way to give those unemployable people an income. In other words, we will have to de-couple income from jobs. I have semi-seriously suggested an alternative to UBI called Progressive Comfortable Income, or PCI: I see no sense in making payments to the initially large number of people who don’t need them because they are still working, and I don’t believe the unemployed will or should be content to live in penury – we want to live in comfort.

A lot of the respondents to my article argued that payments to the wealthy would be recovered in tax. But unless you’re going to set the marginal tax rate at 100% you will only recover part of the payment. You are also engaging in a pointless bureaucratic merry-go-round of payments.
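
A back-of-the-envelope illustration of that point, with arbitrary numbers:

```python
# Arbitrary illustrative figures: a universal payment partially clawed back
# through income tax still leaves most of it with people who didn't need it.
ubi_per_person = 12_000       # hypothetical annual payment
marginal_tax_rate = 0.45      # hypothetical top marginal rate
recovered = ubi_per_person * marginal_tax_rate
still_paid_out = ubi_per_person - recovered
print(f"Recovered in tax: {recovered:,.0f}; still paid out net: {still_paid_out:,.0f}")
```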

3. You’re an idiot because we know that automation has never caused lasting unemployment, so it won’t in the future

The most pernicious response, I think, is the claim that automation cannot cause lasting unemployment – because it has not done so in the past. This is not just poor reasoning (past economic performance is no guarantee of future outcomes); it is dangerous. It is also, as far as I can see, the view held by most mainstream economists today. It is the Reverse Luddite Fallacy*.

In the past, automation has mostly been mechanisation – the replacement of muscle power. The mechanisation of farming reduced human employment in agriculture from 80% of the labour force in 1800 to around 1% today. Humans went on to do other jobs, but horses did not, as they had no cognitive skills to offer. The impact on the horse population was catastrophic.

I suspect that economists either refuse or just fail to take Moore’s Law into account. This doubling process (which is not dying, just changing) means that machines in 2027 will be 128 times smarter than today’s, and machines in 2037 will be 8,000 times smarter.

I’ll say that again. If Moore’s Law continues, then machines in 2037 will be 8,000 times smarter than today’s.
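
For what it’s worth, those multiples imply a doubling period of roughly 17–18 months, with “smarter” used loosely as a proxy for raw capability. A quick back-of-the-envelope check:

```python
# Rough arithmetic behind the 128x and 8,000x figures: they correspond to about
# 7 and 13 doublings respectively, i.e. a doubling period of around 17-18 months
# (an assumption; "smarter" is used loosely here as a proxy for raw capability).
import math

for target, years in ((128, 10), (8_000, 20)):
    doublings = math.log2(target)
    months_per_doubling = years * 12 / doublings
    print(f"{target}x in {years} years -> {doublings:.1f} doublings, "
          f"one every {months_per_doubling:.1f} months")
```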

It’s very likely that these machines will be cheaper, better and faster at most of the tasks that humans could do for money.

Because the machines won’t be conscious (artificial general intelligence, or AGI, is probably further away than that), humans will still have the central role of doing all the worthwhile things in life: namely, learning, exploring, socialising, playing, and having fun.

That is surely the wonderful world we should be working towards, but there is no reason to think we will arrive there inevitably. Things could go wrong. It is no small task to work out what an economy looks like where income is de-coupled from jobs, and how to get there from here. Just waving a magic wand and saying “UBI will fix it” is not sufficient.

* Thanks to my friends at the Singularity Bros podcast for inventing this handy term.

What’s wrong with UBI?

One out of three ain’t good

Universal Basic Income (UBI) is a fashionable policy idea comprising three elements: it is universal, it is basic, and it is an income. Unfortunately, two of these elements are unhelpful, and to paraphrase Meat Loaf, one out of three ain’t good.

The giant sucking sound

The noted economist John Kay dealt the edifice of UBI a serious blow in May 2016 in an article (here, possibly behind a paywall) for the FT. He returned to his target a year later (here, no paywall) and pretty much demolished it. His argument is slightly technical, and it focuses on UBI as a policy for implementation today, so I won’t dwell on it. But if you are one of the many who think UBI is a great idea, it is well worth reading one or both articles to see how Kay demonstrates that “either the basic income is impossibly low, or the expenditure on it is impossibly high.”

To put it more bluntly than Kay does, if UBI was introduced at an adequate level in any one country (or group of countries) today, there would be a giant sucking sound, as many of the richer people in the jurisdiction would leave to avoid the punitive taxes that would pay for it.

UBI and technological unemployment

But what happens a few decades from now if a large minority – or a majority – of people are unemployable because smart machines have taken all the jobs that they could do? We don’t know for sure that this will happen, of course, but it is at least very plausible, so we would be crazy not to prepare for the eventuality. Kay explicitly ignores this question, but tech-savvy and thoughtful people like Elon Musk and Sam Altman think that UBI may be the answer.

Imagine a society where 40% of the population can no longer find paid employment because machines can do everything they could do for money cheaper, faster and better. Would the 60% who remained in work, including those in government, simply let them starve? I’m pretty sure they wouldn’t, even if only because 40% of a population being angry and desperate presents a serious security threat to the others.

Many people argue that UBI is the solution, and will be affordable because the machines will be so efficient that enormous wealth will be created in the economy which can support the burden of so many people who are not contributing. I describe elsewhere a “Generous Google” scenario in which a handful of tech firms are generating most of the world’s GDP, and in order to avoid social collapse they agree to share their vast wealth by funding a global UBI.

I suspect there are serious problems with the economics of this. Exceptional profits are usually competed away, and companies which manage to avoid that by establishing de facto monopolies sooner or later find themselves the subject of regulatory investigations. But putting that concern to one side, in the event of profound technological unemployment, should we ask the rich companies and individuals of the future to sponsor a UBI for the rest of us?

This is where Meat Loaf comes in. (Yay.)

Universality

The first of UBI’s three characteristics is its universality. It is paid to all citizens regardless of their economic circumstances. There are several reasons why its proponents want this. Experience shows that many benefits are only taken up by those they are intended for if everyone receives them. Means-tested benefits can have low uptake among their target recipients because they are too complicated to claim, or the beneficiaries feel uncomfortable about claiming them, or simply never find out about them. Child benefit in the UK is one well-known example. There is also the concern that UBI should not be stigmatised as a sign of failure in any sense.

But in the case of UBI, these considerations are surely outweighed by the massive inefficiency of universality. In our scenario of 40% unemployability, paying UBI to Rupert Murdoch, Bill Gates, and the millions of others who are still earning healthy incomes would be a terrible waste of resources.

Basic

The second characteristic of UBI is that it is Basic, and this is an even worse problem. “Basic” cannot mean anything other than extremely modest, and if we are to have a society in which a very large minority or a majority of people will be unemployable for the remainder of their lives, they are not going to be happy living on extremely modest incomes. Nor would that be a recipe for a stable, happy society.

Many proponents of UBI think that the payment will prevent everyone from starving, and we will supplement our universal basic incomes with activities which we enjoy rather than the wage slave drudgery faced by many people today. But the scenario envisaged here is one in which many or most humans simply cannot get paid for their work, because machines can do it cheaper, better and faster. The humans will still work: they will be painters, athletes, explorers, builders, virtual reality games consultants, and they will derive enormous satisfaction from it. But they won’t get paid for it.

If we are heading for a post-jobs society for many or most people, we will need a form of economy which provides everyone with a comfortable standard of living, and the opportunity to enjoy the many good things in life which do not come free – at least currently.

Income

UBI isn’t all bad. After all, it is in part an attempt to save the unemployable from starving. And the debate about it helps draw attention to the problem that many people hope it will solve – namely, technological unemployment. So UBI isn’t the right answer, but it is at least an attempt to ask the right question.

Perhaps we can salvage the good part of UBI and improve the bad parts. Perhaps what we need instead of UBI is a PCI – a Progressive Comfortable Income. This would be paid to those who need it, rather than wasting resources on those who have no need. It would provide sufficient income to allow a rich and satisfying life.
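
To make the “progressive” part concrete, here is one purely hypothetical way a PCI taper could work. The comfortable-income target, withdrawal rate and example incomes are invented for illustration, not a costed proposal:

```python
# Purely hypothetical sketch of a Progressive Comfortable Income: the payment
# tapers away as other income rises, so nothing is paid to those who don't need it.
def pci_payment(other_income, comfortable_income=30_000, taper_rate=0.5):
    """Top up income towards a 'comfortable' level, withdrawn at taper_rate."""
    shortfall = comfortable_income - other_income * taper_rate
    return max(0, min(comfortable_income, shortfall))

for income in (0, 20_000, 40_000, 80_000):
    print(f"other income {income:>6,} -> PCI {pci_payment(income):>9,.0f}")
```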

Now all we have to do is figure out how to pay for it.