Future Bites 1

The first in what may or may not become a series of un-forecasts*, little glimpses of what may lie ahead in the century of two singularities.

[Image: Otto self-driving truck]

It’s 2025 and self-driving trucks, buses, taxis and delivery vans are the norm.  Almost all of America’s five million professional drivers are out of work.  They used to earn white-collar salaries for their blue-collar work, which means it is now virtually impossible for them to earn similar incomes elsewhere.  A small minority have re-trained and become coders, or virtual reality architects or something, but most are on welfare, and/or earning much smaller incomes in the gig economy.  And they are angry.

The federal government, fearful of social unrest (or at least disastrous electoral results), steps in to replace 80% of their income, guaranteed for two years.  This calms the drivers’ anger, but other people on welfare are protesting, demanding to know why their benefit levels are so much lower.

Meanwhile, many thousands of the country’s 1.3 million lawyers are being laid off.  And their salaries were much higher than the drivers’.  The government knows it cannot fund 80% replacement of those incomes, but the lawyers are a vociferous bunch.

And there are doctors, journalists, warehouse managers, grocery store workers…

* This un-forecast is not a prediction.  Predictions are almost always wrong, so we can be sure that the future will not turn out exactly like this.  It is intended to make the abstract notion of technological unemployment more real, and to contribute to scenario planning.  Failing to plan is planning to fail: if you have a plan, you may not achieve it, but if you have no plan, you most certainly won’t.  In a complex environment, scenario development is a valuable part of the planning process.  Thinking through how we would respond to a sufficient number of carefully thought-out scenarios could well help us to react more quickly when we see the beginnings of what we believe to be a dangerous trend.


7 thoughts on “Future Bites 1”

  1. Chace’s insightful and 20-minutes-into-the-future fiction here (his “un-forecast” rather than attempted futurist prediction) is an excellent strategy. That’s what we sci-fi writers do: plot out possible scenarios, as beacons, warnings of things to come. Recall the closing lines of the 1956 classic movie “Forbidden Planet”: “Alta, about a million years from now the human race will have crawled up to where the Krell stood in their great moment of triumph and tragedy. And your father’s name will shine again like a beacon in the galaxy.” Thanks Calum Chace for a great blog. (Liked your novel “Pandora’s Brain.”) K D Kragen, killware.com.

  2. I’d like to add my personal thanks, Chace, for the part you are playing in helping to bring greater awareness of the dangers inherent in ill-considered technological ‘advancement’ (if it is even considered at all by governments and people in general, who largely have no say in the direction their society is taking them, particularly with advanced AI).
    Just to prove I’m not purely doom and gloom – your un-prediction makes no mention of the possible vast reduction in living costs made possible by companies not having to pay staff to drive trucks, taxis etc., or of the efficiency gains from fewer crashes, less downtime and generally better use of vehicles and fuel with self-driving vehicles; and then there is the reduction in legal and medical fees, since we will no longer have to pay professionals to study for ten-plus years to help us out – it will simply be uploaded to a machine which can dispense the information practically for free.

    Sure, I hate the probability that humans will be rendered unnecessary for the future of this planet sometime this century, but we should look at both sides of the human replacement equation – no?

  3. Me again 🙂 my last comment woke a thought in my mind re: us being a computer simulation in a hyper-intelligence’s computer – a feat we might soon be capable of imitating by creating our own computer-simulated worlds/people.

    Presumably the point of creating a simulated, populated world is to gain answers/solutions to various scenarios that the superior intelligence is experiencing, or soon may experience, so that any negative outcomes may be minimised in its future?

    The problem, though, is that once the created population becomes intelligent enough to realise the possibility of what is happening, all resulting solutions/answers must be compromised, causing the simulation to be of little practical use. To prevent this, the ‘programmer’ may insert code designed to limit the creation’s awareness of the possibility, or otherwise compensate, but again this will adversely affect the likely end outcome and the useful information gained by the higher intelligence – so maybe the simulation hypothesis is not a particularly profitable one, for either the creator or the creation?

    Just a thought.

    love.

      • Thanks Calum… your two posts on SH were some of the first readings that started me thinking about the topic back in October. I left you a suggestion for a possible future novel in my last comment on SH part 11.
        Love your blog and books.

        love.

  4. Episode 074: Calum Chace and The Existential Fire-Tornado – Singularity Bros
