Why We Don’t Trust Machines (And Why We Should)

The world of work is evolving, whether we like it or not, and we need to rethink our trust equation with our machines.

Kevin Frankowski, a specialist in helping companies innovate, and I are exploring the evolving world of work. Our most recent consulting assignments bring into sharp relief the work of today and the likely work of the future. They raise troubling questions about the readiness of industry to address these changes, and about how people’s attitudes towards automation are a key barrier to progress.

How Digital Gets Done

Recall earlier articles in this series where I sketched out a framework for thinking about digital innovation; a toy code sketch of the full pipeline follows the list below.

  • The internet of things (sensors) generates torrents of data
  • Artificial intelligence (and its companions, such as analytics and machine learning) ingests, interprets, and analyzes that data
  • Automation (robots) applies that data in the real world to get work done
  • Cloud computing houses much of the data, intelligence, and robotic controls
  • Blockchain provides immutable evidence that the sensors are reliable, the AI engines are correct, and the robots haven’t gone rogue
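
For the code-minded, here is a toy Python sketch of how these layers fit together. It is purely illustrative: every name is hypothetical, the “AI” is a trivial stand-in for a trained model, and the ledger is a simple hash chain rather than a real blockchain.

    import hashlib
    import json
    import random
    import statistics

    def read_sensor() -> float:
        """Internet of things: a field sensor reporting, say, pump temperature."""
        return 60.0 + random.gauss(0, 5)

    def analyze(readings: list) -> str:
        """Artificial intelligence (a trivial analytic stand-in): interpret
        the data and decide what the equipment should do."""
        return "throttle_down" if statistics.mean(readings) > 65.0 else "run_normal"

    def actuate(command: str) -> None:
        """Automation: a robot or controller applies the decision in the field."""
        print(f"actuator <- {command}")

    def append_to_ledger(ledger: list, record: dict) -> None:
        """Blockchain-style audit trail: each entry is chained to the previous
        one by its hash, so tampering with history is detectable."""
        prev_hash = ledger[-1]["hash"] if ledger else "genesis"
        payload = json.dumps(record, sort_keys=True) + prev_hash
        ledger.append({**record, "prev": prev_hash,
                       "hash": hashlib.sha256(payload.encode()).hexdigest()})

    # "Cloud": shared storage tying the layers together.
    ledger = []
    readings = [read_sensor() for _ in range(10)]
    command = analyze(readings)
    actuate(command)
    append_to_ledger(ledger, {"readings": readings, "command": command})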

There’s lots of evidence that this framework is solidly grounded in reality.

  • Google engineers capture a 40% saving in cooling energy in their data centers by letting an AI engine directly manage energy use
  • BP turns over its oil and gas field operations in the US to AI because the AI engines are superior at running oil fields once they learn the ropes
  • Ambyint gets a big purchase order from Husky to roll its edge AI devices across an oil field which will pay back in just 2 months
  • Suncor and CNRL use robots on their heavy hauling operations
  • Porsche and other automakers put their latest cars on blockchain to create new customer services and lay the foundation for autonomous cars
  • NAL uses bot technology to change work practices, with returns ten times those of drilling another oil well

As with digital generally, none of these technologies is particularly new. The ideas behind artificial intelligence date back to well before the dawn of modern computing. The first digital and programmable robot, the Unimate, dates from 1954. Online bot technology originated in the online gaming industry a decade ago. Blockchain is based on three technologies (distributed computing, peer-to-peer networking, and encryption), all of which have roots in the earliest days of computing.

So why do we feel that somehow things are different today? To use an analogy from my book, ‘Bits, Bytes, and Barrels’, cloud computing is having the same impact on industry that fossil fuels had on airplanes. In the decades before the famous Wright brothers’ flight, engineers had broadly figured out the intricacies of flight. The missing ingredient was a fuel of sufficient energy density to drive a motor powerful enough, yet light enough, to pull an aircraft through the air to liftoff. Once that fuel was mastered, airplane technology advanced rapidly. Cloud computing similarly provides enough data and compute density to allow many other digital technologies, such as sensors, artificial intelligence, automation, and blockchain, to “take off.”

Work is evolving

These automation building blocks that are driving changes to work display no prejudice. They’re impacting heavy and light industries in equal measure. It’s only the pace that differs. Automation is now plainly visible in agriculture where farmers use drones to supervise planting, irrigation and pest control. It’s in shipping where unmanned cargo vessels are in field trials. It’s in commuter transportation where autonomous buses shuttle people in both Vancouver and Calgary. It’s in material science, medicine, media, even retail. 

The energy sector, in particular, is undergoing a second shift as the world looks to embrace more energy from electrons and less from molecules, particularly in transportation. This shift has huge implications for work. The molecules that yield energy to us when we transform them (such as combusting gasoline in a car engine) require a healthy level of human supervision to keep the mechanical apparatus that contains them from gumming up, overheating, and wearing out. Electrical gear has far fewer moving parts overall, and a dramatic reduction in the number of parts that need to be cleaned, cooled, and replaced.

The result is that we will need far fewer people to maintain this equipment: fewer people on site to keep an eye on things, to use tools to measure how equipment is behaving, to lubricate parts, and to detect aberrant sounds or smells. Electrical equipment lends itself to robotics, remote control, and remote monitoring far more readily than molecule-based machines do.

Pilot-less Planes

Oddly, remote control does not always play out in practice. For example, we have all the technology we need for aircraft to fly without on-board pilots. They’re called drones, and Best Buy sells them for a few hundred dollars. There are now flying drone competitions (check out this video from the New Yorker) that demonstrate what can be done without a pilot on board. The same is true for submersibles and helicopters. But today, no airline offers flights without a human pilot at the controls at all times, even though it is technically unnecessary. Pilotless aircraft demonstrate that successful technology must meet the MASA principle: most advanced but socially acceptable.

And aircraft are not the only case where automation could remove the need for on-site human operators but has not. Many mission-critical assets, especially in heavy industry, still feature a healthy, but expensive and technically unnecessary, on-site human management team. And the social pressure that might block adoption (as with Google Glass) is absent at an obscure plant far from any human settlement.

This raises the question of why management teams, particularly in the energy sector, facing critical competitiveness challenges, resist embracing greater levels of automation when automation is so clearly coming. Is it a matter of trust?

Trust and the machine

Why don’t we trust machines to perform mission-critical tasks (even tasks that machines are clearly better at)?

There may be several reasons.

Building trust within an interactive and mutually dependent relationship with other humans is something we are simply wired to do.

We don’t have that same relationship with a machine. Machines lack the human needs and motivations that drive decisions, and so we don’t feel part of their decision-making process.

For the most part, we care about being reliable to those who depend upon us. Machines don’t care how they are perceived by humans; they just follow their instructions. And the developers of those machines have little incentive to be transparent about how their complex and proprietary algorithms behave, as I’ve described in an earlier post.

We like that humans are adaptable and can problem-solve in the moment when things go wrong (as with our pilots). Machines are getting much better at this, thanks to fleet learning, but we still aren’t as confident in their abilities.
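
Fleet learning deserves a quick illustration. The idea is that each machine learns from its own experience, and the fleet pools those lessons so every machine benefits. A deliberately minimal sketch, with made-up numbers and hypothetical names:

    import statistics

    # Hypothetical local "models": each truck estimates one parameter
    # (say, the vibration level that precedes a failure) from its own data.
    local_estimates = {"truck_01": 4.2, "truck_02": 3.9, "truck_03": 4.5}

    # Fleet learning in miniature: pool what each machine learned, then
    # push the improved shared estimate back out to every machine.
    fleet_estimate = statistics.mean(local_estimates.values())
    updated = {truck: fleet_estimate for truck in local_estimates}
    print(updated)  # every truck now benefits from the whole fleet's experience

Real fleet learning aggregates far richer model parameters than a single number, but the shape is the same: local experience in, shared improvement out.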

We like that humans can exercise judgement, such as when to break a rule or create something novel. Machines are bound to their instructions, and in fact, we’re uncomfortable with the idea that a machine could break rules.

We like that humans can be held accountable for their actions, and that they respond to rewards and punishments. We have lengthy (albeit imperfect) experience levying consequences against other humans, stretching back through the whole history of humankind; it is central to the experience of being human. We are more comfortable having a human “at the switch” (or at least in the loop, keeping watch over the machines and exercising supervisory control).

Machines can be programmed, borrowing from gamification theory, to seek rewards and avoid punishments, but only in a mathematical sense, not an emotional one. Shun a human for a transgression and their behaviour might change; try shunning your digital home assistant, and Alexa will just ignore you back.
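
To see what “only in a mathematical sense” means, consider the standard Q-learning update from reinforcement learning. The reward is literally just a number in an equation (the values below are arbitrary, for illustration only):

    # A reward to a machine is pure arithmetic: the classic Q-learning update,
    # Q(s, a) <- Q(s, a) + alpha * (reward + gamma * best_next - Q(s, a))
    alpha, gamma = 0.1, 0.9          # learning rate and discount factor
    q = {("state", "action"): 0.0}   # the machine's current value estimate

    def update(state, action, reward, best_next):
        old = q[(state, action)]
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

    update("state", "action", reward=1.0, best_next=0.0)
    print(q)  # {('state', 'action'): 0.1} -- the "reward" changed a number, not a feeling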

Building trust in machines

If we are to attain the highest performance out of our relationships with smart machines, we need trust.

To gain that trust, especially in an industrial setting, some very specific concerns need to be overcome.

These can be classified as:

  • Lack of performance: Is the machine even going to work? Will it do what is needed?
  • Lack of long-term track record: Okay, so the machine may have performed acceptably in the demo or pilot test, but how do I know it will work over the long term, under every conceivable condition (e.g., cold and hot weather extremes; loss of power or communication; unexpected operating conditions; emergencies and exceptions)? Can I rely on it when everything hits the proverbial fan and creative problem-solving and adaptability are needed?
  • Lack of integrity: Machines lack morals (unless we program those in), so how do I know that the machine will be truthful with me? After all, we now know that existing algorithms (or their designers) aren’t always truthful and forthright about data – just look at the numerous scandals associated with the social media giants. Or, more concerning in an industrial setting where safety really matters, how will the machine handle conflicting decision-making criteria, such as the famous Trolley Problem that self-driving cars must deal with?
  • Lack of clarity: Why does the machine make the decisions it does? Under what conditions will those decisions change? How can I influence those decisions in real-time as my conditions or drivers change (e.g., oil prices change; or someone extends an offer to buy my company)?
  • Lack of accountability: So if the machine makes a poor decision, how do I hold it accountable? Can I give it consequences that matter? Also, in complex industrial settings, many small decisions by many participants add up to larger outcomes; how do I separate out the contribution of the machine’s decisions from the decisions that others made? (The sketch after this list shows one way to record decisions for exactly this purpose.)
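
One way to make the clarity and accountability gaps concrete is to insist that every machine decision be logged with its inputs, its rationale, and a version stamp, so the machine’s contribution can be untangled afterwards. A minimal sketch of our own devising (the fields, names, and values are hypothetical):

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        """Hypothetical audit entry: enough context to answer, after the fact,
        'why did the machine do that, and which version of it decided?'"""
        model_version: str
        inputs: dict
        decision: str
        rationale: str  # e.g., the top factors behind the decision
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    record = DecisionRecord(
        model_version="pump-optimizer 2.3.1",
        inputs={"intake_pressure_kpa": 310, "oil_price_usd": 62.0},
        decision="reduce_stroke_rate",
        rationale="predicted rod wear exceeded threshold at current rate",
    )
    print(record)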

Critical questions, all of them—how do we solve them?

In an upcoming article, we will lay out a framework for how these can be addressed. Stay tuned for Part 2.

Kevin Frankowski and I co-wrote this article.

Mobile: ☎️ +1(587)830-6900
email: 📧 geoff@geoffreycann.com
website: 🖥 geoffreycann.com
LinkedIn: 🔵 www.linkedin.com/in/digitalstrategyoilgas

2 Comments
  • Hugh Evans
    Posted at 09:58h, 25 January

    Thank you for writing this article, Geoffrey and Kevin. Your ‘lack of’ framing implies that machines are self-originating and self-determining entities; that is probably coming with super-intelligences. It would be good to hear your perspectives on ‘who’ is behind machine learning, as in the real people: their drives, motivations, pay-offs, agendas.

    The second point I would like to share is what seems to be missing from ‘why we don’t trust machines’: what is making businesses resistant? Let’s say all the commercially available automation were implemented tomorrow, with all the efficiencies that go with it. What would be the social impact? Is there simply a case that we have to knowingly slow this transition; otherwise, the Armageddon that some commentators talk about will be upon us?

    • Kevin Frankowski
      Posted at 14:35h, 03 February

      Hugh – excellent point about the ‘who’ (the people) behind the machine. Geoff and I discuss this in Part 2, which is coming soon.

      With regards to your second point, no one truly knows all the impacts that come from significant changes like this. There is always a transition period, as we (society and business) gain increasing amounts of experience with the new, learn its benefits and drawbacks, and figure out the best way forward.

      Our key point is that these changes are going to occur whether the energy sector participates or not. Rather than be left behind and squeezed out, we believe it is better for the sector to understand the challenges and develop a framework that can address them. After all, slow walking is not the same as walking smartly…

      Thanks for joining the conversation!

      Kevin
