Learning to Trust Machines (A How-To Guide)

We don’t trust machines…and we will need to change that.

Kevin Frankowski, a specialist in helping companies with innovation, and I are exploring the evolving world of work. We see the early signs of how work is changing as we increase the level of automation in the workplace. We question the readiness of industry to address these changes and the attitudes of workers towards rising levels of automation.

Why We Don’t Trust Machines

In the first article of this series, we pointed out that, despite the promise of digital innovation, the energy sector has been slow to ride its wave of adoption, especially with regards to Artificial Intelligence (AI), automation and smart machines. There are many valid reasons for the industry’s reluctant response, which we summarized into the following five broad issues that need to be overcome for us to trust smart machines:

  • Lack of performance:  Is the machine even going to work?  Will it do what is needed?
  • Lack of long-term track record: Okay, so the machine may have performed fine in the demo or pilot test, but how do I know it will work over the long term, under every conceivable condition (e.g., cold and hot weather extremes; loss of power or communication; unexpected operating conditions; emergencies and exceptions)?  Can I rely on it when everything hits the proverbial fan and creative problem solving and adaptability are needed?
  • Lack of clarity:  Why does the machine make the decisions it does?  Under what conditions will those decisions change?  How can I influence those decisions in real-time as my conditions or drivers change (e.g., oil prices change; or someone extends an offer to buy my company)?
  • Lack of integrity:  Machines lack morals (unless we program those in), so how do I know that the machine will be truthful with me?  After all, we now know that existing algorithms (or their designers) aren’t always truthful and forthright about data – just look at the numerous scandals associated with the social media giants.  Or, more concerning in an industrial setting where safety really matters, how will the machine handle conflicting decision-making criteria, such as the famous Trolley Problem that self-driving cars must deal with?
  • Lack of accountability:  If the machine makes a poor decision, how do I hold it accountable?  Can I give it consequences that matter?  Also, in complex industrial settings, many small decisions by many participants add up to larger outcomes – how do I separate out the contribution of the machine’s decisions relative to decisions that other machines and people have made?

We need a way to puzzle through these problems, one that frees us from our natural tendency to cling to the status quo.

Trusting a Smart Machine

To achieve high performance in our relationships with smart machines, we need to be able to trust them, and that trust begins with five components:

  • Competence:  Do I believe that all the participants in the relationship have sufficient capabilities to fulfill their respective responsibilities?
  • Reliability:  Do I believe that all participants will fulfill those responsibilities reliably, on a consistent basis?
  • Transparency:  Do I believe that I am sufficiently aware of what the other participants in the relationship are working towards?  Would they make decisions that are compatible with the decisions that I will make? Can I influence their decisions?  Do they keep me in the loop?
  • Aligned Integrity:  Do I believe that the other participants are always being truthful with me, and adhering to our agreed-upon principles or norms of behaviour?  Are we operating from compatible sets of priorities?  Put another way, do we have shared values, morals and ethics?
  • Aligned Accountability and Motives:  Do I believe that the other participants want the same outcomes that I do?  Is there an alignment of interests among all of us? Are there meaningful motivators (e.g., bonuses) or deterrents (e.g., consequences) to help maintain and nurture this alignment?

The first thing to notice is that all of these involve not only facts (e.g., is the other party competent?), but also perception (e.g., do I believe that the other party is competent?).  It does no good if there is a mismatch between fact and perception.

If the other party is competent (or reliable, etc.), but I harbour some doubts, then there is no trust.  Similarly, if I believe that these attributes (competence, etc.) exist when they actually don’t, eventually the truth will come out, and the mismatch results in broken trust.

Not only must these five components be present, but we also need to make sure that all the other participants in the relationship are aware of them, and that everyone authentically knows they are present.  (Easier said than done.)

The second thing to note is that ALL five of these need to be present at ALL times, or trust is compromised.  Many of our interactions take a “grow in trust” approach, where trust is built over time, as evidence mounts that these components are in place.  When even one of these components goes missing, even temporarily, then things take a step backward and trust needs to be rebuilt.

Trust needs to be both earned AND maintained.  All of us have experienced that this takes work and commitment.

Fortunately, behind every machine there is a team of people (designers, programmers, operators, etc.), at least some of the time, and building and maintaining trust really needs to happen with them.  As users of smart machines, we need to make it very clear to the technology designers and vendors that we expect these five components to be addressed in a very concrete manner.  Call it the social contract for digital innovation.

Earning Trust in a Smart Machine

To earn and maintain trust in their smart machines, the designers and vendors of digital technology need to address the five components of trust very explicitly.

  • Competence:  This is the easiest aspect for vendors to prove — simply demonstrate that the products (smart machines; algorithms) achieve the promised performance under test conditions.
  • Reliability:  This one is harder to prove, because now performance assurances need to cover not just “easy” or “normal” operating conditions, but also all the various extremes that might be encountered (the “edge cases”).  Also, consistency of reliability can be very situation specific—my standards might be very different for a personal navigation and traffic app (80% reliability might be good enough) versus the navigation and flight control software on my favourite airline (it had better be greater than 99.99%!).  Perhaps the solution is to allow humans to intervene in the decision-making process when needed, so that humans, who are better at creative problem solving, can address the edge cases.  However, this doesn’t mean that humans need to occupy every single pilot seat – just look at how your local supermarket handles this at the self-check-out lanes—one attendant oversees 6 to 8 tills, only intervening when an issue comes up.  That represents a reduction of more than 80% in staffing per lane (see the quick arithmetic after this list).
  • Transparency:  As we’ve previously pointed out, AI developers need to get over themselves and start sharing how their software works, in ways that are understandable to the users.
  • Aligned Integrity:  Settling on aligned principles of behaviour can be challenging, especially for some of the edge cases.  However, if developers don’t significantly open up the conversation to customers on how they approach this, then strong trust will continue to elude these relationships.
  • Aligned Accountability and Motives:  Giving a meaningful consequence to a machine is difficult.  However, once there is agreement on the outcomes that the smart machine is working towards, plus alignment on decision-making principles and how this will be achieved, then accountability can flow through those agreements to the smart machine’s human supervisors.
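
As a quick check on the self-check-out numbers above (a back-of-the-envelope calculation based only on the ratios quoted in this article, not on any particular store’s staffing data): if one attendant supervises six tills that would otherwise each need their own cashier, staffing per till drops from 1 to 1/6, so the reduction is 1 − 1/6 ≈ 0.83, or roughly 83%; with eight tills per attendant it is 1 − 1/8 ≈ 0.88, or roughly 88%.  Either way, the saving in staffing per lane comfortably exceeds 80%.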

Making It Real

Here are a couple of ways the social contract might play out in the real world.

  • If my self-driving car gets a speeding ticket because I stipulated conditions that caused it to make poor decisions, I should pay the ticket.  However, if the software was written in a way that allowed it to speed of its own volition, then the software vendor should pay.
  • If my AI drilling management program achieves better than expected financial returns due to new upgrades from the vendor, does our contract discuss how these additional benefits will be shared?

This approach, which incorporates a human touch to smart machines, provides a symmetry of deterrence and incentives that not only encourages reliability and spurs innovation, but also fits with how humans are wired and motivated.

Conclusion

In our view, trust starts with the makers of smart machines. Here’s what they need to do.

  • Be clear and honest about the underlying principles on which your machine bases its decisions.  For example, what will the algorithm optimize for – operating efficiency and speed; return on financial investment; health and safety?
  • Be open and upfront about your commercial drivers—how does your freemium business model actually make ends meet?  Do you intend to mine the datasets for insights that you can use to monitor market performance or sell more equipment?  Then admit that upfront, and negotiate a fair exchange of value with your users.
  • Finally, create space in your machine design for human intervention, for those times when the machine owner feels the need to intervene.

In time it will be possible to trust machines – we just need to earn and maintain that trust in a way that aligns with how we humans are wired to trust.

Kevin Frankowski and I co-wrote this article.

Author’s note — The artwork on this post was updated after it was initially published.

Mobile: ☎️ +1(587)830-6900
email: 📧 geoff@geoffreycann.com
website: 🖥 geoffreycann.com
LinkedIn: 🔵 www.linkedin.com/in/digitalstrategyoilgas
