Visual Analytics Comes of Age

What if a sensor could not only record purely visual activity in the real world, but could also interpret and then take action based on what it sees? Could this capability be exploited in oil and gas?

Visual data grows up

I’m participating in a webinar with Osprey Informatics on visual analytics, a technology with big upside in the oil and gas sector.  

Machine interpretation of visual data is not new. My business card has a QR code printed on it that takes you directly to my website. Airlines rely on machine-readable luggage tags to route your belongings through the maze of conveyor belts that connect your baggage to its flight. Modern cash registers scan retail tags and figure out what you’re purchasing, its weight, its price, any applicable discounts, and so on.

These examples all rely on some kind of human-invented specialised symbology—bar codes, bag tags, and QR codes. That’s useful, but limiting in situations where there is no practical way to use these symbols. Think about the most recent iPhones from Apple, which use an image of your face as your password. What if sensors like these could be put to use for other things?

Next generation visual analytics

It should be no surprise that visual data capture and interpretation systems are evolving rapidly, as several digital technologies combine and recombine to create a seriously clever new category of analytics that works with visual data:

  • Low cost sensors (in this case, optical) that see in all light and weather conditions, including fog, rain, snow, dust, and darkness;
  • Cloud computing, that connects multiple such sensors to the cloud where data storage and compute services are effectively unlimited and practically free;
  • Machine learning, a technique where software learns, in this case how to interpret what it sees by being fed thousands of similar images (or in the case of facial ID, just your image); and,
  • Artificial intelligence, an extension of machine learning, where the computer takes an action based on its interpretation of the data from the sensor.

Voilà! We have a new kind of solution that can “watch” the real world and take independent “action” based on what it “sees”. Robot eyes.
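The learning step described above can be sketched in miniature. This toy nearest-centroid classifier is a stand-in for a real deep-learning model trained on thousands of images; all the data, labels, and features here are invented for illustration.

```python
# Toy illustration of the machine-learning step: the software "learns"
# what each label looks like from example feature vectors, then
# interprets a new observation by finding the closest learned average.

def train(examples):
    """examples: {label: [feature_vectors]} -> per-label mean vector."""
    centroids = {}
    for label, vectors in examples.items():
        n = len(vectors)
        centroids[label] = [sum(v[i] for v in vectors) / n
                            for i in range(len(vectors[0]))]
    return centroids

def interpret(centroids, vector):
    """Return the label whose learned centroid is closest (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], vector))

# Invented two-feature data: (size, speed) of objects seen at a gate.
examples = {
    "deer":    [(2.0, 8.0), (2.2, 7.5), (1.9, 8.3)],
    "vehicle": [(6.0, 3.0), (6.5, 2.8), (5.8, 3.2)],
}
model = train(examples)
print(interpret(model, (2.1, 7.9)))  # -> deer
```

A production system swaps the centroids for a neural network and the two-number features for camera frames, but the principle is the same: feed it labelled examples, and it learns to interpret new scenes on its own.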

Historically, the hardware for the eyes has been the tricky part, but it’s the software that creates the magic. In oil and gas, these systems can be “taught” to recognise virtually anything, from intruders at the gate, to contractors on the site, to wildlife at the fence. And sensors are not limited to data from the visible light spectrum. The software can detect plumes of invisible vapours (like an escaping gas, or steam jetting out from a pinhole in a pipe), and determine the composition of the vapour. 

I’d never thought about the world in these terms before – I’d only ever assumed that man-made symbols with sharp edges and lines, like the digits of a license plate, could be interpreted by computers. But a system that can take in any scene and interpret it correctly? That’s revolutionary.

Show me the money

Moreover, these new visual analytics systems, interconnected via the web, demonstrate many of the attributes of other exponential technologies. Multiple sensors connected to a single machine learning engine mean that the engine absorbs data from all the sensors, improving its decision making over time. Its learning capacity is superhuman in the range of situations it can interpret and the decisions it can correctly and reliably take.

Today, these systems can recognise and interpret what they detect with 90% accuracy, and they will only get better over time.

They will also fall in cost. In fact, they are already dramatically cheaper than paying full-time screen-watching operators in a control room. A visual analytics system operating in the visible light spectrum might cost $0.15-$0.25/hour to operate, compared to roughly $25/hour for a staffed monitoring team (say $60k per operator all in, one per shift, three shifts, grossed up 30% to provide 24/7 coverage).
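The back-of-envelope staffing math works out as follows, using the figures quoted above:

```python
# Back-of-envelope check of the staffed-monitoring cost quoted above.
salary_all_in = 60_000        # $ per operator per year, all-in
shifts_per_day = 3            # one operator per shift, around the clock
coverage_gross_up = 1.30      # +30% for weekends, vacation, sick cover

annual_cost = salary_all_in * shifts_per_day * coverage_gross_up
hours_per_year = 24 * 365

hourly_cost = annual_cost / hours_per_year
print(f"${annual_cost:,.0f}/year -> ${hourly_cost:.2f}/hour")
# -> $234,000/year -> $26.71/hour
```

Roughly $27/hour for human coverage against $0.15-$0.25/hour for the system: a two-orders-of-magnitude difference, before counting training, supervision, and turnover.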

Robotic eyes are more reliable than human eyes too. Operators need bathroom breaks, take vacations, require training and supervision, and are easily bored by screens that don’t change frequently.

Really compelling use cases

Where would such visual analytics find a home in oil and gas? I see the key benefits in improving safety and compliance at lower cost, improving operational effectiveness at lower cost, and increasing the productivity of the workforce.

Here are just a few examples:

Augment control room operations staff

Have you ever toured the control room of an oil processing plant? These on-site bunkers (although the newest ones are off site, usually in some downtown office tower where people actually want to work and live) have the usual bank of operators at the SCADA controls, and sometimes feature simple visual feeds from key positions at the facilities, such as gates, fuel tanks, loading facilities, and storage yards.

Typically there aren’t a lot of visual sensors—there’s not a lot of real estate in control rooms for many monitors. Monitors might rotate through several views, meaning something might be missed by the human handlers.

But visual analytic systems could monitor hundreds of sensor points at a time, and take action based on what is happening in the real world. Only unusual situations would need to be handed off to a human to take action.

Improve compliance cost and effectiveness

Oil and gas facilities need to prove that operations have effective compliance regimes in place, and that the facilities comply with regulations. Some compliance activities in some jurisdictions will require “eyes-on” inspections of assets and facilities to detect and report on operating state and condition. Some incidents will require demonstration that compliance monitoring was in effect and operational.

It’s easy to assume that regulations imply “person-on-site” to carry out compliance activities, but that’s not the case. Eyes-on-site no longer means person-on-site (an industry orthodoxy that can now change).

Visual analytics systems, with their GPS and date/time stamping, still photos, and video, will be treated with greater confidence than a time sheet that reportedly shows an operator drove the perimeter.

The monitoring of greenhouse gas (GHG) emissions (particularly methane) is a task that could be accomplished with robotic eyes that can “read” and record rogue emissions, once the analytic system knows what to look for. Visual systems will be handily better than humans at this task.

Improve safety outcomes

Imagine an analytic system that could detect the presence of a field worker and recognise that they are not wearing high visibility clothing, are smoking in a hazardous area, or are not using safety harnesses or hand grips.

The system could send a gentle reminder (“please hold the handrail”) only when it needs to, or could alert the contractor that his people are out of compliance, along with photo evidence. The system could monitor yards, intersections, and rights of way to identify emerging unsafe conditions, such as heavy equipment in close proximity to people.

Some remote and offshore assets still have high levels of human presence whose sole role is to provide eyes-on to the facilities. Visual analytics systems offer the potential to substitute robotic eyes for people in the field, both permanent staff and services contractors, which improves the productivity of the workforce in general and reduces the safety concerns stemming from travel. Transitioning field services from the typical circuit to exception-based visits would lower costs dramatically.

Safety outcomes, near misses, incidents and compliance with safety protocols should all improve with visual analytic safety.

Manage field services

Imagine a supervisory engineer who has contracted for services to a well site. Using visual analytics, the system could support the supervisory engineer by automatically opening and closing gates, logging arrival and departure times, and monitoring site activities, inventory moves, fluid levels, spills and vapours. This could move supervisory engineers out of the field and provide them with greater leverage (i.e., monitoring more services at more wells in parallel).

In drilling, an analytics system could monitor drill site activities, cuttings composition, mud features, sand levels, fluids, and other relevant visual data from a central drilling and fracking control facility. Visual analytics should improve the effectiveness of field services and reduce the friction associated with contracting for services. Visual data could eventually feed directly into non-productive time (NPT) calculations, a key determinant of site cost.

Improve security

The most obvious, but probably least impactful, use case is in security monitoring. A visual analytics system could monitor perimeters and access points to facilities (such as gates), and interpret if a visitor is wildlife (a deer at the gate), an expected and authorised service team, or an unknown and presumed hostile intruder.

The system could automatically respond according to the visitor type (such as broadcasting danger sounds to animals, greetings to approved visitors, and warnings to trespassers). Security costs should decline while actual security goes up.
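The triage logic at the gate reduces to a simple mapping from the classifier’s verdict to a response. This sketch is illustrative only; the labels and responses are invented, and a cautious system would treat anything it doesn’t recognise as an intruder.

```python
# Sketch of gate-visitor triage: map the classifier's label to a response.
# Labels and responses are invented for illustration.
RESPONSES = {
    "wildlife": "broadcast danger sounds",
    "authorised_visitor": "play greeting, open gate",
    "intruder": "play warning, notify security",
}

def respond_to(visitor_type):
    # Unknown classifications fall back to the cautious default.
    return RESPONSES.get(visitor_type, RESPONSES["intruder"])

print(respond_to("wildlife"))        # a deer at the gate
print(respond_to("delivery_drone"))  # unrecognised, treated as an intruder
```

Only the unrecognised cases need a human in the loop, which is what lets one control room oversee hundreds of gates.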

Moving ahead

Visual analytics simplify control room operations, improve oversight of the operating environment, allow for consolidation of control facilities, reduce the drive-around requirements to get eyes-on-site, and reduce the number of operations headcount. This adds up to a safer, lower cost and more productive operation.
