Extract value from your #oil and #gas data assets


Is your oil and gas company sitting on a mountain of little-used data? What are 5 clever ways to leverage your digitized data assets?


Over time oil and gas companies accumulate vast quantities of digitized data assets that, for some unexplained reason, never seem to get deleted. I think it’s because engineers are naturally hoarders of their work, and constantly falling computer and data storage costs are like a bad enabling roommate.


I can’t see this trend stopping anytime soon, frankly. Society is furiously adopting technology at an increasingly frenetic pace, and oil and gas is no different. Indeed, oil and gas data by volume leans to the technical (bits regurgitated from plant controllers and sensors) and away from the commercial (details of purchase and sale transactions, payrolls, and the like), and it’s the technical data that’s growing.


The trend line is pretty clear. Oil and gas companies are going to have to get ever more digital to reset their cost structures, which will contribute yet more data to the pile.


So an astute oil executive in one of Calgary’s glass towers recently posed the question – what do I do with all the data that I have? How can I extract some value from it?


It sounds like a challenge.


Given how much data there is, where do you start? Surely some data must be of greater value than other data? Some insights must be more immediately impactful? I would begin by constructing an economic view of the business, to understand how value is created and what expectations shareholders have built into their valuation of the business. This will give direct insight into which levers really matter (volumes, prices, operating costs, capital efficiency) and the trade-offs between such inputs as machines, labour, energy, carbon and water.
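The lever view described above can be sketched as a toy model: bump each economic input by the same percentage and see which one moves cash margin most. All figures and names here are hypothetical, purely for illustration.

```python
# A toy single-asset cash-margin model, used to test which economic
# lever moves value most. All figures are hypothetical.

def annual_margin(volume_boe, price, opex_per_boe):
    """Cash margin = volume x (price - operating cost per barrel)."""
    return volume_boe * (price - opex_per_boe)

base = dict(volume_boe=100_000, price=60.0, opex_per_boe=35.0)
base_margin = annual_margin(**base)   # 100,000 * (60 - 35) = 2.5M

# Nudge each lever by 10% and compare the impact on margin.
for lever in base:
    bumped = dict(base)
    bumped[lever] *= 1.10
    delta = annual_margin(**bumped) - base_margin
    print(f"{lever:>12}: {delta:+,.0f}")
```

Even a model this crude makes the point: in this example a 10% price move swings margin more than a 10% volume move, which tells you where better data would earn its keep.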


Armed with this insight, management can put their data assets to work. Here are 5 proven ways, in ascending order of execution cost.


Build data overlays


One of the problems of oil and gas is the extreme compartmentalisation of people and their data into separate departments. These silos make it very difficult at times to see the business with clarity, but one of the easiest tactics is to simply overlay one dataset on top of another and visualise them together.


One oil and gas company took all the GPS data from its trucks and its suppliers’ trucks (generated while driving to and from gas wells), and overlaid it on a revenue, cost and production view of the business. Very revealing. The high cost of well maintenance was in part caused by all the trips that the engineers needed to make to the wells because of poor quality well information.


In another application, an operator overlaid health and safety incidents onto its human capital data, and learned that the biggest predictors of safety incidents were crew age and shift length – the older the crew, the more likely they would experience an incident as they neared the end of a long shift.


These overlays don’t take much investment – you just need two or more departments that are willing to work together in unfamiliar ways.
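An overlay like the trucks-and-wells example is, at its core, a join between two datasets keyed on the same entity. A minimal sketch, with entirely hypothetical well IDs, trip counts and costs:

```python
# A minimal sketch of a data overlay: join truck-trip counts per well
# (from GPS logs) with maintenance spend per well (from the cost
# system). All well IDs and figures are hypothetical.

gps_trips = {          # well_id -> trips recorded last quarter
    "W-101": 42,
    "W-102": 7,
    "W-103": 55,
}

maintenance_cost = {   # well_id -> maintenance spend last quarter ($)
    "W-101": 180_000,
    "W-102": 35_000,
    "W-103": 240_000,
}

# Overlay: one combined view, keyed by well, ready to chart or rank.
overlay = [
    {
        "well": well,
        "trips": gps_trips[well],
        "cost": maintenance_cost[well],
        "cost_per_trip": maintenance_cost[well] / gps_trips[well],
    }
    for well in sorted(gps_trips.keys() & maintenance_cost.keys())
]

# Rank wells by trip count to spot where repeat visits drive cost.
overlay.sort(key=lambda row: row["trips"], reverse=True)
for row in overlay:
    print(f"{row['well']}: {row['trips']} trips, ${row['cost']:,}")
```

In practice this is two departments exporting two tables and agreeing on a common key (the well identifier); the analytics itself is almost trivial.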


Donate data to science


Universities, think tanks and R&D labs are constantly on the prowl for funding for project studies which can be used to further science, contribute to journals, and create qualified graduates. The scarcity of research dollars is making these institutions more competitive, and they are now very keen to work on meaningful real-world problems that give their students better job prospects.


One of my last projects in Australia was with a government science-based agency whose mandate was to carry out research that helped foster Australia’s great industries. They wanted to shift the focus of their research from projects whose outcomes would not materialise for a decade to projects that could be implemented in months.


In earlier times they might have studied obscure geologic structures that could prove prospective in a decade. These days, their projects are aimed at taking costs immediately out of the field, such as improving the design of joints on polypropylene flow lines.


In the hotly competitive oil and gas world, even the researchers need to be competitive. Give them some of your data and task them with improving on some very short term and commercially interesting areas (such as labour reduction, or energy efficiency).


It takes a bit of effort to get this mechanism working for you: good agreements, clear expectations and careful target selection all contribute.


Run a data competition


This is an old idea, but a good one, and underexploited by the oil and gas sector. As maths and data science advance, formerly intractable problems are being solved. How about that pesky problem of scheduling and routing (the famous travelling salesman problem)? Or how to improve joint designs in steam piping? Or how to identify sweet spots for drilling and resource extraction?


I’m finding the shelf life of ideas, practices and computations to be falling very rapidly, and these new good ideas are not originating exclusively in oil and gas. The reach of the internet, the spread of knowledge, and falling compute costs mean that oil and gas compute problems could be tackled through crowd-sourcing.


Why not take some of that hard-won data and crowd-source a solution to a computational problem, or crowd-source an improvement to a mature problem? Take a look at Kaggle, a platform for running crowd-sourced innovation competitions, among other things.
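To make the routing problem mentioned above concrete, here is a greedy nearest-neighbour baseline for visiting a set of well sites. The coordinates are hypothetical, and this is exactly the kind of weak baseline a crowd-sourced competition would be asked to beat:

```python
# A nearest-neighbour heuristic for the well-visit routing problem
# (a travelling-salesman variant). Coordinates are hypothetical;
# this greedy tour is a baseline, not an optimal solution.
import math

wells = {
    "depot": (0.0, 0.0),
    "W-1": (2.0, 1.0),
    "W-2": (5.0, 4.0),
    "W-3": (1.0, 6.0),
}

def dist(a, b):
    (x1, y1), (x2, y2) = wells[a], wells[b]
    return math.hypot(x2 - x1, y2 - y1)

def nearest_neighbour_route(start="depot"):
    """Greedy tour: always drive to the closest unvisited well."""
    route, remaining = [start], set(wells) - {start}
    while remaining:
        nxt = min(remaining, key=lambda w: dist(route[-1], w))
        route.append(nxt)
        remaining.remove(nxt)
    route.append(start)  # return to the depot
    return route

route = nearest_neighbour_route()
total = sum(dist(a, b) for a, b in zip(route, route[1:]))
print(route, round(total, 2))
```

The competition framing then becomes simple: publish the (anonymised) coordinates and this baseline's total distance, and reward entries that shave kilometres off it.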



Build a data lake


You have to love the digital industry. It’s constantly inventing new terms that can both clarify and confuse at the same time. The latest is the data lake: all your data assets, in their original formats, in one vast searchable repository. It sounds like a database, but it’s more like a database of databases and datasets, stored in any number of formats, from raw data to spreadsheets to tables and more.


Sounds alarming, because you frequently hear of people drowning in lakes. Drowning in a data lake would not be a good way to check out, all those zeros and ones smothering your gurgling calls for help as you reach out for that lifeline spreadsheet just out of reach.


The best data lake will not be just a bottomless soup of meaningless electrons. You’d provide a navigation map to the data (the metadata that describes all the columns and what they mean), a set of analytic and visualisation tools, a way to search for stuff, templates that others have shared, even like buttons so that the best stuff floats to the top.


Just making data easier to access for your people is a big step forward. Slowly fill the lake with the data that can yield the greatest economic impacts.


Feed Watson the data lake


Ever heard of Watson? This is the name of IBM’s super smart computer system that can process natural language queries, search vast databases, and give high quality interpreted answers to complex questions. Watson works with structured data, reports, studies, spreadsheets, photos, images, video, audio, anything. Really. It’s a seriously clever advance in artificial intelligence and machine learning.


What if you let Watson drink your data lake?


That’s what one oil and gas company is doing. Watson frees up their engineers from having to spend precious time assembling data from many sources and reconciling the differences, and affords them more time for analysis. In a given amount of time, Watson provides a far more complete answer to any question than any one engineer could, pulling out all the historical data, charts, interviews, text notes, and so on.


Having a Watson on your team is like having the most experienced team of engineers possible, including all the retired and exited ones, on hand all the time to answer any question from anyone. How cool would that be?


In this case, the data lake is all the historic and current data about offshore oil and gas platforms and operations, and that’s what Watson is drinking.

 * * * * *

There are many other ways to capture value from data (share it with peers in the cloud, sell it outright, make it searchable for a fee, create ventures with service companies). What are your favourite ways?




