In oil and gas, cutting out unplanned downtime is key to being “always on”

By Edward Fun, Head of Business Development, Asia Pacific, Stratus Technologies

Like other sectors, the oil and gas (O&G) sector in Asia-Pacific has undergone significant transformation amid unprecedented challenges in recent years.

From supply disruptions to the gradual transition to cleaner energy, significant headwinds have forced many companies to find new ways to respond to change, boost competitiveness and be more proactive in their outlook.

To do so, O&G companies have sought to modernize, digitalize, and transform, embracing Industry 4.0+ and incorporating automation and digitalization.

This is important to their priorities of reducing costs, addressing space constraints, and improving safety standards. In a tight market, rewards are earned by those that can produce more energy at lower cost and with fewer emissions.

For example, companies are seeking to maximize productivity per square foot in their facilities and scale up production without expanding their physical footprint.

Addressing this need, the Asia-Pacific O&G automation market is expanding from US$14.9 billion in 2020 to a projected US$19 billion by 2026, according to research firm Research and Markets.

Safety and reliability drive this growth, with process automation helping O&G producers integrate information, control, power, and safety solutions to respond to dynamic global demand.

The importance of the edge

In many of these transformation efforts, edge computing systems have become foundational building blocks for the sector's digital systems.

They have enabled higher performance and resilience, while allowing operators to monitor and respond to situations more effectively. These systems also help reduce the amount of data offloaded to faraway cloud data centers for processing and sent back on-site for action.

Throughout the O&G sector, edge computing systems have made a real impact in many companies’ rapid digitalization in recent years.

Consider an upstream company. On an offshore platform, ruggedized edge servers run OT and IT applications to monitor and control equipment.

With AI-enabled predictive maintenance, employees take fewer trips to a remote part of the world to swap out a faulty component or repair a malfunctioning valve.

In midstream, edge computing systems now analyze camera footage at terminals on land or at sea and send alerts of potential flares or smoke that may signal an emergency. This helps a company’s Health, Safety, and Environment (HSE) endeavors without adding more manual effort.

Finally, downstream at refineries and local gas distribution networks, edge computing systems are enabling offsite management, including monitoring for power interruptions and other issues. Some issues may even be resolved remotely, allowing staff to put things right without traveling long distances to the site.

Cutting out unplanned downtime

With so much riding on the digital infrastructure, it is crucial to have edge computing systems that are robust and fit to operate in the industrial settings that are common to the O&G sector. A chief concern here is unplanned downtime.

A 2016 study by Kimberlite found that offshore O&G companies lost an average of US$38 million annually to unplanned downtime, with some losing up to US$88 million a year.

In another 2016 study by Vanson Bourne, commissioned by ServiceMax, hardware failure or malfunction was the most common root cause, responsible for almost half of disruptions.

The second most common was software failure or malfunction (39%), followed by overload (39%), user error (19%), security breach (14%) and humidity (11%).

For many O&G companies, digital systems are deployed in roles such as terminal/pipeline operations, critical equipment monitoring, predictive maintenance, and HSE management.

Disruptions in these areas could jeopardize operations and safety. This is why always-on systems should be a key consideration to reduce downtime risk and increase business agility.

It is important to understand what “always on” entails. For a start, a conventional, unmanaged system delivers 99% uptime, which works out to more than 87 hours of downtime a year. Going by downtime costs averaged across industries, this could mean an annual cost of more than US$14 million.

The next step up is a high-availability setup with server clusters, replication software, and virtual machines. This provides 99.95% availability, cutting downtime to just over four hours a year but still costing more than US$700,000 annually.

In comparison, a fault-tolerant server with 99.999% uptime incurs just over five minutes of downtime a year, reducing costs to about US$14,000 annually: 1,000 times less than the cost of running a server with 99% uptime. This makes a fault-tolerant server the gold standard for an “always on” solution.
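The arithmetic behind these tiers can be sketched in a few lines. Note that the roughly US$160,000-per-hour downtime cost used here is an assumption chosen so the results line up with the figures above; real per-hour costs vary widely by operation.

```python
# Sketch of the downtime arithmetic behind the uptime tiers above.
# The cost-per-hour figure is an illustrative assumption, not an industry constant.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a non-leap year

def annual_downtime_hours(availability: float) -> float:
    """Hours of downtime per year at a given availability fraction."""
    return (1.0 - availability) * HOURS_PER_YEAR

def annual_downtime_cost(availability: float, cost_per_hour: float = 160_000) -> float:
    """Estimated annual downtime cost in US dollars."""
    return annual_downtime_hours(availability) * cost_per_hour

for label, a in [("Conventional (99%)", 0.99),
                 ("High availability (99.95%)", 0.9995),
                 ("Fault tolerant (99.999%)", 0.99999)]:
    print(f"{label}: {annual_downtime_hours(a):.2f} h/year, "
          f"~US${annual_downtime_cost(a):,.0f}")
```

At 99% availability this yields roughly 87.6 hours of downtime and about US$14 million a year; at 99.999% it drops to just over five minutes and about US$14,000, matching the 1,000-fold difference described above.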

Finding the right solution

For O&G operators, finding the right systems to run mission-critical industrial software is vital for business continuity and reducing unplanned downtime costs.

As the sector adopts Industry 4.0+ technologies, it must seek digital systems that connect cameras, sensors, and software applications, while providing performance headroom for future upgrades and resilience to downtime.

After all, a computer failure often means more than replacing hardware – software usually has to be upgraded at added cost to keep up.

A short life cycle for one’s servers also means replacement is a perennial concern, taking away precious time from achieving operational excellence.

This could be the key difference between a winner and an also-ran in the long term, where agility and proactiveness will have an important bearing on success in the sector.

As O&G companies face new challenges in the years ahead, many will likely be digitally connected enterprises that have undergone thorough transformation efforts.

Keeping those digital systems up will be critical. In an always-on world, this calls for the right solutions for the job – ones that make a difference by being robust, resilient and simple to operate.