AI Works, but It Needs More to Ensure Safety

Friday, September 14, 2018 @ 05:09 PM gHale

As artificial intelligence (AI) systems begin to control safety-critical infrastructure across a growing number of industries, the technology alone is not enough to ensure a safe manufacturing enterprise.

Data-driven models alone may not be sufficient to ensure safety, and a combination of data-driven and causal models is needed to mitigate risk, according to a DNV GL position paper on the subject.
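
The paper's central recommendation, combining causal models with data-driven ones, can be made concrete with a short sketch. The Python example below is purely illustrative and is not drawn from the DNV GL paper: a simplified physics-based (causal) model of pipe pressure drop is corrected by a data-driven term fitted to the residuals, so the learned part covers only what the physics cannot explain. The friction coefficient, flow data, and fouling effect are all invented.

```python
import numpy as np

# Illustrative sketch only: the DNV GL paper argues for combining causal
# and data-driven models but does not prescribe this implementation.

def causal_model(flow_rate):
    """Physics-based (causal) estimate: pressure drop grows roughly with
    the square of flow rate (simplified friction-loss relation)."""
    k = 0.8  # assumed friction coefficient (hypothetical)
    return k * flow_rate ** 2

# Synthetic operating history: the real system deviates slightly from the
# idealized physics, e.g. due to fouling.
rng = np.random.default_rng(0)
flow = rng.uniform(1.0, 10.0, size=200)
observed = causal_model(flow) + 0.5 * flow + rng.normal(0.0, 0.2, size=200)

# Data-driven component: learn only the residual the causal model cannot
# explain, rather than learning the whole phenomenon from data.
residual = observed - causal_model(flow)
coeffs = np.polyfit(flow, residual, deg=1)  # simple linear correction

def hybrid_model(flow_rate):
    """Causal backbone plus learned correction."""
    return causal_model(flow_rate) + np.polyval(coeffs, flow_rate)

# Outside the range covered by the data, the causal term still
# extrapolates sensibly, which a purely data-driven model cannot guarantee.
print(hybrid_model(12.0), causal_model(12.0))
```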

“Safety can be defined as ‘freedom from risk which is not tolerable’ (ISO). This definition implies that a safe system is one in which scenarios with non-tolerable consequences have a sufficiently low probability, or frequency, of occurring. AI and ML (machine learning) algorithms need relevant observations to be able to predict the outcome of future scenarios accurately, and thus, data-driven models alone may not be sufficient to ensure safety as usually we do not have exhaustive and fully relevant data,” said the paper written by Simen Eldevik, PhD, principal research scientist at DNV GL.
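
In quantitative terms, the ISO definition quoted above reduces to a check: the estimated frequency of every scenario must stay below the frequency judged tolerable for its consequences. A minimal sketch of that check, with scenario names, frequencies, and thresholds invented for illustration:

```python
# Minimal sketch of the ISO-style tolerability check described above.
# All scenario names, frequencies, and thresholds are hypothetical.

# Estimated frequency of each scenario (events per year) and the maximum
# frequency judged tolerable given that scenario's consequences.
scenarios = {
    "minor leak":       {"frequency": 1e-2, "tolerable": 1e-1},
    "process shutdown": {"frequency": 5e-3, "tolerable": 1e-2},
    "major release":    {"frequency": 2e-5, "tolerable": 1e-4},
}

def is_safe(scenarios):
    """Safe in the ISO sense: every scenario's estimated frequency is
    below what is tolerable for its consequences."""
    return all(s["frequency"] < s["tolerable"] for s in scenarios.values())

print(is_safe(scenarios))  # True: all risks within tolerable bounds
```

The catch, as the quote notes, is that those frequency estimates are only as good as the observations behind them.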

Entitled “AI + Safety,” the paper details the advance of AI and how autonomous, self-learning systems are taking on more and more responsibility for safety-critical decisions.

As the complexity of engineering systems increases, and more and more of them are interconnected and controlled by computers, human minds are hard pressed to cope with, and understand, the enormous and dynamic complexity that results, the paper said.

In fact, it seems unlikely that human oversight can be applied to many of these systems at the timescale required to ensure safe operation. Machines need to make safety-critical decisions in real time, and industry has the ultimate responsibility for designing artificially intelligent systems that are safe.

The operation of safety-critical systems has traditionally been automated through control theory, making decisions based on a predefined set of rules and the current state of the system. AI, by contrast, tries to learn reasonable rules automatically from previous experience.
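
The two styles can be contrasted in a few lines of code. In the hedged sketch below, a classical controller applies a predefined over-pressure trip rule, while a data-driven controller infers its own threshold from labeled operating history; the trip set point, the data, and the toy midpoint learner standing in for a real ML algorithm are all invented.

```python
import numpy as np

# Two styles of automation for a hypothetical over-pressure trip.
# The set point, data, and learning rule are invented for illustration.

TRIP_PRESSURE = 120.0  # bar: a predefined engineering limit

def rule_based_controller(pressure):
    """Control-theory style: act on a predefined rule and current state."""
    return "shut_down" if pressure > TRIP_PRESSURE else "continue"

# AI style: learn the rule from previous experience.
rng = np.random.default_rng(1)
safe_pressures = rng.normal(95.0, 10.0, size=500)    # routine operation
unsafe_pressures = rng.normal(135.0, 8.0, size=20)   # incidents are rare

# Toy learner: put the decision boundary midway between the mean safe and
# mean unsafe observations (a stand-in for a real ML algorithm).
learned_threshold = (safe_pressures.mean() + unsafe_pressures.mean()) / 2

def learned_controller(pressure):
    """Data-driven style: the rule itself was inferred from data."""
    return "shut_down" if pressure > learned_threshold else "continue"

print(rule_based_controller(125.0), learned_controller(125.0))
print(f"learned threshold: {learned_threshold:.1f} bar vs set point {TRIP_PRESSURE} bar")
```

With only 20 incident records, the learned threshold depends heavily on which few failures happen to be in the data, which is exactly the weakness the paper highlights.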

Because major incidents in the oil and gas industry are rare, such scenarios are not well captured by data-driven models alone; not enough failure data is available to support such critical decisions. AI and machine-learning algorithms, which currently rely on data-driven models to predict and act upon future scenarios, may therefore not be sufficient to assure safe operations and protect lives.
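
The data-scarcity problem is easy to demonstrate. In the hedged sketch below, on entirely synthetic operating history with only two incidents in 10,000 records, a naive data-driven model that always predicts "no incident" scores near-perfect accuracy while catching no incidents at all:

```python
import numpy as np

# Why scarce failure data undermines purely data-driven safety models.
# The data are synthetic; no real incident statistics are used.

n = 10_000
labels = np.zeros(n, dtype=int)  # 0 = normal operation
labels[:2] = 1                   # only 2 major incidents on record

# A naive model that maximizes accuracy on this history simply predicts
# the majority class ("no incident") every time.
predictions = np.zeros(n, dtype=int)

accuracy = (predictions == labels).mean()
incidents_caught = int((predictions[labels == 1] == 1).sum())

print(f"accuracy: {accuracy:.4f}")                                   # 0.9998
print(f"incidents predicted: {incidents_caught} of {labels.sum()}")  # 0 of 2
```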
