SIS Test Intervals: Know Facts before Changing

Friday, August 22, 2014 @ 02:08 PM gHale


By Edward Marszal
An operating company engineer ended up asked to confirm that safety instrumented systems were suitable for increasing the test interval up to seven years from a current figure of five years.

Even though he believed the calculations would show the increased test interval was acceptable, he was hesitant to make such a drastic two-year shift based only on gut-level discomfort. It turns out there is a solid technical rationale for why his gut was telling him the increase did not feel right, even though the “perfect math” of SIL verification calculations might have been able to justify it.


It is no secret refineries are always trying to extend the run length between turnarounds in order to minimize expense. In doing so, they are also increasing the time interval between SIS tests, if tests are only possible during the shutdown the turnaround provides. While it might seem the determination of whether or not these extended intervals are acceptable is a simple matter of re-running the SIL verification calculations with a different test interval, the reality is a bit more complex.

SIL verification calculations depend on failure rates for SIS equipment items. The data we use for those failure rates is often collected from actual operating SIS equipment in the field and compiled into databases such as OREDA and NPRD. These databases simply show a single (constant) failure rate for each device, implying that the single number is an attribute of that specific type of device, but again, the truth is much more complex.
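As a rough illustration of the “perfect math,” the simplified average probability of failure on demand (PFD) formula for a single-element SIS shows how a longer interval can still appear acceptable on paper. This is a sketch, not the engineer's actual calculation; the failure rate below is an assumed placeholder, not an OREDA or NPRD value.

```python
# Simplified PFD_avg for a single (1oo1) SIS element, assuming a
# constant dangerous-undetected failure rate over the test interval.

HOURS_PER_YEAR = 8760

def pfd_avg(lambda_du, test_interval_hours):
    """PFD_avg ~= lambda_du * TI / 2 for a 1oo1 element, constant rate."""
    return lambda_du * test_interval_hours / 2

LAMBDA_DU = 2.0e-6  # dangerous undetected failures per hour (assumed)

for years in (5, 7):
    pfd = pfd_avg(LAMBDA_DU, years * HOURS_PER_YEAR)
    print(f"{years}-year test interval: PFD_avg = {pfd:.4f}")
```

With this assumed rate, both the five-year and seven-year intervals land in the same SIL band, which is exactly how the calculation can appear to bless the change while the constant-rate assumption behind it hides the real question.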

When we collect and use data for failure rate calculations we are making two fundamental assumptions that might not be obvious to everyone who performs SIL verification calculations. These assumptions are:
• Constant Failure Rate
• Well designed and well maintained equipment

The first assumption is that the failure rate of an instrument is constant over its entire lifetime. Stated another way, this implies the probability of a device failing in year one is exactly the same as the probability of it failing in year two, five, 10 or 20. While the constant failure rate assumption is fairly valid for electronic equipment during its useful life (i.e., after burn-in but before wear-out failures start to occur, usually about 10 years after fabrication of the equipment), it is less valid for equipment with wearable moving parts, such as a valve.
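One way to see the consequence is to compare a constant-rate (exponential) failure model with a wear-out (Weibull, shape greater than one) model calibrated so the two agree at the five-year test point. All parameters here are illustrative assumptions, not field data.

```python
import math

def cum_fail_exp(t_years, lam):
    """Cumulative failure probability with a constant failure rate."""
    return 1 - math.exp(-lam * t_years)

def cum_fail_weibull(t_years, eta, beta):
    """Cumulative failure probability, Weibull; beta > 1 means wear-out."""
    return 1 - math.exp(-((t_years / eta) ** beta))

LAM = 0.02    # failures per year (assumed)
BETA = 2.0    # wear-out shape factor (assumed)
# Choose the Weibull scale so both models agree at the 5-year test point:
ETA = 5 / (LAM * 5) ** (1 / BETA)

for t in (5, 6, 7):
    print(f"year {t}: constant-rate F = {cum_fail_exp(t, LAM):.4f}, "
          f"wear-out F = {cum_fail_weibull(t, ETA, BETA):.4f}")
```

Past the five-year point the wear-out curve pulls ahead of the constant-rate curve, so a single rate fitted to fleets tested every five years understates the failure probability of a wearing device in years six and seven.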

As we collect data, we generally do not record when the failure occurred relative to the installation of the equipment item, so databases will generally produce failure rates representative of equipment items across the full range of ages typically in service. As such, if most operating companies are performing turnaround tests (and also performing maintenance) at a five-year interval, then the databases we use for SIL verification calculations reflect SIS instruments that are in service for up to five years between tests.

If a user goes beyond the typical turnaround intervals, increasing them to six or seven years, then SIL verifications based on data from instruments with shorter test intervals do not accurately represent the increased failure rates to be expected as the between-testing and between-maintenance intervals grow, particularly for instruments that are six and seven years past their last test and maintenance. Engineering judgment therefore indicates that using typical failure rate data is too aggressive a stance, but industry data reflecting the longer test intervals is not yet available.

The assumption about the well designed and well maintained system comes into play in a very similar way. If a routine maintenance task, such as greasing a bearing, replacing packing, or replacing seals, is performed at every turnaround, and a failure of those components can cause a failure of the SIS, then the failure rate data depends critically on those maintenance actions occurring at the five-year interval. If the maintenance activity does not occur until six or seven years after the start of a run, one can infer the failure rates (especially as the devices reach the sixth and seventh year) will increase. Since the bulk of industry, from which the typical failure rate data derives, performs its maintenance on a shorter four- to five-year interval, it again follows that if the test interval is increased to six or seven years while failure rates from four- to five-year maintenance regimes are still used, the PFD calculations used to verify the achieved SIL will be in error, and in an aggressive, non-conservative way.
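To put a rough number on that non-conservatism, suppose, purely as an assumption, the dangerous failure rate doubles once a device runs past its usual five-year maintenance point. The average PFD over a seven-year interval can then be compared against the figure the constant, five-year-calibrated rate would give.

```python
HOURS_PER_YEAR = 8760

def pfd_avg_step(lam1, lam2, interval_h, step_h):
    """Average PFD over one test interval when the dangerous failure rate
    steps from lam1 to lam2 at time step_h. Approximates the time-dependent
    PFD by the cumulative hazard, valid while lambda * t stays small."""
    # Integral of the cumulative hazard over [0, step_h] ...
    early = lam1 * step_h ** 2 / 2
    # ... and over [step_h, interval_h]:
    late = (lam1 * step_h * (interval_h - step_h)
            + lam2 * (interval_h - step_h) ** 2 / 2)
    return (early + late) / interval_h

LAM_BASE = 2.0e-6   # per hour, assumed (what 5-year-based data reports)
LAM_LATE = 4.0e-6   # assumed doubling after the skipped maintenance point

T7 = 7 * HOURS_PER_YEAR
STEP = 5 * HOURS_PER_YEAR

constant = pfd_avg_step(LAM_BASE, LAM_BASE, T7, STEP)  # what the data implies
stepped = pfd_avg_step(LAM_BASE, LAM_LATE, T7, STEP)   # what wear-out implies
print(f"constant-rate PFD_avg: {constant:.4f}")
print(f"stepped-rate PFD_avg:  {stepped:.4f}")
```

Even with this modest assumed step-up, the constant-rate figure understates the average PFD, and the error is always in the non-conservative direction.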

Unfortunately, a large increase in test intervals, especially relative to what industry peers are doing, may result in non-conservative SIL verification calculations whose underlying data does not accurately represent the operating regime the plant is in after the intervals have been increased.

In order to prudently increase test intervals, the between-testing interval needs to be increased more slowly, perhaps one half year at a time, giving the actual failure rate data collected by the plant and by industry as a whole time to catch up with the plant's new operating profile.

Edward Marszal, PE, is president of Kenexis and is responsible for instrumented safeguard design basis development and verification/validation projects. He is the author of the book Safety Integrity Level Selection.


