The placement of F&G detection devices is a notoriously grey area. The adequacy of detector coverage has long been a cause of debate among practitioners, some of whom have spent their entire careers trying to solve the problem.
In the late 1980s, the process of reviewing detector coverage was developed in the Oil and Gas Industry in an effort to provide a more uniform approach to the placement of devices. This method would later be termed Fire and Gas (F&G) Mapping.
Much of the debate in those early days revolved around whether a numerical value should be presented reflecting the percentage coverage afforded by the devices. Some practitioners believed that presenting percentage coverage through detection modelling (software tools) would bring consistency to detection layouts. Others believed it would encourage practitioners simply to chase that percentage target, with no thought for the complex distribution of coverage and its primary influencing factors (McNay 2014).
While this debate rumbles on today, there appears to be one primary difference between the debate of 30 years ago and that of 2019. In the early days, those developing the method had first-hand experience in F&G detection, having developed detection technologies or built, installed and maintained the systems. This led to a pragmatic focus on system performance in the field, and to a design method intended to deliver consistency within the limitations of placing optical devices in an external, unpredictable environment.
Today, the term F&G Mapping has become synonymous with software, with some going so far as to believe a detection layout can be generated by applying modelling tools with little or no engineering input. The result is tools and methods which rely heavily on theoretical detector behaviours and capabilities, and on environments which are rarely, if ever, experienced in the field (McNay 2017).
When looking at gas detection, the percentage coverage debate has regressed towards the removal of detection based on a finite set of probabilistic dispersion scenarios. Such an approach allows engineers to model a selection of Computational Fluid Dynamics (CFD) scenarios, then argue that it is acceptable to remove detectors from areas where activation occurs in, for example, only 5% of the scenarios. If this rule were applied blindly, that 5% could include the highest-consequence events.
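The risk in a pure frequency cutoff can be made concrete with a small sketch. The numbers below are entirely hypothetical (the scenario names, frequencies and consequence weights are illustrative, not from any real study): a rule that drops coverage for anything activating in at most 5% of scenarios discards exactly the scenario that dominates a frequency-times-consequence ranking.

```python
# Hypothetical illustration: pruning gas detection purely on activation
# frequency across dispersion scenarios can discard the scenario that
# matters most once consequence is taken into account.

# Each scenario: (name, fraction of CFD runs in which a detector activates,
#                 relative consequence if the release goes undetected)
scenarios = [
    ("frequent small leak",    0.60,   1.0),
    ("moderate leak",          0.35,   5.0),
    ("rare catastrophic leak", 0.05, 100.0),
]

FREQ_CUTOFF = 0.05  # "remove detection where activation occurs <= 5% of the time"

def kept_by_frequency(scens, cutoff):
    """Naive rule: retain coverage only for scenarios above the cutoff."""
    return [name for name, freq, _ in scens if freq > cutoff]

def risk_ranked(scens):
    """Rank scenarios by frequency-weighted consequence instead."""
    return sorted(scens, key=lambda s: s[1] * s[2], reverse=True)

# The frequency rule drops the rare catastrophic leak...
print(kept_by_frequency(scenarios, FREQ_CUTOFF))
# -> ['frequent small leak', 'moderate leak']

# ...yet that same scenario tops the risk ranking (0.05 * 100 = 5.0,
# versus 1.75 for the moderate leak and 0.6 for the small one).
print(risk_ranked(scenarios)[0][0])
# -> rare catastrophic leak
```

The point is not the specific weights, but that frequency alone is a one-dimensional filter applied to a two-dimensional (likelihood and consequence) problem.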
If a practitioner doesn’t understand the complexities of gas detection operation in an external environment, including the intricacies of explosion dynamics, then placing detection based on where one thinks the gas could go seems a plausible method to apply. This issue of relevant and adequate competence is raised by the National Research Council’s NAE report on Macondo (Council 2012). The report states, “There are few industry standards for the level of education and training required for a particular job in drilling.” This problem would appear to apply throughout Oil and Gas, and not solely to drilling. Expanding on this issue relating to Macondo, the US Chemical Safety and Hazard Investigation Board (CSB 2016) states that “the Macondo incident almost automatically raises questions about competency of the personnel involved.” While F&G Mapping may be a small area of Oil and Gas, it appears to coincide with a systemic competency issue in the industry.
Downturns naturally push operators to optimise costs in order to keep projects feasible. It is therefore credible that a ‘new’ approach which promises to optimise any system could be adopted, for example an increased reliance on software to design a detection layout. The problem is that there is no credible scientific evidence that greater use of software reduces the number of devices required. Practice suggests quite the contrary.
Reviewing a layout in which devices have been placed based on where gas could go, for example, more often than not reveals an excessive number of detectors.
With flame detection, a similar problem arises. The temptation exists to increase the target fire size in order to reduce the required number of flame detectors. The problem emerges when we consider that these devices are designed to detect smaller fires in order to act as a mitigative function. The FM3260 test fire (FM 2004), against which most flame detectors are certified, is not a large fire: a 1 ft² n-heptane pan fire (~40 kW Radiant Heat Output [RHO]). When we then design a system to detect a much larger fire (>500 kW RHO), there is no credible verification that the design intent will be met.
This presents two problems:
1) If our target fire is a worst-case scenario fire, the system cannot credibly be relied upon to detect any smaller fire up to that worst case, so our ability to mitigate is eliminated.
2) Flame detectors which are designed and manufactured to detect small fires (~40 kW RHO) will likely struggle to detect excessively large fires, due to sensor saturation, for example. An appreciation of this cannot be gained from applying a software tool, but only from experience of designing a holistic detection system.
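The scaling behind the target-fire-size temptation can be sketched with an idealised point-source radiation model. This is a deliberate simplification (real detectors also depend on spectral response, flicker analysis, obstruction and saturation, none of which the model captures), and the assumed 30 m certified range for the ~40 kW test fire is a hypothetical figure for illustration, not a certified value for any specific device:

```python
import math

def radiant_flux(rho_kw, distance_m):
    """Point-source idealisation: radiant flux (kW/m^2) at a given
    distance from a fire with the stated radiant heat output (RHO)."""
    return rho_kw / (4 * math.pi * distance_m ** 2)

def detection_range(rho_kw, threshold_kw_m2):
    """Distance at which the flux falls to an assumed detector threshold."""
    return math.sqrt(rho_kw / (4 * math.pi * threshold_kw_m2))

# Hypothetical threshold, back-calculated so the ~40 kW test fire is
# "seen" at an assumed 30 m.
threshold = radiant_flux(40, 30)

# Under this model a >500 kW fire is seen further away...
print(round(detection_range(500, threshold), 1))
# -> 106.1

# ...but range grows only with the square root of RHO: designing the
# layout around the large fire spaces detectors for sqrt(500/40) ≈ 3.5x
# the distance, and says nothing about whether the ~40 kW fire the
# detector was actually certified against would still be detected.
```

The square-root relationship is why "just pick a bigger target fire" thins out the layout so aggressively, while leaving the small-fire mitigative function unverified.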
This issue of the safety mindset shifting under industry pressures is highlighted by Verweijen and Lauche (2019), who state: “For oil companies, maintaining the flexibility to adapt practices to organizational needs is a deeply embedded institutional norm. Attempts for standardization of practices is therefore generally resisted.” Ultimately, this leads to established good practice, built on historical lessons learned, being discarded, only for the same problems to arise again. The application of dispersion modelling to the placement of gas detection devices is one such area.
Verweijen and Lauche expand on this, stating that “the pervasive variability in training also is driven by the ‘boom-and-bust’ cycle in the oil industry. Periods of high investments followed by periods of underinvestment have created chronic discontinuities in experience and competence across the pool of industry workers.”
In conclusion, the temptation to alter practices can have undesired effects and cause the recurrence of historical problems (false alarms, over-engineering etc.). The industry needs a long-term view of how the implementation of F&G Mapping can deliver a more consistent approach while still allowing more optimised, performance-based designs. One symptom of the current problem is the common assumption that applying software alone constitutes ‘F&G Mapping’. Only the combination of F&G modelling software tools (where the complexity of the application requires them) and engineering knowledge/experience should be termed ‘F&G Mapping’.
For more information, go to www.micropackfireandgas.com