What's wrong with toxic gas detection

Back in the 1980s, as a young salesman, I was introduced to a highly regarded professor of chemistry from the University of Birmingham (England). The visit was part of a UK tour organized by our newly minted dealer. More than merely highly regarded, this gentleman was described to me as the "leading analytical chemist in the UK," and he had been tasked with developing the university's environmental chemical analysis program.

After showing me his well-appointed lab, he couldn’t hide his dismay with his fellow environmentalists: “These people throw around terms like parts-per-million and parts-per-billion, but have no clue as to how difficult it is to achieve good analytics at these levels.”

He was right, of course, and sadly, little has changed 30-odd years later. Except for a few gases that have readily available calibration standards, or for certain rare instances in which sophisticated analytical techniques must be used, the world of toxic gas detection has been limited to simple applications and simple solutions. Ironically, instrument packaging and sensor miniaturization have proceeded apace, giving us ever more elegant ways to do the same simplistic things.

In this article, we will examine some of the reasons for this, and describe an encouraging development.

Practical detection of both asphyxiant and combustible gases dates back to 1816 and the Miner's Safety Lamp. This lamp employs an ingenious flame arrestor, usually a screen with tiny holes, through which the flame cannot escape to ignite combustible gases outside the lamp. In the presence of such gases, the flame burns higher, with a blue tinge; in the presence of asphyxiant gases, it is extinguished, providing an early warning of imminent danger. Vegetable oil was the original lamp fuel of choice.

Sadly, for a variety of reasons, these lamps did not provide the boon to safety originally hoped for, and gave way to electric lamps at the turn of the twentieth century.

The familiar canary in the coal mine (a so-called "sentinel species") was introduced by Scottish physiologist John Scott Haldane circa 1913. While not quantitative, canary distress or death was a sure, reliable sign of imminent human danger. No special training was needed to use a canary, and no maintenance beyond its modest feeding was required.

In 1917, the first gas detector tube was developed at Harvard University for measuring carbon monoxide, the most common canary-killing culprit. These devices consist of a glass tube filled with a chemical reagent that changes color when an air sample containing the gas of interest is passed through it. The original detector tubes came with color comparison charts relating concentration to the extent of the color change. Various embodiments of hand pumps are used to draw the air sample through the reagent.

By the 1930s, detector tubes for many compounds were commercially available, and some years later, most tubes were converted to a length-of-stain design: the color change works its way along the reagent column to an extent determined by the concentration of the analyte gas. A further refinement saw each tube in a given production batch printed with a graduated scale, so that the gas concentration could be read directly.

Although detector tubes are not particularly accurate (±25%), they are easy to use and require little operator training. Cross-interferences on detector tubes are not uncommon, and the industry has done a decent job of documenting such effects.

You might be noticing a pattern here: Ease of use is key; accurate and precise analytics, not so much.

Operating in a sort of parallel universe to toxic gas detection is the field of combustible gas detection. Many gases used in industry are potentially explosive, and affected environments must be monitored. Typical alarm set points are at 10% and 20% of the lower explosive limit (LEL). Note that combustible gas detectors generally operate in the percent range, while toxic gas detectors operate in the parts-per-million or parts-per-billion range.
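To make the difference in scale concrete, here is a minimal sketch in Python. The methane LEL and the carbon monoxide PEL are standard published values; the function and variable names are mine, chosen for illustration:

```python
# Why combustible and toxic gas detection live on different scales.
# LEL values are expressed in % by volume; 1% by volume = 10,000 ppm.

LEL_METHANE_PCT_VOL = 5.0  # lower explosive limit of methane, % by volume

def lel_alarm_in_ppm(lel_pct_vol: float, alarm_pct_of_lel: float) -> float:
    """Convert an alarm set point given as % of LEL into ppm."""
    return lel_pct_vol * 10_000 * (alarm_pct_of_lel / 100)

# A 10% LEL alarm for methane fires at 5,000 ppm...
print(lel_alarm_in_ppm(LEL_METHANE_PCT_VOL, 10))  # 5000.0

# ...while a typical toxic gas alarm, such as the 50 ppm OSHA PEL for
# carbon monoxide, sits two orders of magnitude lower.
```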

By the 1960s, instrumentation methods were becoming established in certain aspects of gas detection. Unlike previous methods, however, these instruments require some form of calibration. That is, the instrument must be challenged with a known standard so that it can be set up to display an accurate reading. As you can well imagine, creating standards at ppm or ppb levels is far more challenging than doing the same at percent levels.
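To see why, consider the arithmetic of dynamic dilution, the usual way a low-level standard is generated from a cylinder. The numbers below are illustrative only, not a recipe:

```python
# Dynamic dilution: blend a cylinder standard into clean diluent air.

def diluted_concentration_ppm(c_cylinder_ppm: float,
                              flow_standard_ml_min: float,
                              flow_diluent_ml_min: float) -> float:
    """Concentration leaving the blending point, by simple flow ratio."""
    total_flow = flow_standard_ml_min + flow_diluent_ml_min
    return c_cylinder_ppm * flow_standard_ml_min / total_flow

# Turning a 100 ppm cylinder into a 1 ppm challenge gas requires a 100:1
# dilution, e.g., 10 mL/min of standard into 990 mL/min of clean air,
# and any error in EITHER flow propagates directly into the standard.
print(diluted_concentration_ppm(100.0, 10.0, 990.0))  # ~1.0 ppm
```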

Yet, a false analogy was almost immediately established between toxic and combustible gas detection, presumably because they are both…gas detection. Indeed, I served on numerous standards committees in the 1980s, which were led by old-time "gas detection" types who, virtually without exception, were only knowledgeable in the less demanding field of combustible gas detection. This led to a series of problems that persist to this day:

1. While calibration of combustible gas detectors presents few problems, calibration of toxic gas detectors can be difficult, especially if you venture beyond the easy ones: carbon monoxide, hydrogen sulfide, and a few other "common" toxic gases. More than that, it is still accepted practice to calibrate combustible gas sensors with gases other than the target analyte, using response factors (sometimes called transfer factors) that supposedly correct for the use of a non-analyte calibration standard; a sketch of this arithmetic follows this item.

There are also manufacturers who claim that their instruments “never” need calibration. Pray tell, how does a user know that the units are working properly? We have dealt with the notion of “bump testing” in this Knowledge Base article.
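As promised above, here is a hedged sketch of how such a response factor is applied. The factor of 2.0 is purely illustrative; real values vary by sensor and manufacturer:

```python
# Applying a response ("transfer") factor to a combustible gas reading
# taken on a sensor calibrated with a surrogate gas.

def corrected_pct_lel(raw_pct_lel: float, response_factor: float) -> float:
    """Scale a reading taken on gas A to estimate the true %LEL of gas B."""
    return raw_pct_lel * response_factor

# A sensor calibrated on methane and reading 10% LEL in a pentane
# atmosphere might actually be seeing ~20% LEL, past a typical 20% LEL
# alarm set point. The single factor hides real uncertainty.
print(corrected_pct_lel(10.0, 2.0))  # 20.0
```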

2. Commercial calibration standards do exist for most, but not all, compounds of interest in toxic gas detection. In many cases, though, such standards come in the form of permeation devices, rather than cylinder gas. Permeation devices require specialized and expensive equipment, which discourages their use by all but the most enthusiastic and well-heeled organizations. (The basic arithmetic is sketched below.)

Calibration is at the very heart of gas detection, even if few people in the industry acknowledge this publicly. Often, the standards vendors themselves are less than forthright on the matter, offering barely sufficient documentation and support for their products. Another Knowledge Base article deals with a poorly understood topic essential to the use of permeation devices.
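For reference, the arithmetic behind a permeation device is straightforward, even if the practice is not. The standard equation assumes ideal-gas behavior at 25 °C and 1 atm; the hydrogen sulfide numbers below are hypothetical:

```python
# Concentration delivered by a permeation tube swept by a dilution flow,
# assuming 25 degC and 1 atm (molar volume 24.46 L/mol).

MOLAR_VOLUME_25C = 24.46  # L/mol

def permeation_ppm(rate_ng_min: float,
                   mol_weight_g: float,
                   dilution_flow_ml_min: float) -> float:
    """C (ppm) = 24.46 * P (ng/min) / (MW * F (mL/min))."""
    return (rate_ng_min * MOLAR_VOLUME_25C) / (mol_weight_g * dilution_flow_ml_min)

# A hypothetical 170 ng/min hydrogen sulfide tube (MW 34.08), swept by
# 1,000 mL/min of clean air, delivers roughly 0.12 ppm. Holding that
# permeation rate constant is what demands the specialized equipment:
# the rate is extremely sensitive to temperature.
print(permeation_ppm(170, 34.08, 1000))  # ~0.122
```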

This nonchalance regarding calibration rigor is an unfortunate characteristic of both toxic and combustible gas detection. It is especially egregious in this era of many OSHA PELs set at 1 ppm or even lower.

3. The historical emphasis on ease of use has distorted the market, so that "toxic gas detection" is virtually equated with confined-space entry applications (those in which the air is tested for oxygen, combustibles, and one or more common toxics). As such, there are any number of cookie-cutter products available. While this promotes healthy competition within a particular type of gas detection, it can lead to unrealistic assumptions in other areas. The fact is, some applications are just not that simple, and may also be costly.

4. Further to this point, there is a disproportionate emphasis on portable and personal monitoring devices. Consider an occupancy with a known toxic gas hazard and a number of affected employees. What makes more sense: outfitting each of them with a portable device, subject to maintenance lapses and any number of "human error" and even sabotage issues, or monitoring the AREA in question with a professionally designed and installed system? You would be surprised how many safety and industrial hygiene officials choose the former.

Area monitoring systems are far more capable of providing valuable time-history analysis of exposure data, not to mention triggering area-wide alarms. Of course, they can appear to be more expensive than personal or portable devices; ironically, though, one area monitoring system is usually less expensive than dozens of personal/portable units.

5. A final pet peeve involves a seemingly officially endorsed misunderstanding of units of measurement. Gas concentration should not be expressed in weight-per-volume units such as mg/m³, since these units are affected by temperature and pressure; ppm, being a molar ratio, is not. The sketch below makes this concrete.
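Here is a short illustration using the ideal gas law. The hydrogen peroxide figures come from the published OSHA PEL; the function names are mine:

```python
# Why weight/volume units drift: the mg/m3 <-> ppm conversion depends on
# molar volume, which follows the ideal gas law (V = RT/P).

R = 8.314  # kPa*L/(mol*K)

def molar_volume_l(temp_c: float, pressure_kpa: float) -> float:
    """Molar volume in liters per mole at the given conditions."""
    return R * (temp_c + 273.15) / pressure_kpa

def mg_m3_to_ppm(mg_m3: float, mol_weight_g: float,
                 temp_c: float, pressure_kpa: float = 101.325) -> float:
    return mg_m3 * molar_volume_l(temp_c, pressure_kpa) / mol_weight_g

# The same 1.4 mg/m3 of hydrogen peroxide (MW 34.01, OSHA PEL 1 ppm)
# corresponds to ~1.0 ppm at 25 degC but only ~0.9 ppm at 0 degC.
# The ppm value, a molar ratio, is the condition-independent one.
print(mg_m3_to_ppm(1.4, 34.01, 25.0))  # ~1.01
print(mg_m3_to_ppm(1.4, 34.01, 0.0))   # ~0.92
```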

Where do we go from here? At Interscan, we do see an encouraging trend. Industries new to gas detection, such as food/ag, are faced with monitoring difficult compounds such as hydrogen peroxide and peracetic acid. We were quite surprised to discover that many of these companies are very familiar with wet analytical methods for these substances, and they wasted no time in comparing their wet-chemical results with the data from our instruments. Trust me, this is rare. It should be noted that no commercial calibration standards for hydrogen peroxide or peracetic acid are available, so we developed our own.

Hydrogen peroxide has an OSHA PEL of 1 ppm, and peracetic acid does not even have one, although the ACGIH TLV-STEL is 0.4 ppm. Could it be that a new wave of customers, with a new wave of analytical needs, will transform the gas detection industry? As the Magic 8-Ball toy would say: “Signs point to yes.”
