Indoor air quality monitoring, formerly a relatively obscure field, has recently been pushed to center stage in the healthy building design conversation.
This dramatic shift from margin to mainstream has been catalyzed by a perfect storm of highly publicized air quality ‘events’: the ‘airpocalypses’ sweeping countries such as India and China, and the arrival of low-cost air sensors promising precautionary advice, to name but a few. Paired with studies from reputable institutions, such as the aforementioned Harvard CogFx study, and the rapid socialization of data made possible by social media, these events have added fuel to the fire.
This, plus a paradigm shift in certification standards away from purely environmental concerns and toward operational and human health concerns, has resulted in a veritable ‘gold rush’ of air quality sensors and filters of every make and model. We are witnessing a wild west of claims and practices with these devices, and the field is currently very much ‘buyer beware’, owing to limited market education and even lower quality standards. The field of indoor air quality is doubly exposed, with performance claims being made for both hardware and software. The net result has been the creation of vast amounts of low-quality data (big, bad data), widespread confusion and a wealth of poor decisions based on that data.
Whether the data concerns air quality, energy or other parameters, it is important to remind ourselves of its primary purpose.
Data is used to make decisions and inform action. It follows therefore that in order to make good decisions we need good data. In the built environment, we define good data as that which allows us to keep occupants safe and operate buildings efficiently and effectively. The production of good, meaningful data relies on three main variables: the performance of sensor hardware, the location of sensor hardware and finally, proper communication of the data and results.
In the world of air quality monitors, it is currently extremely challenging for the average user to tell the difference between a consumer device and a professional one, despite what often amounts to a 100x difference in price. After all, the claims made are often virtually identical. However, in terms of initial and ongoing performance, the units are in completely different leagues. Higher quality sensors tend to remain accurate within 5-15% over extended periods of time. They can also be calibrated, a critical feature for any sensor. The accuracy of lower quality sensors tends to drift rapidly over time, with many being effectively unusable from the day they are installed. Unsuspecting users are often sold consumer-grade sensors at commercial prices, particularly when the sensors are part of an overall Building Management System (BMS) or network.
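To make the idea of calibration concrete, here is a minimal sketch in Python of a two-point linear calibration against a trusted reference instrument. The readings, drift behavior and function names below are hypothetical illustrations, not taken from any particular device or standard:

```python
# A minimal sketch of two-point linear calibration: co-locate the sensor
# with a trusted reference at a low and a high concentration, then derive
# a slope and offset that map raw readings onto reference values.
# All readings below are hypothetical.

def linear_calibration(raw_low, ref_low, raw_high, ref_high):
    """Derive slope and offset from two co-located reference measurements."""
    slope = (ref_high - ref_low) / (raw_high - raw_low)
    offset = ref_low - slope * raw_low
    return slope, offset

# Example: a drifting CO2 sensor reads 350 ppm when the reference shows
# 420 ppm, and 900 ppm when the reference shows 1000 ppm.
slope, offset = linear_calibration(350, 420, 900, 1000)

def correct(raw):
    """Apply the derived correction to a raw sensor reading."""
    return slope * raw + offset

print(correct(350))  # ≈ 420, matching the reference
```

A sensor that cannot be corrected this way, by hardware or software, has no defense against drift.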
Good data requires standards for monitors. For this reason, the RESET™ Standard for healthy buildings grades air quality monitors into three tiers based on long-term performance. The three tiers, A, B and C (Calibration grade, Commercial grade and Consumer grade), help users identify which monitor type best meets their needs.
The world of software is equally opaque when it comes to accuracy claims.
Here, a quasi-mystical belief persists that big data analytics can solve anything, particularly when it comes to correcting or enhancing data from low-quality sources. This couldn’t be further from the truth. That results are only ever as good as the inputs is basic science. The industry is currently rife with low-quality hardware being sold to unsuspecting users on the promise that algorithms, big data analytics and sensor networks can extract quality insights from it.
Let’s not be fooled. Connecting the brains of 100 chickens does not produce the intelligence and insight of an Einstein; it only produces the intelligence and insight of 100 chickens. Just as importantly, all data communication happens via software, which is responsible for the math behind how results are expressed, and that math often differs from one software platform to another. For example, calculating daily CO2 or PM (particulate matter) averages in offices based on operational hours versus 24-hour periods yields very different conclusions. Yet both are typically referred to as ‘daily averages’.
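The gap between the two ‘daily averages’ is easy to demonstrate. The sketch below uses hypothetical hourly CO2 readings for a single office day; the occupancy profile and values are illustrative only:

```python
from statistics import mean

# Illustrative hourly CO2 readings (ppm) for one day in an office.
# Values are hypothetical: concentrations climb while the space is
# occupied and fall back toward baseline overnight.
hourly_co2 = (
    [420] * 8                                                   # 00:00-07:59, unoccupied
    + [650, 800, 950, 1100, 1150, 1200, 1100, 1000, 900, 750]   # 08:00-17:59, occupied
    + [500] * 6                                                 # 18:00-23:59, unoccupied
)

def daily_average(readings, start_hour=0, end_hour=24):
    """Average the hourly readings falling within [start_hour, end_hour)."""
    return mean(readings[start_hour:end_hour])

full_day = daily_average(hourly_co2)           # 24-hour 'daily average'
work_hours = daily_average(hourly_co2, 8, 18)  # operational-hours 'daily average'

print(f"24-hour average:   {full_day:.0f} ppm")
print(f"Work-hour average: {work_hours:.0f} ppm")
```

With these numbers, the 24-hour figure sits hundreds of ppm below the work-hour figure, because the unoccupied night dilutes the occupied peak. Both can be honestly labeled ‘daily average’, which is exactly why the calculation window must be stated.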
Good data requires standards for how data is communicated at the level of each sensor, at what interval and over what time frame.
Finally, the location of sensors is just as important as the quality of the hardware. To inform and protect people and their health, sensors need to be located in areas representative of the air occupants actually breathe. Far too often, sensors are bundled with filtration systems, tracking the air quality produced by the filter at ceiling or floor level rather than the air being breathed several meters away, where people actually are. Good data requires standards for where monitors are installed.
The combination of data and reputable standards has the power to be transformational. At its core, it allows buildings and spaces to be compared consistently around the world. It also enables data to be shared, feeding the biomimetic design strategies we so obviously need, and it has the ability to dramatically accelerate research and advancements in healthy building science.
The perfect storm of events that propelled air quality onto center stage also has the potential to help us realize how much we don’t know. We are on the cusp of understanding IAQ: merely starting to make connections between the indoor environment and health; the breakthroughs, the truly world-shaping, revolutionary epiphanies, haven’t yet happened. We need to put these hypotheses and techniques into action. This is where our community of Architects, Designers, Engineers and Construction teams comes in.
Recommended further reading:
Part 1/3: Why Outdoor Air Pollution is Also an Indoor Issue
Part 3/3: DLR Group's Health Building Design Approach - Case Study