Data Quality Assurance


The Quality Assurance (QA) team at the IRIS DMC is tasked with monitoring the quality of the IRIS DMC seismic data archive and providing resources relating to data quality to the earth science community. On this page you will find information on general QA practices, MUSTANG metrics and PDFs, and links to QA related products and services at the DMC.

Quality Assurance Mission Statement

QA Related Web Services, Products, and Software

Web Services and Client Tools

MUSTANG, Modular Utility for Statistical Knowledge Gathering: seismic data quality metrics and Probability Density Functions (PDFs).


Synthetic Seismograms
Global ShakeMovie synthetics at the DMC
Global ShakeMovie synthetics at the DMC event listing

Envelope Functions

Calibration Products


IRISMustangMetrics, an R package on CRAN, is the official public release of the R code used by MUSTANG to calculate metrics.
ISPAQ (IRIS System for Portable Assessment of Quality), software for calculating MUSTANG-style seismic data quality metrics on a local machine.
QuARG (Quality Assurance Report Generator), software for network operators to find station problems using MUSTANG metrics and user-specified threshold values, examine the generated problem list using links to QA tools, document the issues in a ticketing system, and create a formatted HTML report from the tickets.

Other Resources

Data Problem Reports – search submitted data problem reports by station and network. This is not an exhaustive listing of all data problems that may exist in the archive.

QA of USArray Data

Citations and DOIs

To cite the MUSTANG system or reference the use of MUSTANG metrics:

  • Assuring the Quality of IRIS Data with MUSTANG
    Robert Casey, Mary E. Templeton, Gillian Sharer, Laura Keyson, Bruce R. Weertman, Tim Ahern
    Seismological Research Letters (2018) 89 (2A): 630-639.

Contact Us

There is a Message Center List available to provide a forum for QA discussion by the community (subscription required):

There is also a Message Center List for MUSTANG-related announcements (service outages, scheduled downtime, etc.):

Issues regarding Quality Assurance at the IRIS DMC can be addressed to this Message Center List:

IRIS DMC QA Staff can be contacted directly at:


  • (2020-11-18) We are pleased to announce the release of QuARG, the Quality Assurance Report Generator, available through the iris-edu GitHub repository. This utility creates a Quality Assurance report, intended for network operators who need to understand the health of the stations in their network. The report calls attention to underperforming or broken stations so that time and resources for improving the quality of the network can be prioritized.

QuARG is a Python-based utility that walks the user through the process of creating a quality assurance report. This process follows four broad steps:
1. It utilizes MUSTANG metrics available through our web services, or alternately metrics generated using ISPAQ (using ISPAQ 3.0, available soon), to find and highlight potential problems in the data by flagging days that exceed user-configurable threshold values. By using the pre-computed metrics, it reduces the amount of time that an analyst has to spend scanning the data for problems. It can also find issues that would otherwise go undetected by eye.
2. Users then analyze the list of potential issues to determine if these are data quality problems that should be included in the report. QuARG makes it easy to keep track of which issues have been investigated, keep notes on what the analyst has found, and link to a slew of QA tools, such as waveform plots, metric plots, and Probability Density Function (PDF) plots, to make it easier to understand the problem.
3. From there, users create tickets that describe the problem. Tickets can be created in QuARG, or in an external ticketing system if the analysts have one that they already use.
4. These tickets, which track problems as they arise and can be updated when they are fixed, are then used to create a nicely formatted HTML quality assurance report.
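The threshold-flagging step above can be sketched in a few lines of Python. This is a minimal illustration, not QuARG's actual code: the station names, metric values, and threshold settings below are all hypothetical stand-ins for values an analyst would fetch from the MUSTANG measurements service or compute with ISPAQ.

```python
# Hypothetical daily metric values, as a stand-in for measurements fetched
# from the MUSTANG web services or generated locally with ISPAQ.
daily_metrics = [
    # (station, metric, date, value)
    ("ANMO", "num_gaps", "2020-10-01", 0),
    ("ANMO", "num_gaps", "2020-10-02", 12),
    ("ANMO", "percent_availability", "2020-10-02", 71.5),
    ("COLA", "percent_availability", "2020-10-01", 99.9),
]

# User-configurable thresholds: metric name -> (comparison, limit).
thresholds = {
    "num_gaps": (">", 10),             # flag days with more than 10 gaps
    "percent_availability": ("<", 95), # flag days under 95% availability
}

def flag_problems(metrics, thresholds):
    """Return the (station, metric, date, value) rows that exceed a threshold."""
    flagged = []
    for station, metric, date, value in metrics:
        if metric not in thresholds:
            continue
        op, limit = thresholds[metric]
        exceeded = value > limit if op == ">" else value < limit
        if exceeded:
            flagged.append((station, metric, date, value))
    return flagged

for row in flag_problems(daily_metrics, thresholds):
    print(row)
```

The flagged rows form the "potential problem" list that an analyst would then investigate in step 2 before opening tickets.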

Full documentation can be read at

  • (2020-06-23) We are pleased to announce the addition of three new metrics to MUSTANG: max_range, sample_rate_channel, and sample_rate_resp. We are now computing these metrics for incoming data and have begun the process of calculating values back through the data archive. Full descriptions of these metrics can be found at these links:

max_range: calculates the difference between the largest and smallest sample value in a 5-minute rolling window and returns the largest value encountered within a 24-hour timespan.

sample_rate_channel: A boolean measurement that returns 0 if miniSEED and channel sample rates agree within 1%, or 1 if they disagree.

sample_rate_resp: A boolean measurement that returns 0 if miniSEED and response-derived sample rates agree within 15%, or 1 if they disagree.
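The three metric definitions above can be approximated as follows. This is a simplified sketch, not the DMC's implementation: the window handling (a stride parameter instead of a true sample-by-sample rolling window) and the synthetic trace are illustrative assumptions.

```python
def max_range(samples, sample_rate, window_sec=300, hop=1):
    """Largest (max - min) over rolling windows of window_sec seconds.

    Simplified sketch of the max_range metric; hop sets the window stride.
    """
    n = int(window_sec * sample_rate)
    n = min(n, len(samples))  # short traces: use the whole trace
    best = 0
    for start in range(0, len(samples) - n + 1, hop):
        window = samples[start:start + n]
        best = max(best, max(window) - min(window))
    return best

def sample_rate_channel(mseed_rate, channel_rate, tol=0.01):
    """0 if miniSEED and channel metadata rates agree within 1%, else 1."""
    return 0 if abs(mseed_rate - channel_rate) <= tol * channel_rate else 1

def sample_rate_resp(mseed_rate, resp_rate, tol=0.15):
    """0 if miniSEED and response-derived rates agree within 15%, else 1."""
    return 0 if abs(mseed_rate - resp_rate) <= tol * resp_rate else 1

# Tiny synthetic trace at 1 Hz; shorter than one 5-minute window,
# so max_range reduces to the full-trace peak-to-peak value.
trace = [0, 5, -3, 12, 7, -8, 2]
print(max_range(trace, sample_rate=1))   # 12 - (-8) = 20
print(sample_rate_channel(40.0, 40.0))   # 0: rates agree exactly
print(sample_rate_resp(40.0, 50.0))      # 1: 25% apart, beyond the 15% tolerance
```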

  • (2020-01-14) We have released a new MUSTANG Databrowser version that has the following improvements:
    1. The drop-down network-station-location-channel lists now include experiments stored in our PH5-format archive. We have good metric measurement coverage for PH5 experiments from 2017-2019 and we are working backwards in time to calculate metrics for experiments from 2005-2016.
    2. The “gap duration” plot type now has a fourth plot added. The plots included in this view are: gap duration vs. date, gap duration vs. time of day, a histogram of gap durations, and number of gaps vs. time.
    3. The PDF noise mode spectrogram plot now has the same default color scale limits (plot.powerscale.autorange=0.9) as the spectrogram web service.

Frequently Answered Questions

There are currently 5 Quality Assurance-related questions:


There are currently 4 Quality Assurance tutorials:
