Calibration uncertainty for dummies – Part 3: Is it Pass or Fail?

Posted by Heikki Laurila on Jan 4, 2017 1:07:39 PM

 

In this post we discuss the following scenario: you have done the calibration, you have the results on the certificate, and you compare the results to the tolerance limits. It's time to pop the big question: is it a Passed or a Failed calibration? Or, is the instrument In or Out of Tolerance?

This is the third and final post in this three-part series on calibration uncertainty. If you missed the earlier posts, you can find them via the links below, or you can get all of the information by downloading the related white paper.

Measurement Uncertainty: Calibration uncertainty for dummies - Part 1

Calibration uncertainty for dummies - Part 2: Uncertainty Components


 

Compliance statement – Pass or Fail

Most often when you calibrate an instrument, it has predefined tolerance limits that it has to meet. Sure, you may do some calibrations without tolerance limits, but in the process industry the tolerance limits are typically set in advance. Tolerance limits indicate the maximum amount by which the result may differ from the true value. If all the errors in the calibration result are within the tolerance limits, it is a Passed calibration; if any of the errors are outside the tolerance limits, it is a Failed calibration. This sounds pretty simple, like basic mathematics. How hard can it be?

It is, in any case, important to remember that it is not enough to take only the error into account; you must also take the total uncertainty of the calibration into account!

Taking the uncertainty into account makes it a whole different ball game. As discussed in the white paper, there are many sources of uncertainty. Let's go through some examples next.

 

Example

Let's say the process transmitter you are about to calibrate has a tolerance of ± 0.5 % of its measurement range. During the calibration you find that the biggest error is 0.4 %, so this sounds like a Pass, right? But what if the calibrator that was used has an uncertainty specification of ± 0.2 %? Then the 0.4 % result could actually be either a pass or a fail; it is impossible to know which. Moreover, in any calibration you also have uncertainty from many other sources, such as the standard deviation of the results, repeatability, the calibration process itself, environmental conditions, and others. When you estimate the effect of all these uncertainty components, it becomes even more likely that the example calibration was in fact a fail, although it looked like a pass at first.
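The reasoning above can be sketched numerically. The figures are the ones from the example (tolerance ± 0.5 % of range, measured error 0.4 %, calibrator uncertainty ± 0.2 %); the variable names are illustrative:

```python
tolerance = 0.5    # tolerance limit, % of range
error = 0.4        # biggest observed calibration error, % of range
uncertainty = 0.2  # calibrator uncertainty specification, % of range

# With the calibrator uncertainty, the true error could lie anywhere
# in this interval:
low, high = error - uncertainty, error + uncertainty
print(f"True error lies somewhere in [{low:.1f} %, {high:.1f} %]")

# The interval [0.2 %, 0.6 %] straddles the 0.5 % limit, so neither
# Pass nor Fail can be stated with confidence:
print("Conclusive pass:", high <= tolerance)  # False
print("Conclusive fail:", low > tolerance)    # False
```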

 

Example – different cases

Let's look at a graphical illustration of the next example to make this easier to understand. In the picture below, four calibration points have been taken, with the diamond shape representing the actual calibration result. The bar above and below each result indicates the total uncertainty of that calibration. The tolerance limit is marked with a line in the picture.

We can interpret the different cases shown above as follows:

  • Case 1: This is pretty clearly within the tolerance limits, even when the uncertainty is taken into account, so we can state this as a “Pass” result.
  • Case 4: This is also a pretty clear case. The result is outside the tolerance limits even when the uncertainty is taken into account, so this is clearly a “Fail” result.
  • Case 2 and Case 3: These cases are more difficult to judge. If you ignore the uncertainty, the result in case 2 appears to be within tolerance while in case 3 it appears to be outside. But once the uncertainty is taken into account, we can't really state either with confidence.

There are guidelines (for example, ILAC G8:1996 – Guidelines on Assessment and Reporting of Compliance with Specification, and the EURACHEM/CITAC Guide: Use of Uncertainty Information in Compliance Assessment, First Edition, 2007) on how to state the compliance of a calibration. These guides suggest stating a result as passed only when the error plus the uncertainty is less than the tolerance limit, and stating it as failed only when the error minus the uncertainty is still greater than the tolerance limit. When the result is closer to the tolerance limit than half of the uncertainty, it is suggested to call the situation “undefined”, i.e. you should state it as neither pass nor fail.
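The decision rule described by these guides can be sketched as a small function. This is a simplified illustration in the spirit of ILAC G8, not a verbatim implementation of the guide, and the numbers in the usage lines are invented to mirror the four cases in the picture:

```python
def compliance_statement(error, uncertainty, tolerance):
    """Three-way compliance decision: 'pass' only when the error plus
    the uncertainty stays inside the tolerance limit, 'fail' only when
    even the error minus the uncertainty is outside it, and
    'undefined' in between."""
    if abs(error) + uncertainty <= tolerance:
        return "pass"
    if abs(error) - uncertainty > tolerance:
        return "fail"
    return "undefined"

# Illustrative numbers resembling the four cases in the picture
# (errors in % of range, uncertainty 0.15 %, tolerance 0.5 %):
print(compliance_statement(0.10, 0.15, 0.5))  # pass      (case 1)
print(compliance_statement(0.45, 0.15, 0.5))  # undefined (case 2)
print(compliance_statement(0.55, 0.15, 0.5))  # undefined (case 3)
print(compliance_statement(0.80, 0.15, 0.5))  # fail      (case 4)
```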

Over the years we have seen people interpret the uncertainty and the pass/fail decision in many different ways. In practice, the uncertainty is often not taken into account in the pass/fail decision, but it is nevertheless very important to be aware of the uncertainty when making the decision.

 

Example – different uncertainty

The next situation to illustrate with a picture is when the total uncertainty is not always the same. Cases 1 and 2 have the same measurement result, so without uncertainty we would consider them to be equally good measurements. But when the uncertainty is taken into account, we can see that case 1 is actually terrible: the uncertainty is simply too large for this measurement with the given tolerance limits. Looking at cases 3 and 4, case 3 seems better, but with the uncertainty included we can see that it is not good enough for a pass statement, while case 4 is.

Again, I want to point out that we need to know the uncertainty before we can judge a measurement result. Without the uncertainty calculation, cases 1 and 2 above look similar, although with the uncertainty taken into account they are very different.

 

TUR / TAR ratio vs. uncertainty calculation

The TUR (test uncertainty ratio), or TAR (test accuracy ratio), is often mentioned in various publications. In short, if you want to calibrate a 1 % instrument with a 4:1 ratio, your test equipment should be four times more accurate, i.e. have an accuracy of 0.25 % or better. Some publications suggest that with a large enough TUR/TAR ratio there is no need to worry about uncertainty estimation or calculation; the most commonly used ratio is 4:1, and some guides and publications give recommendations for it. Most often the ratio is used as in the example above, i.e. simply to compare the specification of the DUT (device under test) against the manufacturer's specification of the reference standard. But in that scenario you consider only the reference standard (test equipment, calibrator) specification and neglect all the other related uncertainties. While this may be “good enough” for some calibrations, this approach does not take all uncertainty sources into account. So it is highly recommended to perform the uncertainty evaluation/calculation for the whole calibration process.
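A quick sketch can show why a spec-only ratio tends to be optimistic. The 1 % and 0.25 % figures are from the text; the extra repeatability and environment components are invented for illustration:

```python
import math

dut_spec = 1.0         # DUT accuracy specification, % of range
reference_spec = 0.25  # calibrator accuracy specification, % of range

# Spec-only TAR, as commonly quoted:
spec_only_tar = dut_spec / reference_spec
print(f"Spec-only TAR: {spec_only_tar:.1f}:1")  # 4.0:1, looks fine

# But the reference spec is only one uncertainty component. Adding
# hypothetical repeatability (0.15 %) and environment (0.10 %) terms
# via root sum of squares shrinks the effective ratio:
total_uncertainty = math.sqrt(0.25**2 + 0.15**2 + 0.10**2)
effective_ratio = dut_spec / total_uncertainty
print(f"Effective ratio: {effective_ratio:.1f}:1")  # noticeably below 4:1
```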

We also get asked quite regularly: “How many times more accurate should the calibrator be compared to the device to be calibrated?” While some rules of thumb could be given, there isn't really a single correct answer to that question. Instead, you should be aware of the total uncertainty of your calibrations. And of course, it should reflect your needs!

 

Summary - key takeaways from the white paper

To learn more about the subject, please take a look at the related white paper. Here is a short list of the key takeaways from it:

  • Be sure to distinguish between “error” and “uncertainty”
  • Experiment with multiple repeated measurements to learn the typical deviation
  • Use appropriate reference standards (calibrators) and make sure they have valid traceability to national standards and that the uncertainty of their calibration is known and suitable for your applications
  • Consider whether the environmental conditions have a significant effect on the uncertainty of your measurements
  • Be aware of the readability and display resolution of any indicating devices
  • Study the specific important factors of the quantities you are calibrating
  • Familiarize yourself with the “root sum of the squares” method for adding independent uncertainties together
  • Be aware of the coverage factor, confidence level, and expanded uncertainty of the uncertainty components
  • Instead of, or in addition to, the TUR/TAR ratio, strive to be aware of all the related uncertainties
  • Pay attention to the total uncertainty of the calibration process before making pass/fail decisions
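The root-sum-of-squares combination and the coverage factor mentioned in the list can be sketched together. The component values below are hypothetical, and k = 2 is the commonly used coverage factor for roughly 95 % confidence:

```python
import math

# Hypothetical independent standard uncertainty components, % of range:
components = {
    "reference standard": 0.10,
    "repeatability":      0.05,
    "resolution":         0.03,
    "environment":        0.04,
}

# Root sum of squares of the independent components:
combined = math.sqrt(sum(u**2 for u in components.values()))

# Expanded uncertainty with coverage factor k = 2 (~95 % confidence):
expanded = 2 * combined

print(f"Combined standard uncertainty: {combined:.3f} % of range")
print(f"Expanded uncertainty (k=2):    {expanded:.3f} % of range")
```

Note that the combined value (about 0.122 %) is only slightly larger than the biggest single component (0.10 %): independent uncertainties do not add linearly, which is exactly why the root-sum-of-squares method is used.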


 

Best regards,
Heikki

 

 

 

Topics: calibration uncertainty
