Figure 1. Standard deviation
This is the second blog post (out of three) on the subject of calibration uncertainty. In the first blog post we discussed the basics: why it is important to be aware of uncertainty, some short terminology, and the classic “piece of string” example. If you missed the Part 1 blog post, you can find it via the link below. If you want all the parts at once, feel free to download the related full white paper from the link below.
Standard deviation – one important uncertainty component
There are several uncertainty components making up the total uncertainty. The standard deviation of the measurement is one important component, so let’s discuss that next.
The first simple, yet good, practice: although you may normally make a measurement/calibration only once, try instead to repeat the same measurement several times. You will most likely discover small differences between the repeats. But which measurement is the correct one? Without diving too deep into statistics, we can say that it is not enough to measure only once. If you repeat the same measurement several times, you can calculate the average and the standard deviation of the measurement, and so learn how much the results typically differ between repeats. It is suggested to repeat a measurement multiple times, even up to ten times, for the calculated standard deviation to be statistically reliable enough. Uncertainty components that you obtain by calculating the standard deviation of repeated measurements are called Type A uncertainty. You may say: What?? Always repeating the same measurement ten times is just not possible in practice!
Luckily you don’t always need to make ten repeats, but you should still experiment with your measurement process by sometimes repeating the same measurement several times. This will tell you what the typical deviation of that whole measurement process is and you can use this knowledge in the future as an uncertainty component related to that measurement, even if you just make the measurement once during your normal calibration.
Imagine that you perform a temperature measurement/calibration multiple times and learn that there is a ±0.2 °C difference between the repeats. The next time you make the same measurement, even if you make it just once, you will be aware of this possible ±0.2 °C difference, so you can take it into account and not let the measurement get too close to the acceptance limit. If you keep calibrating similar kinds of instruments over and over again, it is often enough to make the measurement just once and use the typical experimental standard deviation. In summary, you should always be aware of the standard deviation of your calibration process – it is one important part of the total uncertainty.
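The repeated-measurement idea above can be sketched in a few lines of Python. The readings here are hypothetical example values for a single temperature calibration point, not data from the white paper; the experimental (sample) standard deviation is what you would store as your Type A component.

```python
import statistics

# Hypothetical repeat readings at one temperature calibration point, in °C
repeats = [20.12, 20.08, 20.15, 20.10, 20.09, 20.13, 20.11, 20.07, 20.14, 20.10]

mean = statistics.mean(repeats)
# Experimental (sample) standard deviation, with n - 1 in the denominator
std_dev = statistics.stdev(repeats)
# Standard deviation of the mean, relevant when you report the average of the repeats
std_dev_of_mean = std_dev / len(repeats) ** 0.5

print(f"mean = {mean:.3f} °C")
print(f"experimental standard deviation = {std_dev:.3f} °C")
print(f"standard deviation of the mean = {std_dev_of_mean:.4f} °C")
```

Once you have characterized a measurement process this way, you can reuse its typical experimental standard deviation as an uncertainty component even on days when you measure only once.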
Your reference standard (calibrator) and its traceability
Often, one of the biggest sources of uncertainty is the reference standard (or calibrator) that you are using in your measurements/calibrations. Naturally, to start with, you should select a suitable reference standard for each measurement. It is also important to remember that it is not enough to take the manufacturer’s accuracy specification for the reference standard and keep using that as its uncertainty for years. Instead, you must have your reference standards calibrated regularly in a calibration laboratory that has sufficient capabilities (a small enough uncertainty) to calibrate the standard and make it traceable. Pay attention to the total uncertainty that the laboratory documented for the calibration of your reference standard. You should also follow the stability of your reference standards between their regular calibrations. Over time, you will learn the true uncertainty of your reference standards and can use that information in your calibrations.
Other sources of uncertainty
In the white paper you can find more detailed discussion on the other sources of uncertainty.
To summarize briefly, these include:
- Device under test
- Reference standard (calibrator)
- Method/process for making the measurements/calibrations
- Environmental conditions
- The person(s) making the measurements
- Additional uncertainty components depending on the quantity being measured/calibrated
All of the above listed uncertainty components are referred to as Type B uncertainty.
Summary - key take-outs from the white paper
To learn more about the subject, please take a look at the related white paper.
Here is a short list of the key take-outs from the white paper:
- Be sure to distinguish “error” and “uncertainty”
- Experiment by making multiple repeats of measurements to gain knowledge of the typical deviation
- Use appropriate reference standards (calibrators) and make sure they have valid traceability to national standards and that the uncertainty of their calibration is known and suitable for your applications
- Consider whether the environmental conditions have a significant effect on the uncertainty of your measurements
- Be aware of the readability and display resolution of any indicating devices
- Study the specific important factors of the quantities you are calibrating
- Familiarize yourself with the “root sum of the squares” method to add independent uncertainties together
- Be aware of the coverage factor / confidence level / expanded uncertainty of the uncertainty components
- Instead of, or in addition to, the TUR/TAR ratio, strive to be more aware of all the related uncertainties
- Pay attention to the total uncertainty of the calibration process before making pass/fail decisions
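Two of the take-outs above, the “root sum of the squares” method and the coverage factor, can be illustrated with a small sketch. The component names and values below are hypothetical examples, not figures from the white paper; the method assumes the components are independent standard (1-sigma) uncertainties.

```python
# Hypothetical standard uncertainty components (1 sigma), in °C
components = {
    "reference standard": 0.05,
    "repeatability (Type A)": 0.02,
    "resolution of device under test": 0.03,
    "environmental conditions": 0.01,
}

# Root sum of the squares: combine independent standard uncertainties
combined = sum(u ** 2 for u in components.values()) ** 0.5

# Expanded uncertainty with coverage factor k = 2 (roughly 95 % confidence)
k = 2
expanded = k * combined

print(f"combined standard uncertainty = {combined:.3f} °C")
print(f"expanded uncertainty (k={k}) = {expanded:.3f} °C")
```

Note how the combined result is dominated by the largest component: reducing a small component barely changes the total, which is why it pays to identify and attack the biggest contributors first. The expanded uncertainty is the figure to compare against your acceptance limits before making pass/fail decisions.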