Beamex Blog

Beamex blog provides insightful information for calibration professionals, technical engineers, and potential and existing Beamex users. The blog posts are written by Beamex’s own calibration and industry experts or by guest writers invited by Beamex.


Calibration Process Savings Calculator [Online calculator]

Posted by Heikki Laurila on Aug 23, 2024


Are you ready to uncover potential savings within your calibration processes?

In this blog, we’re introducing our new Calibration Process Savings Calculator - a powerful online tool designed to help you estimate how much you could save by upgrading to a modern, digitalized calibration ecosystem.

With decades of experience working with customers globally, we've honed our expertise to identify areas where significant savings can be achieved. A highly effective calibration ecosystem not only saves you time but also reduces operational (OPEX) costs. 

Once you complete the calculator, you'll receive an estimated monetary savings for both 1-year and 5-year periods. 

Naturally, a modern calibration ecosystem offers numerous other benefits beyond just time and money savings.

Access the Calibration Process Savings Calculator here >>

 

How does the calculator work?

The calculator asks questions covering the whole calibration process: the number of instruments and calibrations, work order generation, process instrument data management, transmitter types, calibration procedure management, scheduling of calibrations, calibration execution, documenting of calibrations, managing the calibration results, and so on.

For each question, you can select from predefined answer options. Based on your inputs, the calculator estimates your potential monetary savings. In the end, these savings are summed up to give you a clear picture of your total potential savings for 1 and 5 years.
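To make the logic concrete, here is a minimal sketch of the idea in Python. It is not the actual calculator: the questions and euro figures below are invented for illustration, and the real tool uses its own answer options and savings estimates.

```python
# Illustrative sketch only - the real calculator's questions and savings
# figures differ; the numbers below are invented for this example.

# Estimated annual savings (in euros) derived from each answer, for example
# hours saved per year multiplied by an hourly cost.
estimated_annual_savings = {
    "automated work order generation": 4_000,
    "paperless calibration documentation": 6_500,
    "automated calibration execution": 8_000,
}

total_per_year = sum(estimated_annual_savings.values())
print(f"Estimated savings over 1 year:  {total_per_year:,} EUR")
print(f"Estimated savings over 5 years: {5 * total_per_year:,} EUR")
```

The principle is the same in the real calculator: each answer maps to an estimated saving, and the totals are extrapolated over one and five years.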

Give the calculator a try!

 

Additional benefits


Upgrading to a modern calibration ecosystem offers numerous advantages beyond financial savings, including:

  • Reduced risk of human error related to manual work
  • Improved quality and integrity of calibration data
  • Easier regulatory compliance and audit readiness
  • Increased safety and well-being of your employees
  • Enhanced process efficiency and reduced downtime

In many cases, these benefits can be more crucial than the savings themselves and might be the primary reason for updating your calibration ecosystem.

 

What customers say

 

 

Many companies have found a better way with Beamex – read their success stories here.

 

Contact our experts

Want to discuss your results and learn how to achieve these savings? Our calibration experts are here to help. Please contact us for more information.

Contact our experts >>

 


 

 

 

 

Topics: Calibration, Calibration process, Calibration software, Calibration management

Revolutionizing calibration services with technology - case story [webinar]

Posted by Heikki Laurila on Jul 04, 2024

In today's fast-paced world, managing calibration services efficiently is critical. At Douglas Calibration Services, they faced a significant challenge: a mountain of paperwork that not only slowed them down but also caused immense stress among their technicians.

During a recent webinar, Douglas Calibration shared their transformative journey from a paper-based system to a fully digital, cloud-based solution with Beamex LOGiCAL Calibration Management Software.

In this blog post, we’ll delve into the highlights of that webinar and how innovative software helped them reduce stress, improve efficiency, and achieve remarkable results. Below you can find an executive summary of the webinar and a link to the full webinar recording.

Watch the full webinar recording >>

 

Webinar agenda:

  • 0:00 - Aidan Farrelly from the Beamex UK office opens the webinar, introduces the agenda and presenters, and discusses the unique challenges facing calibration service companies.
  • 10:20 - Case story: Richard O’Meara from Douglas Calibration shares their success story.
  • 31:20 - Ville Lassila from Beamex Customer Success runs an online demonstration of Beamex LOGiCAL Calibration Management Software.
  • 41:30 - Antti Mäkynen, Product Manager at Beamex, discusses feedback from service companies and the roadmap ahead.
  • 57:30 - Q&A session
  • 1:21:30 - End of recording

 

The Challenge: Overwhelmed by Paperwork

Douglas Calibration Services faced significant challenges with their traditional paper-based system. With over 90 technicians and more than 130 clients, the company was drowning in paperwork. Technicians struggled with manual data entry, duplication of work, and the physical handling of documents, leading to stress, inefficiency, and errors. The primary issues included:

  • Stress and Staff Retention: Technicians were overwhelmed by the backlog of paperwork, leading to burnout and high turnover rates.
  • Efficiency and Compliance: The manual process was time-consuming and error-prone, affecting the company's compliance and efficiency.
  • Client Satisfaction: Delays in processing and delivering results frustrated clients, impacting overall satisfaction.

 


 

The Solution: Transitioning to Digital with Beamex LOGiCAL

Recognizing the need for change, Douglas Calibration Services embarked on a digital transformation journey. In 2013, they developed an initial access database to manage calibrations, which laid the groundwork for more advanced solutions. By 2022, they implemented Beamex LOGiCAL, a comprehensive software solution that revolutionized their operations:

  • Instant Data Access: LOGiCAL provided instant access to all calibration data, reducing delays and errors.
  • Seamless Collaboration: The platform facilitated better collaboration among technicians and office staff.
  • Paperless Operations: Transitioning to digital reports eliminated the need for physical paperwork.
  • Improved Efficiency: Automation of processes resulted in faster turnaround times and enhanced productivity.

 

Impact and Results

The implementation of LOGiCAL brought about significant improvements in various aspects of Douglas Calibration Services’ operations:

  • Staff Workload Reduction: The digital system reduced technicians' workload by 40%, allowing them to leave work on time without pending tasks.
  • Enhanced Compliance: Compliance improved dramatically, with error rates dropping from 3.5% to 1.2%, and the goal is to achieve less than 1%.
  • Client Satisfaction: Clients began receiving same-day results, increasing satisfaction and trust in the company’s services.
  • Operational Efficiency: Internal report reviews increased by 800%, and office staff could handle all digital calibration reports efficiently.
  • Positive Feedback: There was a significant rise in positive feedback from clients, reflecting the enhanced service quality.

 


 

Key Takeaways

The journey of Douglas Calibration Services from a paper-based system to a fully digital, efficient operation offers several key lessons:

  1. Embrace Technology: Implementing the right software can transform operations, reduce stress, and improve efficiency.
  2. Focus on Staff Well-being: Reducing workload and improving processes can significantly enhance staff retention and satisfaction.
  3. Client-Centric Approach: Faster and more accurate service delivery boosts client satisfaction and loyalty.
  4. Continuous Improvement: Regular audits and validations ensure ongoing compliance and process improvements.

 

Conclusion

Douglas Calibration Services’ successful transition to a digital system with LOGiCAL showcases the immense potential of technology in streamlining operations and enhancing service quality. By addressing their challenges head-on and adopting innovative solutions, they set a benchmark for the calibration industry. Software, indeed, became an invaluable tool in their quest for efficiency and excellence.

 

Watch full webinar recording

Thank you for taking the time to read about our journey. For a more in-depth look, watch our full webinar recording. We hope our experience inspires other businesses facing similar challenges to explore innovative solutions and improve their workflows. You can read the Douglas Calibration case story here.

Watch the full webinar recording >>

 

You could save too!

As you saw, Douglas Calibration Services has significantly benefited from adopting Beamex LOGiCAL Calibration Management Software, particularly in terms of time and money savings. The transition to a digital system has streamlined processes, reduced errors, and enhanced overall efficiency. To see how much your organization could save with similar improvements, try our Calibration Savings Calculator.

 

Take the next step

If you want to discuss how calibration technology could revolutionize your calibration processes, talk to our calibration experts. They're here to help you. Please contact us for more information.

Contact our experts >>

Learn more on Beamex LOGiCAL Calibration Management Software >>

Request a demo to experience Beamex LOGiCAL conveniently in an online meeting.

 

More webinars

View our online webinar library here >>

 

 

Topics: Calibration process, Calibration software, Calibration management, Digitalization

How to get your boss to buy you a new calibrator

Posted by Heikki Laurila on Jun 19, 2024


When you work with something, it's so much easier if you have the proper tools, right?

The same goes for calibration – if calibration is your job, you want the best tools to make your work easier and help you get more done. Modern calibrators ensure your calibrations are accurate, give you less to carry, are easy to use, offer automated functions, and so on.

However, when you ask your boss to buy you a new calibrator, you need good arguments. Often, what is important to you may not be as important to your boss. So, you need to be clever and speak “boss language”, presenting the arguments that are important to your boss!

In this blog, I look at how you can talk to your boss to convince them to get you that new, shiny calibrator. Let's dive in and unlock the secrets to getting that "yes!"

 

First, let’s look at the needs of calibration technicians. Then, I list some of the things that typically matter to bosses and managers - the decision makers. Finally, I’ll discuss how you should present your arguments to your boss to get approval for buying your new calibrator.

 

What matters to calibration technicians

Let’s briefly look at the things that normally matter the most to the people who use the calibrators. Often, they are calibration technicians or calibration engineers.

  • Less to carry – The calibrator should be multifunctional, so that you don’t need to carry several separate tools with you out in the field.
  • Easy to use – You need to do many different jobs and use many different systems and tools, so the calibrator should be easy to learn and use. You don’t necessarily use the calibrator every day, so it has to be quick to pick up again.
  • Accurate – Good accuracy is naturally a must-have. You can’t calibrate and adjust field instruments properly if your calibrator is not accurate enough. Field instruments keep getting more accurate, so your calibrators should too.
  • Automation – If your calibration tools can automate part of your work, that is a great time saver.
  • Automatic documentation – Since you need to document the calibration you do, it is great if the calibrator can do the documentation automatically so you don’t need to play with pen and paper.

 

What matters to the bosses/managers

Of course, the priorities and well-being of calibration technicians are important to any manager. Still, the things that matter most to managers are usually different from what matters to technicians.

Typically, the following things are important to managers:

  • Costs and ROI – Making sure that the operational costs do not exceed budgets. And that any new investments provide a good return on investment (ROI).
  • Productivity and efficiency – Doing more with less. There seem to be fewer resources everywhere, but you still need to get more and more done.
  • Digitalization – It is very difficult to find a plant these days that doesn’t have digitalization initiatives ongoing.
  • Reliability and downtime reduction – Making sure that processes run reliably, and any downtime is minimal.
  • Regulatory compliance – It’s important to ensure that processes are compliant with all relevant standards and regulations.
  • Safety and risk management – The safety of workers (and customers) and risk management are important.
  • Training and skills development – Employees need to be trained to make sure their skills stay up to date.
  • Data quality and integrity – The quality and integrity of calibration data needs to be ensured.
  • Sustainability and environment – Environmental considerations such as waste and energy reduction and effluent monitoring need to be taken into account in operations.

 

How to convince your boss

As you saw, there are some differences between the things that matter to technicians and the things that matter to bosses. So how should you approach the discussion with your boss to get approval to buy your new calibrator?

Obviously, you still have the reasons that matter most to you, but you need to focus on the things that are important to your boss.

Your discussion topics could include the following:

  • Productivity and efficiency – Highlight that the automated features of the new calibrators make you and your team more productive and efficient, so you get your job done faster and better.
  • Data quality and integrity – Using modern documenting calibrators will automate documentation, not only making your job more efficient, but also improving the quality of data. This is because it reduces the human errors always present in manual documentation.
  • Costs and ROI – Although new calibrators always come with a price tag, improved efficiency ensures a good ROI and a short payback time (see the simple payback sketch after this list).
  • Digitalization – New modern documenting calibrators take the first important steps towards digitalizing your calibration processes. Every boss loves digitalization! In the future, you can combine those documenting calibrators with calibration management software to digitalize your calibration ecosystem and turn it paperless. Down the road, your calibration software can be connected to your CMMS system to also digitalize and automate your work order delivery.
  • Regulatory compliance – A digitalized calibration ecosystem makes it easier to comply with quality standards and regulations. It also makes any audits so much easier.
  • Training and skills development – New calibrators with a modern user interface are easier to use and easier for new workers to learn.
  • Safety and risk management – In case you need to work in hazardous areas, having intrinsically safe calibrators will make it so much safer to work. It also makes it more efficient as you don’t need to work with hot work permits and carry gas detectors like you do with regular calibrators.
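
To back up the ROI argument, a quick back-of-the-envelope payback calculation often helps. The figures below are purely hypothetical – plug in your own calibrator price, hours saved, and hourly cost.

```python
# Purely hypothetical figures - replace them with your own numbers.
calibrator_price = 5_000       # one-time investment, EUR
hours_saved_per_year = 150     # time saved through automation and automatic documentation
hourly_cost = 60               # fully loaded cost of one technician hour, EUR

annual_savings = hours_saved_per_year * hourly_cost   # 9,000 EUR per year in this example
payback_years = calibrator_price / annual_savings     # roughly 0.6 years
print(f"Annual savings: {annual_savings} EUR, payback in about {payback_years:.1f} years")
```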

 

Summary

So, there you have it! Getting your boss to buy a new calibrator isn't about listing all the cool features you want. It's about understanding what matters to them and framing your arguments in a way that speaks to their priorities. By focusing on the right things, you'll make a strong case for why this investment makes sense for the whole team. Use these tips and you’ll be able to speak your boss's language, bringing you one step closer to working with that shiny (hopefully green) new calibrator. Good luck! Please let me know how it goes!

If you are ready for some commercial content, please read on.

 

Discover your potential savings!

Convincing your boss to invest in a new calibrator is easier when you can demonstrate significant time and cost savings. Use our Calibration Savings Calculator to see how much your organization can save. By inputting a few details about your current calibration processes, you can uncover the potential financial benefits and efficiency improvements.

Start calculating your savings now and make a compelling case for your new calibrator!

Access Calibration Process Savings Calculator >>

 

Look no further for the new calibrator!

So where do you find that dream calibrator that ticks all the boxes listed above?

Well, I’m glad you asked! :-)

Check out the Beamex MC6 family of calibrators – a series of advanced, truly multifunctional calibrators designed to digitalize and revolutionize your calibration work!

The Beamex MC6 family includes:

  • MC6 Advanced Field Calibrator and Communicator: The all-in-one solution for versatile field calibration. It combines advanced process calibration functionality with a built-in communicator, making it ideal for on-the-go calibration tasks. Its high accuracy and robust design ensure reliable performance in various field conditions. 
  • MC6-Ex Intrinsically Safe Advanced Field Calibrator and Communicator: Safe and reliable for hazardous areas. Designed to meet stringent safety standards, it ensures precise calibration in potentially explosive environments. The MC6-Ex is indispensable for industries requiring strict safety protocols. 
  • MC6-T Multifunction Temperature Calibrator and Communicator: Specialized in precise temperature calibration. It offers unique features for accurate automated temperature measurements, making it indispensable for temperature-critical applications. Its multifunctionality and ease of use make it a valuable tool for any calibration task. 
  • MC6-WS Workshop Calibrator and Communicator: Optimized for comprehensive workshop calibration. It provides extensive calibration capabilities in a stationary setup, making it perfect for detailed and routine calibration tasks in the workshop. Its high accuracy and automated features enhance efficiency and reliability. 


Beamex MC6 family of calibrators

 

Common key features and benefits of the MC6 family

  • Multifunctionality: Calibrate pressure, temperature, electrical signals, and more.
    • Your benefit: Reduces the need for multiple devices, simplifying your toolkit and saving space. Carry less!
  • High accuracy: Ensure precise calibration with industry-leading performance.
    • Your benefit: Achieves reliable and consistent results, meeting rigorous industrial standards.
  • User-friendly interface: Navigate effortlessly with an intuitive touchscreen.
    • Your benefit: Saves time and reduces training requirements, making it easier for technicians to operate.
  • Documentation: Automatically document your calibrations.
    • Your benefit: Streamlines compliance and reporting processes, reducing manual data entry and potential errors.
  • Durable design: Built to withstand demanding environments.
    • Your benefit: Increases longevity and reliability, providing a robust solution for field and workshop use.

 

Calibration Management Software

Combine an MC6 family calibrator with our calibration management software for a fully digitalized and paperless calibration ecosystem.



Beamex calibration software

 

Ready to take the next step?

Ready to take the next step and upgrade your calibration process? Here are some ways to get started:

  • Book a meeting: Schedule a meeting with our experts to discuss your specific calibration needs and find the best solutions.
  • Request a demo: Experience the MC6 family in action by requesting a live or online demo.
  • Contact us: Reach out to our team for any inquiries or to get a personalized quote.

 

 

 

Topics: Calibration, Calibrator

Hysteresis in pressure calibration: What you need to know

Posted by Heikki Laurila on May 23, 2024


 

Pressure calibration is crucial for ensuring the accuracy and reliability of process instruments used across various industries. One often overlooked but critical factor in this calibration process is hysteresis. Understanding hysteresis and its implications can help improve the accuracy and consistency of your pressure measurements. In this blog, I’ll dive into what hysteresis is, why it matters in pressure calibration, and how you can manage it effectively.

While hysteresis can be found in various types of measurements, such as temperature and electrical signals, this blog focuses on its impact on pressure calibration, where hysteresis is most significant.

 


 

What is hysteresis?

Hysteresis is a phenomenon where the output of a system depends not only on its current input but also on its history of past inputs. In simpler terms, it means that a pressure sensor might not return to its original state after being subjected to varying pressures. This lag or difference can affect the accuracy of the measurements.

For example, if you increase the pressure to a certain value and then decrease it back to the same value, the instrument might show a different reading compared to the initial one. This difference is hysteresis.

For a practical example, if you calibrate a 100 kPa pressure instrument at a 50 kPa point, it may show 49.95 kPa with increasing pressure. With decreasing pressure, at the same 50 kPa point, it may show 50.05 kPa. This difference between 49.95 kPa and 50.05 kPa is caused by hysteresis.

 

The image below shows a simplified illustration of hysteresis. Increasing and decreasing pressure do not follow the same line - there is a clear difference, which is hysteresis.


 

Hysteresis in pressure calibration

In the world of process instruments, hysteresis can have a significant impact on calibration. Pressure instruments – such as transmitters, sensors, and gauges – are expected to provide precise and repeatable readings. However, due to hysteresis, the readings can vary based on the instrument’s past pressure exposures. This can lead to errors and inconsistencies in your pressure measurements, which can be critical in processes where precision is key.

 

Causes of hysteresis in pressure instruments

Several factors can contribute to hysteresis in pressure instruments, such as:

  • Material properties: The materials used in the construction of pressure-sensing elements can cause hysteresis due to their inherent properties.
  • Design factors: The design and construction of pressure-sensing elements, including their mechanical components, can influence the level of hysteresis. Often in pressure sensors, the pressure stretches mechanical parts that can have a mechanical hysteresis, causing pressure measurement hysteresis.
  • Contamination: Dirt or other contaminants inside the instrument can cause hysteresis by obstructing the movement of mechanical parts, leading to inaccurate readings.
  • Environmental influences: Temperature changes, humidity, and other environmental conditions can affect the hysteresis behavior of pressure instruments.

 

Identifying hysteresis

To manage hysteresis effectively, it’s essential first to identify and measure it accurately. Here are some techniques:

  • Up and down calibration: Conduct calibration by increasing and decreasing the pressure to identify any differences in the readings at the same pressure points. Please note that if you don’t wait long enough for the readings to stabilize, any delay or lag in the measurement instrument can look like hysteresis.
    If you generate pressure with a hand pump, you need to be careful not to overshoot (or undershoot) when generating calibration points, or you may lose some of the hysteresis effect. For example, you need to approach the increasing points from below, and not overshoot and come back down.
  • Calibration cycles: Perform multiple calibration cycles to observe any discrepancies or repeatability issues in the readings. If there are any repeatability issues with the instrument, it may look like hysteresis. Therefore, it is good practice to perform several calibration repeats to reveal repeatability issues. Fully automated pressure calibration obviously makes it easier and saves time when performing multiple repeats.
  • Graphical analysis: Plotting the pressure input vs. output readings can help visualize hysteresis. It may be very difficult to see the hysteresis in numerical results. If you have a pressure calibrator that displays the calibration results in graphical format (such as a Beamex MC6 family calibrator), it is much easier to identify hysteresis. 
    Sending calibration results to calibration software also helps, as the software often offers graphical presentation results (at least Beamex Calibration Management Software does).
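
If you log the up and down readings yourself, a few lines of code are enough to tabulate the hysteresis at each point. The sketch below uses invented readings (reusing the 49.95/50.05 kPa example from earlier in this post); a documenting calibrator or calibration software can present the same information graphically, as noted above.

```python
# Invented example readings for a 0-100 kPa instrument (all values in kPa).
points       = [0.0, 25.0, 50.0, 75.0, 100.0]
reading_up   = [0.02, 24.96, 49.95, 74.97, 100.01]   # approached with increasing pressure
reading_down = [0.03, 25.04, 50.05, 75.06, 100.01]   # approached with decreasing pressure

span = 100.0  # instrument span in kPa

for point, up, down in zip(points, reading_up, reading_down):
    hysteresis = down - up
    print(f"{point:6.1f} kPa: hysteresis {hysteresis:+.2f} kPa "
          f"({hysteresis / span * 100:+.2f} % of span)")
```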

 

Mitigating hysteresis 

While hysteresis cannot be completely eliminated, it can be managed and minimized. Here are some best practices to help you do this:

  • Regular calibration: Calibrate regularly, with up and down cycles, to identify hysteresis.
  • Instrument selection: Choose high-quality pressure instruments with low hysteresis characteristics for critical applications.
  • Consistent procedures: Follow consistent calibration procedures to ensure the repeatability and reliability of results.
  • Instrument cleanliness: Ensure that instruments are clean and free from contaminants that could affect their performance.
  • Environmental control: Whenever possible, maintain stable environmental conditions during calibration to reduce external influences. Of course, this is not always possible when calibrating instruments in field conditions.

 

Hysteresis in pressure switches

With any switches, including pressure switches, there is a hysteresis-like feature called “deadband”. This means that the switch has been designed so that there is some difference between the opening and closing points with increasing and decreasing pressure. This may seem a lot like hysteresis, or even be called hysteresis, but it is not actual hysteresis.

This deadband is needed and important in switches; otherwise the switch could start oscillating between open and closed when the pressure is at a certain value. Because switches are used to control specific operations, this is undesirable. You can learn more about pressure switches in this blog post: Pressure Switch Calibration.

 

Conclusion

Hysteresis is a critical factor to consider in pressure calibration, especially in the world of process instruments, where precision is paramount. By understanding what hysteresis is, identifying its causes, and implementing best practices to manage it, you can ensure more accurate and reliable pressure measurements.

 

Beamex solutions

At Beamex, we have worked with pressure calibration for 50 years, so I am confident when I say that we know something about it.

We offer tools and services that meet the highest standards in the industry, based on our long experience and strong commitment to innovation.

Learn more about our solutions related to pressure calibration:

To discuss with our calibration experts, please contact us.

 

Free Pressure Calibration eBook

Download this free 40-page pressure calibration eBook, which includes detailed strategies and resources for calibrating your pressure instrumentation.

Read more and download free pressure calibration eBook >>

 

 

 

Topics: Pressure calibration

With empathy to excellence - The secret to great customer service

Posted by Pekka Videnoja on Apr 03, 2024

The secret to great customer service

Empathy is an essential element of great customer service. Along with understanding, it’s the most powerful asset for any customer service organization. Putting time and effort into understanding your customer’s business and processes and empathizing with their problems and pain points will put you in a far better position to deliver great customer service.

In this blog post, we discuss the vital role of empathy, understanding, and a human touch in delivering exceptional customer service - before diving into more details about how Beamex Calibration Solutions Group can help you find a better way to calibrate.

 

Listen to what the customer is saying

Customer service can sometimes feel like a game of table tennis. The customer throws out questions, the customer support contact throws back answers, and on and on (and on) we go. The person in contact with the customer might have the knowledge and the skills, but are they really listening? If the support contact is just interested in shifting the ticket on to someone else or marking it as done, they’re not making the customer feel like they were listened to or understood. This game of table tennis is heading for a frustrating draw where neither side is satisfied and the problem is still there.

 

Understand and appreciate their needs

When the customer puts their trust in your solution, they need to feel appreciated when you are serving them, right from day one. Even if day one is a Friday. We’ve all been there – it’s Friday and you just want to start the weekend, and in comes a phone call or email with a complicated problem to solve. But it’s Friday for the customer too, and they’re not contacting you because they’re bored; they’re doing it because they genuinely need your help – and they have a right to be heard.

Now the real hard work begins. When you take the time to listen and properly understand what the customer’s situation is, you can work out how critical it is and weigh up the best way forward. Is an immediate solution needed? Or would it be better to take the time to go over the information internally and organize a call on Monday to walk through the problem and ask the right questions, instead of rushing in with poorly planned quick fixes now? This kind of empathy and willingness to cooperate to find a solution can reassure the customer that help is on the way.

Of course, there are no guarantees that you’ll understand everything right away, but you will certainly have a more cooperative and less angry customer on the other end of the line if you give the impression that you are making a genuine effort to understand their situation.

 

Look for a viable solution, not a quick fix

Customer-facing experts are often dealing with customers working in a busy production environment or process where things can’t simply grind to a halt. When they understand this and empathize with the customer’s problem they can provide useful answers and workable solutions. It makes no sense to look for a quick fix – a solution that might solve the problem but is completely unworkable in regular operation. There are probably many viable paths forward, and a touch of empathy and a healthy helping of technical knowledge can help to identify the optimal solution in both the short and the long term.

 

Give a human touch in the digital age 

In pre-COVID times the world was a very different place. Remote working and remote meetings were far less commonplace, certainly in our line of business. The technical meetings and workshops I was involved with were almost always face to face. Things have moved on since then, and customers are far more willing to jump online and meet with the help of digital tools. The trick is to get past the email tennis barrier once again. An email from a customer is them reaching out to us, maybe even a cry for help. In non-urgent cases, replying to acknowledge their issue and proposing an online meeting in a few days’ time gives you the chance to gather the information you need and prepare a viable solution, which can then be discussed face to face.

The word “prepare” carries a lot of weight here. Without putting the work in to prepare between acknowledging the customer’s problem and meeting them to look for a solution, the next contact you have with them could be a frustrating waste of time. Maybe they don’t have certain admin permissions they need to show you what’s going on with their devices or processes. Perhaps you need someone from IT in the meeting with you to facilitate the discussion. When you’re prepared and know what to ask for, you’re also being empathetic by showing the customer that you understand their need to feel secure, looked after, and cared for.  

 

Assumptions are not your friend

For me, two of the biggest barriers to delivering great customer service are assumption and pre-judgement. If you go into a situation with the assumption that A or B has happened on the customer side and therefore the solution is C, you’re already limiting your options and not demonstrating a willingness to understand the customer’s situation. Instead, you are looking to reinforce your own preconceptions and deliver a cookie-cutter solution.

In customer service, one size does not fit all. At Beamex, our experience is that when a customer contacts us with an issue, they are often describing a symptom of a problem rather than the problem itself. If you then go on to make assumptions based on a narrow view of a wider problem, you’re never going to get to the root cause.

With empathy and an open mind, problems can be solved faster, in a way that is more satisfying to everyone involved.

 

Don’t discount the importance of soft skills

Given what we do at Beamex – making the world a safer and less uncertain place by helping customers find a better way to calibrate – technology and technical skills are very important. But in customer service, it’s easy to forget that we are still humans dealing with other humans. Soft skills like empathy and cultural understanding are critical to delivering great customer service.

While the world and its industries are becoming increasingly digitalized, humans will always be analog. As analog beings in a digital world, empathy is a way to set ourselves apart from artificial intelligence. Great customer service is built on what I call the pyramid of strength formed by appreciation, understanding, and solution – and that’s one strength that AI can’t offer.

 

Technology alone is not enough

At Beamex we believe that technology alone does not provide a better way to execute and manage your calibrations unless it is adapted to the customer’s specific needs. Through empathy and understanding, we aim to deliver a solution that is adapted to your specific needs. Our approach is a holistic one, where we aim to be your partner for calibration excellence throughout the calibration solution lifecycle.

When we advise, it is based on an evaluation of your current calibration process to identify room for improvement. The next step is to define which calibration technology and implementation services best fit your needs and then use them to deliver a better way to calibrate. We guide you throughout the adoption process to ensure that your new calibration solution becomes an integral part of your daily operations.


 

The three areas of Beamex Calibration Solutions Group

Expert services focus on understanding the customer’s problem and advising them on the best way to overcome their current challenges. The next step is working with the customer to define what their new calibration solution will look like and how to map their processes to the Beamex solution before delivering it. Delivery includes introducing the solution and performing instrument data migration. This step can be provided as a service, or the customer can perform it by themselves. A full-scale solution with integration, validation, and SOP creation services is typically required by larger customers.

Training services are available to train the customer’s technicians and engineers on how to use their tailored Beamex calibration solution and how to get the best from it throughout its lifetime. These can be delivered both remotely and on site, and are always tailored according to the solution in question.

Support services are there to make sure the customer is never on their own, with a Beamex advisor always on the end of the phone or available via email to provide helpdesk-type support. When a customer first starts using their Beamex solution, they have the extra peace of mind provided by a ‘hypercare’ period. This elevated level of support is crucial to help them get comfortable with the new solution. 

 


 

Learn more about our expert, training, and support services by talking to a Beamex calibration expert.

 

Beamex case stories

Many companies have found a better way with Beamex – read their success stories:

Find all case stories here.

 


 

Beamex - Your partner for calibration excellence

 

Topics: Calibration process, Calibration management

Calibrating for a Cleaner Future - Unlocking the Potential of Waste to Energy

Posted by Monica Kruger on Feb 13, 2024

Waste-to-Energy (WtE) has been around since the first waste incinerator was built in 1874, but the sustainability challenges of today – combined with innovative new technologies – are revolutionizing the industry. A World Energy Council report valued the global WtE market at 9.1 billion USD in 2016 and projected it to grow to over 25 billion USD by 2025, driven by increasing waste production, growing populations, and urbanization.

Our recent white paper "Waste to Watts – Unlocking the Potential of Waste to Energy" takes a deep dive into WtE and shows some real-life examples of how innovative technologies, calibration, and accurate measurements are driving the industry.

 

Waste to Watts – Unlocking the potential of WtE

In our white paper, we provide an in-depth examination of WtE and showcase examples of how new technologies, calibration and accurate measurements are propelling the industry forward. With plenty of clear infographics, facts, and figures, the white paper covers:

 

  • Waste-to-Energy: The Encyclis story – a modern WtE success story
  • A growth sector in the making – showing a snapshot of the industry
  • Waste-to-Energy around the world – how different countries are embracing WtE
  • The technology of the future – the technologies moving WtE beyond incineration
  • Challenges to growth – from public perception, to sustainability and efficiency
  • The role of calibration – the importance of accurate measurements
  • Accelerating Waste-to-Energy – and why it’s important
  • Unlocking future potential – a clear overview of the path ahead

This blog post gives you a small taste of what you can find in the white paper – download it now to read the full story. 

 

WtE has many benefits over landfill

Modern WtE plants allow hazardous organics to be safely managed within the waste stream while facilitating the recovery of both ferrous and non-ferrous metals, including valuable metals. They also make hydrochloric acid and sulfur recovery feasible – raw materials that can be used in gypsum board production. Even the ash residue from the process has many applications in the construction industry and can be used instead of concrete. All this is on top of the energy WtE plants can generate for homes and businesses.

The fact that organic pollutants are destroyed in the process and inorganic pollutants, especially heavy metals, are extracted and transformed into insoluble substances is a key advantage of WtE over landfilling. In this way the process contributes to a circular economy where waste materials are recycled, reused, or made inert to minimize their environmental impact. A common misconception of incinerators is that they are dirty and polluting – in fact, the exhaust from the stacks of a modern plant is extremely clean, often cleaner than the air surrounding the plant, as only cleaned gasses and water vapor are released into the atmosphere.

 

The benefits of modern technology

Digitalization is revolutionizing the WtE industry, driving process improvements and enabling transparency and third-party oversight. “Certain players in the industry manage up to ten plants and are streamlining their calibration and maintenance programs for consistency,” shares Christophe Boubay, Sales Director and Country Manager for Beamex France. “By doing so, they can assess and replicate successful practices through cloud-based solutions from one plant to another.” This approach ensures on-site technicians and remote management have a comprehensive overview of operations and can control the plants to maximize overall efficiency, minimize waste, and ensure end products are suitable for various applications.

Calibration in Waste to Energy

 

Accurate data for continuous improvement

The WtE process is highly regulated with many rules, frameworks, and standards that operators must follow, both when it comes to the waste that is fueling the process as well as factors such as wastewater disposal and the proper handling of scrap metal and ash by-products. In addition, WtE plants must demonstrate that they are recovering waste and not just disposing of it. These regulations make precise measurements essential in order to be able to monitor and improve environmental performance. To ensure measurements are accurate, calibration is vital.

Properly calibrated tools help ensure that a WtE plant’s pressure and temperature instruments for the incinerator and boiler control process are working at high levels of accuracy, for example. This ensures optimum instrument performance, resulting in higher efficiency and reduced levels of CO2 entering the atmosphere. Compliance with emission restrictions is crucial – exceeding them can result in heavy financial penalties and even plant closure.  Modern calibration software helps manage the calibration of stack emission instrumentation, thus ensuring continuing compliance with local and national regulations. The use of digital calibration certificates also makes it easier for WtE facilities to share calibration data for emissions monitoring, auditing, and regulatory compliance purposes.

 

From waste to watts: a WtE success story

Every year Encyclis’s Rookery South Energy Recovery Facility (ERF) in Bedfordshire, England, takes 550,000 tonnes of waste that would otherwise end up in landfill and turns it into 60 MW of sustainable energy for around 112,500 homes. Thousands of tonnes of ash are also produced for the construction industry. The site has been operating since January 2022 and is one of three WtE plants operated by Encyclis, with two more under construction. Encyclis collaborates closely with waste management companies, recovering valuable resources and by-products for reuse and contributing to the circular economy by turning household and commercial waste into a valuable resource. All Encyclis plants use continuous real-time monitoring to adhere to strict Environment Agency emission limits. 

Encyclis’s Rookery South Energy Recovery Facility

Encyclis chose Beamex to provide a comprehensive calibration ecosystem for the Rookery South ERF, including CMX Calibration Management Software, the bMobile Calibration Application, MC6 Advanced Field Calibrators and Communicators, MC6-T Multifunction Temperature Calibrators and Communicators, pumps, and expert services. The company has also collaborated with Beamex to equip its WtE facility in Newhurst, UK. For both plants, Beamex has been involved from the early stages, before the commissioning phase. This approach makes it possible to apply best practices based on decades of experience and ensure that the resulting calibration solution is user-friendly and enables seamless data exchange.

A detailed calibration procedure was also integrated in the plants, specifying calibration schedules and test parameters. This information was then synchronized with handheld devices used by technicians and engineers. All workers have to do is connect to the instrument being calibrated and perform the calibration. The Beamex software does the rest, calculating the pass or fail result, updating the digital certificate, and resetting the recalibration date for future reference.

Nick Folbigg, Electrical, Control and Instrumentation Team Leader at Encyclis, likens managing an ERF to assembling a complex jigsaw puzzle: “Numerous parts need to align seamlessly for effective, efficient, and compliant operation. Using the complete Beamex calibration solution is a key piece of this puzzle.”

 

The future of WtE

Regulation has a role to play in helping the WtE industry reach its full potential and claim its place in the circular economy. The US, for example, faces significant financial hurdles in transitioning away from landfill-based waste management systems, but regulations preventing the disposal of untreated organic waste would be a practical approach to accelerating the change. After all, as Phillipp Schmidt-Pathmann of the Institute for Energy & Resource Management points out, the average person in an integrated waste management-based system usually pays less than in a landfill-based system. Governments should also introduce a market mechanism for Carbon Capture, Usage and Storage, which will encourage more investment in WtE.

WtE plants can then focus on what they do best: reclaiming precious metals, boosting revenues with local secondary raw material streams, supporting construction, reducing resource demand, supplementing grid power, and making drinking water production more sustainable for local populations. WtE’s ability to convert electricity into hydrogen should also be exploited to allow for zero-emission buses and waste transportation. Together, we can help transform waste into a more sustainable future for us all.

To read more about the Rookery South ERF and WtE, download our white paper: Waste to Watts – Unlocking the Potential of Waste to Energy. 

 


Topics: Calibration management, Digitalization, Sustainability

How an accurate, reliable calibration solution could supercharge your business [Podcast]

Posted by Monica Kruger on Jan 09, 2024

Whether you work in manufacturing, pharmaceuticals, healthcare, or any other field that relies on precise measurements, an accurate and reliable calibration solution is crucial. Accurate calibration is a critical component of quality control, compliance, safety, and cost efficiency. So, what is the best way to guarantee accurate measurements, reliable data, and traceability?

Two Beamex experts recently guested on the Process Industry Informer podcast to discuss this fascinating topic. The episode includes a real-world example of one major Beamex customer that has cut costs while boosting efficiency by adopting a centralized, standardized process for recording, storing, and analyzing calibration data.

Beamex Calibration Consultant Michael Frackowiak and John Healy, Director of Sales for the UK & Ireland, sat down with host Dave Howell for a fascinating talk about the key role that calibration plays in industrial processes and how the Beamex calibration ecosystem helps customers simplify and enhance their calibration processes.

 

Listen to the podcast: 

  

 

You can also find this podcast on the Process Industry Informer website

 

Table of contents

 

  • 0:00 - 1:20 – General introduction
  • 1:20 - 4:10 – Mike's background and calibration expertise, and John's role and experience at Beamex
  • 4:10 - 6:50 – The role and importance of calibration
  • 6:50 - 13:20 – Common challenges and industry needs in calibration
  • 13:20 - 17:40 – The importance of data in calibration
  • 17:40 - 28:30 – Detailed insight into National Gas's (formerly National Grid) calibration journey
  • 28:30 - 35:00 – The value of partnership and trust in your calibration technology provider
  • 35:00 - 38:00 – Beamex's comprehensive calibration solution and educational resources

 

Getting calibration right is fundamental to a successful, safe business 

Discussing the role that calibration plays in the industry, Frackowiak highlights that getting calibration right is as important as getting your product quality and on-site safety right – and that products alone are not enough. “In the end, calibrators are just boxes; to get the most from them, customers need support, expertise, and knowledge.”

“Beamex’s purpose is to provide the customer with a better way to calibrate,” Healy explains. “We are working across many different industries, some highly regulated like pharma, where tolerances and calibration requirements are strict. Many conversations we have revolve around how to move away from error-prone manual recording of calibration data towards a more automated approach. These conversations are the starting point to find out what they need from their calibration process.”

 

Evolving regulations are an opportunity to identify areas for improvement

As standards and regulations evolve across different industries, Beamex takes the opportunity to meet face to face with customers, for example at its annual Pharmaceutical User Forum, to discuss what these changes mean in practice. “The insights we gain from these kinds of forums are invaluable in terms of learning how we can better support customers moving forward,” Healy says.

 

Making sense of the flood of calibration data

Data generation and analysis in process industries has exploded in the last decade. Operators are gathering more data than ever before about their processes as they seek improvement opportunities. How does Beamex help customers make sense of this flood of data? “Working out what to do with the massive amounts of calibration data being generated is a huge challenge for many industries,” Frackowiak points out. “As calibration experts we want to help take the load off customers’ minds. They shouldn’t need to think about calibration data. We give them the calibrators, the software, and the back end – everything they need to make sense of the data and make good decisions based on it, faster,” he continues. “The calibrators we provide take care of the accurate measurement, but where we add real value is with the ecosystem around calibration as a process, as a decision-making support tool.”

 

A centralized asset data resource for National Grid

As part of the podcast the panel discussed Beamex’s collaboration with National Grid, which uses Beamex CMX Calibration Management Software to centralize asset data in a single system. “In a nutshell, this case was about helping National Grid work out the best way to extract, interpret, and make the best use of the data they had been gathering,” Frackowiak says. “This was a really exciting journey on both sides,” Healy says. “The end goal was a centralized, standardized solution for gathering, storing, and analyzing data on asset performance at their gas compressor sites. The data islands they had made it very difficult to accurately assess asset performance, and there was no standardized calibration procedure across the sites.”

“Our solution for National Grid has three main components,” Frackowiak says. “There are the calibrators themselves, the software, and then our expertise and training to guide the customer through the implementation process. This third element is what helped us map out and design a system that would meet the customer’s needs precisely.”

“The operational team at National Grid saw the value of doing things the Beamex way – the time and hassle doing things this way would save them,” says Healy. “When management could see the cumulative impact of this across multiple sites and the huge benefits of having true visibility over their asset data, they were quickly onboard too.”

Using the Beamex system, National Grid has seen a saving of 4,000 hours per year performing calibrations, resulting in millions of pounds of financial savings.

 

 

Beamex is there every step of the way

Discussing the complexities of these kinds of customer cases, Frackowiak continues by emphasizing how Beamex’s approach sets them apart in the market. “These kinds of projects take time, but we are there to be the partner for calibration excellence, supporting the customer at every step of the transformation process. This is what makes Beamex far more than just another technology provider. We are a trusted partner, a trusted advisor – we are the calibration specialists who are looking 5, 10, even 15 years ahead together with the customer through the Beamex calibration ecosystem.”

 

Listen to the podcast: 

  

 

You can also find this podcast on the Process Industry Informer website

 

Discuss with our experts how a calibration solution could supercharge your business

 

 

Topics: Calibration management

Industrial Temperature Calibration Course [eLearning]

Posted by Heikki Laurila on Dec 12, 2023

Industrial Temperature Calibration Course, eLearning

In this blog post, we want to share a new way to learn more about industrial temperature calibration without attending in-person calibration classes. Our new industrial temperature calibration eLearning course will help you level up your calibration knowledge with six in-depth modules. And best of all, it’s completely free!

The calibration course offers a wide range of resources, including executive summaries, in-depth articles, how-to videos, quizzes, and a comprehensive final test. If you pass the final test, you’ll receive a certificate.

Read more and start the temperature calibration course now!


What you’ll learn in the course

  • The calibration basics: what, why, and how often?
  • Temperature sensors (RTDs and thermocouples) and temperature units
  • How to calibrate Pt100, duplex, and sanitary sensors
  • How to calibrate temperature switches and transmitters
  • Calibration uncertainty: what it is and why it matters
  • How to calibrate temperature instruments


    Enroll and start now!

 

Course overview

Want to know more about what you can expect from your calibration classes? Here’s a short overview of the key areas in the calibration course.

 

 

1. Calibration basics

However much you already know – or think you know – about calibration, it’s always good to go over the basics. In this section you will learn all about the fundamental concepts of calibration, the reasons to calibrate, why calibration is important, and the critical issue of traceability. We’ll also explore how frequently instruments should be calibrated to maintain accuracy. Once you finish this section of the course you’ll have a solid foundation of the essentials of calibration.

 


 

Extract from the calibration basics section of the eLearning course.

 

2. Temperature units and sensors

In the second section of the course we’ll dive into the world of temperature measurement, gaining insight into different temperature units and temperature unit conversions. You’ll also learn more about Pt100 sensors and thermocouples and how and where they’re used. Once you’ve covered this section of the calibration training you’ll be ready to combine your knowledge of calibration and temperature to find out about calibrating temperature sensors and sanitary sensors.

Simplified illustration of a thermocouple cold junction.

 

3. Calibration of temperature sensors and sanitary sensors

Are you looking for calibration training for temperature sensors and sanitary sensors? This section of the course will help you to master the techniques and procedures for calibrating these types of sensors. There will be a particular focus on the unique challenges presented by sanitary sensors – and how to overcome them.

 

Sanitary temperature sensor calibration.

 

 

4. Calibration of temperature switches and transmitters

In the fourth section of the calibration course we’ll examine methods for calibrating temperature switches and procedures for calibrating temperature transmitters. You’ll also find out how to ensure precise temperature control.

Temperature slope in temperature switch calibration.

 

 

5. Calibration uncertainty

Calibration uncertainty is an essential factor to understand when performing industrial temperature calibration. In this section of the course you’ll learn to navigate calibration uncertainty and evaluate it effectively. You’ll also discover how to manage uncertainty in temperature calibration. Finally, we’ll explore the concept of temperature dry block uncertainty, a crucial aspect of temperature calibration.

 

Calibration uncertainty

 

6. How to calibrate temperature instruments

In the last section of the course we’ve gathered some goodies for you, with exclusive access to webinars that provide hands-on calibration training for temperature instruments. These webinars will enhance your practical skills and give you added confidence in your calibration knowledge.

Webinar: How to calibrate temperature instruments.

 

Start learning today!

The full calibration course will take you around eight hours – but if you have existing knowledge you might complete it more quickly. Just remember to sign in to our eLearning service so you can save your progress and split your calibration training over multiple days. Once you have successfully completed the final test at the end of the course, you’ll receive a certificate via email.

 

Temperature Calibration eLearning

 

Master temperature calibration with this free comprehensive course. Deepen your knowledge, pass the quiz, and earn your certificate!

Enroll and start now!


 

Beamex's offerings for temperature calibration

At Beamex, we have a lot to offer for temperature calibration.

For example, our most versatile temperature calibrator, the Beamex MC6-T Multifunction Temperature Calibrator and Communicator; the easy-to-use Beamex MC6 Advanced Field Calibrator and Communicator; Beamex RPRT reference sensors; and several Beamex Temperature Sensors.

Don't forget our calibration software offerings and the entire calibration ecosystem.

We also offer expert services and training services for temperature calibration.    

You can also download a free temperature calibration eBook, and visit the handy temperature unit converter on our website. 

Please feel free to contact us to discuss your temperature calibration challenges and how we can be your partner for calibration excellence.

Please scroll through the carousel below for more interesting articles related to temperature calibration!

 

 

Topics: Temperature calibration

Is it a leak? - Understanding the adiabatic process in pressure calibration

Posted by Heikki Laurila on Nov 29, 2023

Is it a leak? - Understanding the adiabatic process in pressure calibration

The adiabatic process is something we have all encountered if we have been working with pressure calibration. Often, we just don’t realize it, and we think there is a leak in the system.

In short, the adiabatic process is a physical phenomenon that causes the pressure media’s temperature to increase when we increase the pressure in a closed system. When we stop pumping, the media’s temperature cools down, and it will cause the pressure to drop – so it does indeed look like a leak in the system.

You can find many in-depth, complicated physical or mathematical explanations of the adiabatic process on the internet. But hey, we are not physicists or mathematicians, we are calibration professionals! Lucky you, you’ve got me to simplify this for you :-)

In this article I take a closer look at the adiabatic process and at how to recognize and avoid it. We’ll start with a little bit of compulsory theory and then dive into the practical things.

If you are working with pressure calibration, you cannot miss this one!

 

Table of contents

 

What is the adiabatic process?

An adiabatic process is a thermodynamic change whereby no heat is exchanged between a system and its surroundings.

For an ideal gas undergoing an adiabatic process, the first law of thermodynamics applies. This is the law of the conservation of energy, which states that, although energy can change form, it can't be created or destroyed.

We remember from our school physics (well, some of us may remember!) the formula with pressure, volume and temperature, and how they depend on each other. Remember? 

The combined gas law says that the relationship between pressure (P), volume (V) and absolute temperature (T) is constant. As a formula, it looks like this:

P × V / T = k

Where:

  • P = pressure
  • V = volume
  • T = absolute temperature
  • k = constant

OK, that did not yet take us too far, but please bear with me…

When using the above formula and comparing the same pressure system under two different conditions (different pressures), the law can be written as the following formula:

P1 × V1 / T1 = P2 × V2 / T2

We can think of this formula as representing our normal pressure calibration system, which has a closed, fixed volume. The two sides of the formula represent two different states of our system – one with a lower pressure and the other with a higher pressure. For example, the left side (1) can be our system with no pressure applied, and the right side (2) the same system with high pressure applied.

Looking at the formula, we can conclude that as the volume of a pressure calibration system remains the same, and if the pressure changes, then the temperature must also change. Or the other way around, if the temperature changes, then the pressure will also change.
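
To put some illustrative numbers on this, here is a minimal Python sketch of the constant-volume case. Ideal-gas behavior is assumed and the values are made-up example numbers, so treat it as an illustration of the relationship rather than real measurement data.

```python
# Minimal sketch: constant-volume cooling after fast pumping (ideal gas assumed).
p1_abs = 21.0          # bar absolute, right after pumping (assumed example value)
t1 = 273.15 + 30.0     # K, media warmed up by the compression (assumed)
t2 = 273.15 + 20.0     # K, ambient temperature the media cools back to (assumed)

# With a closed, fixed volume: P1 / T1 = P2 / T2
p2_abs = p1_abs * t2 / t1
print(f"Pressure after stabilization: {p2_abs:.2f} bar absolute "
      f"(a drop of {p1_abs - p2_abs:.2f} bar without any leak)")
```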

The image below shows a typical pressure calibration system, where we have a pressure pump, pressure T-hose, pressure instrument to be calibrated (1) and pressure calibrator (2).

 

Pressure calibration connection diagram

 

Typically, the volume of our pressure calibration system remains the same, and we change the pressure going through the calibration points. When we change the pressure (and the volume remains the same) the temperature of the medium will change. That’s physics, deal with it :-)

We can most commonly see the adiabatic process when we raise the pressure quickly with our calibration hand pump, causing the media (air) to get warmer. Once we stop pumping, the medium starts to cool down causing the pressure to drop - at first quickly, but then slowing down and finally stabilizing. This pressure drop looks like a leak in the system.

The same also happens with decreasing pressure – if we decrease the pressure quickly, the media gets colder. When we stop decreasing, the media will start to warm up, causing the pressure to rise. This may seem odd at first – how can the pressure rise by itself? Of course, the pressure does not increase a lot, but enough for you to see it and wonder what’s going on.

So, the adiabatic process works in both ways, with increasing and decreasing pressure.

The faster you change the pressure, the more the medium temperature will change, and the bigger effect you can see.

If you wait a while, the pressure media temperature will stabilize to the surrounding temperature and the effects of the adiabatic process will no longer be visible.

This is the essential learning from the adiabatic effect.

 

How do you know when it’s the adiabatic process and when it’s a leak?

The main difference between the adiabatic process and a leak is that the pressure drop caused by the adiabatic process is bigger in the beginning, then slows down and disappears (stabilizes).

The pressure drop caused by a leak is linear and continues at the same rate.

The below image demonstrates the difference:

Adiabatic process vs leak - graphic

In the image above you can see how the pressure drop caused by the adiabatic process is fast at first, but then slows down and eventually stabilizes (red line), while the pressure drop caused by a leak is linear (blue line).
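
If you log the pressure readings during the stabilization period, the same distinction can be expressed in code. The sketch below is a rough heuristic with an assumed threshold and made-up data, not a feature of any calibrator: an adiabatic drop slows down markedly, while a leak keeps falling at roughly the same rate.

```python
import numpy as np

def looks_like_leak(times_s, pressures_bar, threshold=0.5):
    """Compare the pressure-drop rate in the first and second half of the
    stabilization period. If the later rate is still a large fraction of the
    early rate, the drop is probably a leak rather than adiabatic cooling."""
    t = np.asarray(times_s, dtype=float)
    p = np.asarray(pressures_bar, dtype=float)
    mid = len(t) // 2
    rate_first = np.polyfit(t[:mid], p[:mid], 1)[0]    # bar per second
    rate_second = np.polyfit(t[mid:], p[mid:], 1)[0]
    return abs(rate_second) > threshold * abs(rate_first)

# Example: a drop that flattens out -> adiabatic cooling, not a leak
t = np.arange(0, 120, 10)
p = 20.0 - 0.3 * (1 - np.exp(-t / 20))
print(looks_like_leak(t, p))   # False
```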

 

How to avoid the adiabatic process?

Pressurize slowly:

One of the easiest ways to minimize the adiabatic effects is to change the pressure slowly. By doing so, you allow the media more time to reach the same temperature as its surroundings, minimizing any temporary temperature changes. In practice, if you increase the pressure with a hand pump, and you step through several increasing calibration points, this may already be slow enough to avoid seeing the adiabatic process.

If you pump as quickly as you can up to 300 psi (20 bar), then you will most certainly see the effect of the adiabatic process. 

Wait:

After adjusting the pressure, give it some time to stabilize. A minute or two should do the trick. This allows any temperature changes in the medium to reach equilibrium with the ambient conditions, and the pressure will stabilize accordingly.

 

Pressure media

You can also affect the adiabatic process with your choice of pressure media. In practice it is of course not always possible to change the media. Your normal hand pump uses air as the medium. For higher pressures, you may use a hydraulic pump with water or oil as the medium.

The effects of the adiabatic process are generally more prominent in air or gas-operated calibration pumps than in hydraulic (water or oil) ones.

This is mainly because gas is much more compressible: increasing the pressure pushes the gas molecules closer together, and the work done on the gas is converted into internal energy, heating it up. In addition, gas/air has lower thermal conductivity than liquids, so less heat is conducted away from the gas.

 

Conclusion

In our service department, we regularly get questions about pressure pumps having leaks, but in most cases it turns out to be the adiabatic process that has made the customer think there is a leak.

Understanding the adiabatic process and its impact on calibration pressure pumps is crucial for users to avoid misdiagnosing issues. By changing pressure at a moderate pace and allowing adequate time for stabilization, you can achieve more accurate and consistent results.

 

Beamex's offering for pressure generation and calibration

At Beamex, we have a lot to offer for pressure calibration.

For example, our PG range of calibration pumps for pressure generation, the ePG electric pressure pump, the POC8 automatic pressure controller, and a number of pressure calibrators.

Don't forget our calibration software offerings, and the entire calibration ecosystem.

We also offer expert services and training services for pressure calibration.    

You can also download a free pressure calibration eBook, and visit the handy pressure unit converter on our website. 

Please feel free to contact us to discuss your pressure calibration challenges and how we can be your partner for calibration excellence.

Please scroll through the carousel below for more interesting articles related to pressure calibration!

 

Topics: Pressure calibration

How Douglas Calibration Services reduces technicians’ workload and stress [Case Story]

Posted by Rita Patel on Aug 08, 2023

Douglas calibration LOGiCAL

Beamex LOGiCAL Calibration Management Software has helped Douglas Calibration Services improve quality of life for their technicians. Freed from mountains of paperwork, technicians can now provide faster, more efficient service for the company’s diverse customer base. Let’s explore how.       

Douglas Calibration Services employs 70 technicians and serves more than 130 clients across the pharma, energy, and food and beverage industries. The company uses the Beamex MC2, MC5, and MC6 documenting calibrators on a daily basis and had been looking for a cloud-based solution that would allow them to go 100% paperless.

Read the full case story >

 

Rescuing technicians from a mountain of paperwork

"We implemented LOGiCAL for all our clients in 2022. As it’s a cloud-based solution, there’s also no wastage or big initial outlay on infrastructure or implementation, and no hardware upgrades,” says Richard O’Meara, Contracts Manager at Douglas Calibration Services.

For Douglas Calibration Services, taking care of their technicians’ well-being was a strong driver behind their decision to adopt LOGiCAL. “We wanted to take the pressure off our technicians, who were drowning in paperwork,” Richard says. He estimates that the technicians’ workload has been reduced by 30–40% thanks to LOGiCAL.

Benefits of LOGiCAL calibration software in numbers

 

Not only has this improved life for the company’s existing workforce, but it also acts as a way to attract new employees in what is an extremely competitive market.

Instead of laptops and paper printouts, technicians use a tablet with the Beamex bMobile Calibration Application to perform all their calibration work. Results are captured by the technician in the field and synchronized with LOGiCAL, and admin staff can download and review them before emailing the calibration certificates directly to the client. Clients now receive their certificates on the same day or the day after the calibration work was done in 90% of cases.

Benefits of using LOGiCAL calibration software

 

No more missed calibrations

Prior to LOGiCAL, employees had to manually track recalibration due dates on a spreadsheet; LOGiCAL now does all this work with its ‘instruments due’ feature and also tracks reference standards so they never miss recertification. 

Quote by Richard O'Meara

 

During the transition to LOGiCAL, Beamex convened monthly meetings to discuss any issues or queries, and regular feedback from the team at Douglas Calibration Services has led to the implementation of a host of new features and updates. 

Richard sees plenty more potential in the solution: “We’re excited to work with Beamex to develop new features, like the ability for clients to log in to their database and view instruments and calibration results, for example. This is just the beginning of our shared journey!”  

 

Download the full customer success story

Schedule a free LOGiCAL demo


 

Here's a very short video summary of the story:

 

Related content:

 

 

Download your copy of the Calibration Essentials Software eBook to learn more about calibration management and software

Calibration Essentials- Software eBook

 

 

Topics: Calibration process, Calibration software, Calibration management, Case Story

Partnering in calibration excellence at the Beamex Pharma User Forum

Posted by Monica Kruger on Jul 04, 2023

Beamex Pharma User Forum 2023

The biggest issues facing industry leaders from some of the largest pharma companies worldwide – including AstraZeneca, BioNTech, Novartis, and GSK – are preventative maintenance, ensuring data integrity, complying with regulations, and embracing digitalisation. All of these were on display at the 2023 Beamex Pharmaceutical User Forum.

Facilitating collaboration, knowledge sharing, and a customer-centric approach was at the heart of the conference, with one participant expressing the reassurance that comes from knowing Beamex hears its customers and takes feedback seriously. Alex Maxfield, the firm’s VP for Sales & Marketing, states, “We wanted customers talking among themselves, giving advice and exploring our applications.”

Marc Mane Ramirez from Novartis agrees, emphasising the invaluable role of the conference in gathering genuine feedback from seasoned users. This direct engagement enables Beamex to embrace and integrate valuable improvements and proposals into their forthcoming product releases.

 

Safeguarding data integrity and regulatory compliance

A critical insight from the conference was the significance of predictive maintenance for the future. Effectively maintaining the quality, safety and efficacy of pharmaceutical products, including equipment and instruments, is crucial to ensure compliance with stringent guidelines and regulations, such as those from the FDA and MHRA, upholding the highest quality and patient safety standards.

Calibration is crucial in maintaining consistent and reliable manufacturing processes that comply with these industry standards. Developed through decades of collaboration with leading pharmaceutical companies, Beamex’s ecosystem assists customers in achieving calibration-related goals while adhering to regulatory requirements, including the ALCOA+ principles that guarantee data integrity throughout the lifecycle of pharma products.

Data integrity is a top priority in the pharmaceutical industry, as breaches of regulatory requirements can have severe consequences. As such, the forum also showcased the impact of Beamex’s Calibration Management Software (CMX), which all participants use.

Shari Thread from pharma giant AstraZeneca said, “It’s been good to hear from other pharma companies using CMX and their experiences using CMX.” Carlos Da Silva from Boehringer-Ingelheim echoed the sentiment, stressing that through collaborative effort, we can drive continuous improvement in CMX and work towards achieving better outcomes in the future.

By transitioning to a paperless calibration management system, companies can streamline processes, reduce manual errors and enhance data integrity. CMX also ensures compliance with relevant GxP regulations while providing a robust calibration database with comprehensive history functions. 


Shaping the Beamex roadmap

The 2023 Beamex Pharmaceutical User Forum created an environment for attendees to learn from the experiences and proposals of their peers in the industry.

Through sharing successes and challenges, participants gained invaluable knowledge about effective practices and areas that require improvement within the pharmaceutical calibration and maintenance landscape. Mateusz Dunko from GSK expressed that it was gratifying to witness how pharma firms can influence the Beamex roadmap by comparing requirements across different companies.

This collaborative learning approach allows companies to explore diverse perspectives and discover innovative strategies to embrace digitalisation. In conclusion, Jan-Henrik Svensson, CEO of Beamex, underscored the transformative changes his company perceives in digitalisation and its profound impact. He noted that while all the companies were contemplating this shift, they were each doing so in distinct and remarkable ways, showcasing the industry’s collective drive for progress and adaptation.

 

Would you like to know more about the Pharmaceutical User Group, or are you interested in joining the next forum? Contact us.

 

View the below video for more insights and interviews from the event:

 
   
 

 

Many of the world’s leading pharmaceutical and life sciences companies depend upon Beamex calibration solutions. Book a free consultation with our pharma calibration experts to find the best calibration solution for you.

Book a free consultation

 

Related blogs


Customer success stories

Beamex customer success

 

Digital Calibration Certificate (DCC) – What is it and why should you care?

Posted by Heikki Laurila on Jun 15, 2023

Digital Calibration Certificate DCC - Beamex Blog

 

The digitalization of metrology has been slower than in many other fields, and calibration processes in many industries are still mostly paper based.

But that's about to change!

Enter the Digital Calibration Certificate (DCC), “the MP3 of metrology”. Just as the MP3 revolutionized the music industry, the DCC has the potential to revolutionize the process industry by enabling electronic storage and sharing of calibration results in a standardized, consistent, authenticated, and encrypted manner.

No more struggling with manual interpretation of paper certificates! With the DCC, calibration data is machine-readable and easily imported into your system. A DCC is created using digital signatures and encryption methods to ensure its authenticity and integrity, and it's compatible with international standards, making it easy to share with calibration laboratories, manufacturers, and users of measuring instruments.

But that's not all! The DCC has a ton of benefits, like increased transparency, efficiency, and traceability in the calibration process, as well as reduced costs and time. And the best part? A team of key players, including Beamex, is working on creating a global DCC standard so you won't have to worry about compatibility issues.

If you thought that a PDF is a digital calibration certificate, think again!

So, if you're in the process industry, keep calm and get ready to adopt the DCC! It could be the game-changer you've been waiting for. 

 

Download the full article in pdf format >>

 

Table of contents

 

Background

Metrology is a crucial aspect of modern industrial activity as it involves measuring and ensuring the accuracy of physical quantities.

However, the digitalization of metrology has been slower than that of other industries, with calibration processes still being mostly paper based. This means that processes relying on metrological data are often manually executed by humans, which can be slower and more prone to errors compared to machine-to-machine communication.

The growing gap between the digitalization of the process industry and the calibration industry is creating a significant discrepancy in terms of efficiency, productivity, and quality. While the process industry is using advanced technologies such as automation, artificial intelligence, and data analytics to optimize its operations and achieve higher levels of productivity and quality, the calibration industry is lagging behind in terms of digitalization.

To address this issue, a digital calibration certificate (DCC) is being developed to enable electronic storage and sharing of calibration results in an authenticated and encrypted manner. 

The DCC even enables machine-to-machine communication so that calibration results can be transferred directly from the calibration equipment to the relevant systems without the need for manual intervention. 

This may sound futuristic, but even the current Beamex paperless calibration ecosystem works so that a documenting calibrator (such as a Beamex MC6) automatically saves the calibration results digitally in its memory after calibration. From the calibrator’s memory that digital file is then transferred to calibration management software (Beamex LOGiCAL or CMX) for storing and analysis. That calibration results file is still in Beamex's proprietary format.   

The DCC also facilitates sharing calibration data among different stakeholders - for example, external calibration service providers (calibration labs, producers of calibration data) and industrial end-customers (consumers of calibration data). This digitalization and automation reduces the likelihood of errors, improves efficiency and enables almost real-time data integration for improved decision-making.

This would result in more consistent interpretation of the results and improved traceability, as well as enable proper data analytics in process industries and the creation of digital twins for testing and improving processes. This could ultimately lead to increased efficiency, improved safety, cost savings, and new business models.

Beamex has actively participated from the beginning - working alongside Physikalisch-Technische Bundesanstalt (PTB), the national metrology institute of Germany - in creating a globally recognized Digital Calibration Certificate (DCC) format. Our expertise has been instrumental in shaping the DCC standard to meet the specific needs of the process industry. We are preparing to incorporate the DCC into our products to ensure they are future-proofed.

Being entrusted with this significant responsibility by key stakeholders, including PTB, is a true honor. With our extensive experience in delivering digital calibration solutions, we have established ourselves as a crucial player in this field. We take great pride in leading the development of the DCC and remain dedicated to making it applicable and beneficial for the process industry. The recognition and trust from other stakeholders involved in the DCC initiative further reinforces our commitment to this important endeavor.

 

Processes with paper certificates

When a company sends their calibrator or reference standard to an accredited calibration laboratory, they typically receive the equipment back with a paper calibration certificate. This certificate is then stored somewhere or scanned and saved as a file.

If the company wants to run analytics on the certificate, or perform a history analysis across several certificates, they need to manually enter the data for each calibration point into software. That is because the paper certificate is neither standardized nor machine-readable.

In the near future, Digital Calibration Certificates will be delivered as standardized, machine-readable, authenticated, and encrypted files that can be imported into the company’s system.

 

Digital Calibration Certificate (DCC)

Basically, the DCC is intended to become a globally standardized format for calibration data defined in the form of an XML (Extensible Markup Language) schema.

When a calibration laboratory performs a calibration, it creates the DCC file and adds all calibration-relevant data to the file. This file is then delivered to the customer. When receiving the file, the customer can have it automatically imported into their own system, thanks to the standardized format of the DCC file.

The DCC contains all relevant calibration data, including the date of the calibration, the calibration method used, the measurement uncertainty, and the results of the calibration.
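
To illustrate what machine-readable means in practice, here is a minimal Python sketch that reads a simplified certificate file with the standard library. The element names below are invented for illustration only; the real DCC schema, with its namespaces and element names, is defined and maintained by PTB.

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical certificate for illustration - not the real DCC schema.
sample_certificate = """
<calibrationCertificate>
  <administrativeData>
    <calibrationDate>2023-05-12</calibrationDate>
    <method>Comparison against a reference standard</method>
  </administrativeData>
  <measurementResults>
    <point input="0.0" unit="bar" indicated="0.002" uncertainty="0.005"/>
    <point input="10.0" unit="bar" indicated="10.004" uncertainty="0.005"/>
  </measurementResults>
</calibrationCertificate>
"""

root = ET.fromstring(sample_certificate)
print("Calibrated on:", root.findtext("administrativeData/calibrationDate"))
for point in root.iter("point"):
    print(point.get("input"), point.get("unit"),
          "indicated:", point.get("indicated"),
          "U:", point.get("uncertainty"))
```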

The DCC is created using digital signatures and encryption methods to ensure its authenticity and integrity. It can be accessed and shared online, making it easily accessible for calibration laboratories, manufacturers, and users of measuring instruments.

The main benefits of the DCC include increased transparency, efficiency, and traceability in the calibration process, as well as reduced costs and time.

The DCC is also compatible with international standards and can be used for both national and international calibration requirements.

 

The XML structure of the DCC:

The XML structure of a Digital Calibration Certificate DCC

Image copyright by Physikalisch-Technische Bundesanstalt (PTB). All rights reserved.

 

 

The main benefits of the Digital Calibration Certificate (DCC)

 

Here is a short summary of the benefits of the DCC. For the full list, please download the White Paper.

  1. DCC makes it easier to analyze calibration data and create digital twins that help improve efficiency and safety in the process industry.
  2. DCC supports digital transformation by allowing contractors and labs to easily connect to a digitized calibration system with centralized management.
  3. DCC uses a standardized approach to data entry, making it easier to compare and harmonize data from different sources.
  4. DCC makes it easy to manage and search for calibration data and instrument devices, even for large-scale operations.
  5. DCC enables preventive maintenance by alerting when instruments need checking instead of relying on fixed intervals, leading to better risk-based approaches to maintenance and calibration.
  6. DCC increases traceability by replacing inefficient paper-based processes with easy digital search capabilities.
  7. DCC is flexible, allowing customers to use their preferred calibration processes and still generate easily shareable and searchable digital certificates.
  8. DCC is secure, with cryptographic protection to ensure authenticity and data integrity.

Another source discussing the DCC benefits is the paper Benefits of network effects and interoperability for the digital calibration certificate management, presented at the 2021 IEEE International Workshop on Metrology for Industry 4.0 & IoT.

 

An emerging global standard

Efforts are happening right now to create a global DCC standard. A team of key players, including Beamex, is working together to define requirements, create guidelines and software, and promote awareness and training.

This DCC meets the requirements of DIN EN ISO/IEC 17025:2018-03. The Gemimeg II project is currently leading the way in DCC development, thanks to investments from the German government and involved companies.

Another project, SmartCom, was focused on improving the traceability and handling of metrological data through the creation of DCC with set standards. 

In addition, other projects and initiatives have also been taking place to enable the uptake of DCC to improve the traceability and handling of metrological data.

Such projects include, for example, EMPIR 17IND02 SmartCom, EURAMET TC-IM 1448, and Digital NIST.

Together, these initiatives are building a DCC standard that is already being tested in industrial applications.

Due to their key role in the metrology infrastructure, the National Metrology Institutes (NMIs) will also play an important role in ensuring widespread adoption of the DCC standard.

 

Keep calm and adopt the DCC!

The DCC has the potential to transform the process industry. Instead of relying on error-prone and labor-intensive paper-based processes, digital calibration data could be easily searched, shared, and analyzed. This would not only make audits more efficient, but it could also allow data to be used to create digital twins of processes to find efficiency and safety improvements.

At Beamex, we have been digitalizing calibration processes for over 40 years and we see the DCC as a natural extension of these efforts. We believe that cooperation between standards-setting institutions, labs, vendors, and major players in the industry will be needed to make the DCC happen.

That's why we are keen to encourage other industry players to join us in these initiatives and contribute to supporting the implementation of the DCC across industries.

At Beamex we have run several successful proof of concept projects with the DCC and have seen that the DCC is really working in practice.

When you choose Beamex, you are choosing a future-proof solution that is ready to support digitalization efforts and make processes safe, secure, and efficient. Our products are designed to be compatible with whatever DCC standard evolves.

 

Interested in learning more?

If you want to learn more about the DCC or discuss with our experts, please feel free to book a discussion with them:

Discuss with our experts >>

 

On LinkedIn, feel free to connect and discuss with me or with my colleagues who have expertise in the DCC:

 

Download the full article here:

Digital Calibration Certificate DCC - Beamex white paper

 

 

Relevant material & links

 

Documents describing the basic concept and overall structure of the DCC:

  • S. Hackel, F. Härtig, J. Hornig, and T. Wiedenhöfer. The Digital Calibration Certificate. PTB-Mitteilungen, 127(4):75–81, 2017. DOI: 10.7795/310.20170403.
  • S. Hackel, F. Härtig, T. Schrader, A. Scheibner, J. Loewe, L. Doering, B. Gloger, J. Jagieniak, D. Hutzschenreuter, and G. Söylev-Öktem. The fundamental architecture of the DCC. Measurement: Sensors, 18:100354, December 2021. DOI: 10.1016/j.measen.2021.100354.

Additional information on the technical aspects of DCC can also be found on the PTB’s Digital Calibration Certificate Wiki: Digital Calibration Certificate (DCC) - Wiki | Digital Calibration Certificate - Wiki (ptb.de)

Further reading on the potential and benefits of the DCC in a calibration ecosystem:

  • J. Nummiluikki, T. Mustapää, K. Hietala, and R. Viitala. Benefits of network effects and interoperability for the digital calibration certificate management. 2021 IEEE International Workshop on Metrology for Industry 4.0 & IoT. DOI: 10.1109/MetroInd4.0IoT51437.2021.9488562.
  • J. Nummiluikki, S. Saxholm, A. Kärkkäinen, and S. Koskinen. Digital Calibration Certificate in an Industrial Application. Acta IMEKO, 12(1), 2023. DOI: 10.21014/actaimeko.v12i1.1402.

 

 

Related blogs

If you liked this article, you might like these ones too:

 

Topics: Calibration process, Digitalization

CMMS calibration module or dedicated calibration software?

Posted by Heikki Laurila on Apr 26, 2023
CMMS-and-calibration-software

When your computerized maintenance management system (CMMS) already has a calibration module, why would you buy dedicated calibration software?

It’s a fair question and one that we frequently get asked! The reasons can vary, depending on the application. Are you maybe comparing apples to oranges?

There are different kinds of dedicated calibration software products out there, each with somewhat different functionalities. Although they have the same name, they are all different in one way or another.

Does integrating dedicated calibration software with your CMMS bring you the best of both worlds, or just a big mess?

In this article we look at the various setups and compare these different scenarios.

If this sounds interesting, please keep on reading.

 

Table of contents

 

CMMS and calibration

CMMS, asset management systems, and enterprise resource planning (ERP) systems include a variety of different functionalities and are implemented for a certain purpose. They are not designed specifically for calibration management. Although they have some calibration functionality, this can be quite limited.

Sure, there can be an add-on calibration module with basic functionality for calibration management, but these kinds of systems do not have the same level of sophistication as dedicated calibration software designed specifically for the purpose of calibration.

Sometimes these add-ons still require manual data entry methods such as a pen and paper to document calibrations! C’mon, this is the 21st century!

 

The problem with pen and paper

With digitalization becoming the norm in industry, you could be forgiven for thinking that calibration is already taken care of by the calibration module of your CMMS. But, as mentioned above, calibration results may still need to be documented manually using pen and paper. The papers are then archived, or the calibration data is subjected to another error-prone manual step – entering it into the calibration module using a computer keyboard.

In the worst-case scenario the calibration data is not stored digitally in the CMMS at all and may simply be scanned. This brings further limitations as you can’t analyze any data from a scanned document.

This is also the case if the data is stored in a paper archive. For example, you can’t easily check the detailed results of the previous calibrations performed. Also, it’s very difficult to find data for regulatory audits. This process also brings with it all the data quality and integrity issues related to manual data entry. The errors within manually completed files don’t disappear if you scan them or manually transcribe the results from the paper to the CMMS, which as mentioned above, can introduce further errors.

Learn more about the consequences of manual data entry in this blog: Manual Data Entry Errors

 

Also, we need to consider reverse traceability. This means that if a reference standard (calibrator) is found to be out of specifications during a calibration, you need to investigate where that reference standard has been used. It may have been used for tens or even hundreds of calibrations, and as a result these may all be considered suspect. If all your calibration certificates are in paper format or scanned, it is extremely laborious and time-consuming to go through them to perform reverse traceability. Advanced calibration software would allow you to generate a reverse traceability report at the touch of a button.
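
To make the idea concrete, here is a minimal Python sketch of such a reverse traceability query over hypothetical calibration records; real calibration software does this against its database and with far more detail.

```python
from datetime import date

# Hypothetical calibration records: which instrument was calibrated, when,
# and which reference standard (calibrator) was used.
records = [
    {"instrument": "PT-1021", "date": date(2023, 3, 14), "reference": "MC6-SN12345"},
    {"instrument": "TT-2040", "date": date(2023, 4, 2),  "reference": "MC6-SN12345"},
    {"instrument": "PT-3310", "date": date(2023, 4, 20), "reference": "MC6-SN67890"},
]

def suspect_calibrations(records, reference_id, since):
    """List calibrations performed with a given reference standard since a date."""
    return [r for r in records if r["reference"] == reference_id and r["date"] >= since]

for r in suspect_calibrations(records, "MC6-SN12345", date(2023, 1, 1)):
    print(r["instrument"], r["date"])
```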

Beyond data analysis – or the lack of it if you’re using paper files or scanned documents – there are other, often overlooked, ways in which dedicated calibration management software can help your business shine.

  1. Sustainability – You might have invested significant time and money in initiatives to create more sustainable working practices, but have you thought about how calibration can make your business more sustainable? A robust calibration process using dedicated software improves efficiency, eliminates paper, and can even extend the lifespan of your equipment.
  2. Employee wellbeing – Making calibration tasks simpler and less stressful for your technicians can make a huge difference to their wellbeing and can even mark you out as an employer of choice in what is an extremely competitive labor market.
  3. Product quality – The downstream impact of data integrity or other issues within the calibration processes can compromise the quality of your products. Dedicated calibration software helps to avoid this problem by maintaining data integrity.
  4. Safety – If you’re making a product that is consumed, for example food or medicine, dedicated calibration management software can give you greater confidence that your product is safe because you can rely on the fact that your in-process measurements are accurate. This is particularly important in cases where a product cannot be destructively tested to confirm it is safe.

 

CMMS vs. dedicated calibration management software

Let’s take a more detailed look at how the calibration module in a CMMS stacks up against dedicated calibration management software such as  Beamex CMX.

  1. Functionality: Compared to a CMMS module, dedicated calibration management software typically offers more advanced functionality for managing calibration procedures, such as automated calibration scheduling, calibration task management, guided instructions, reference management, calibration uncertainty calculations, reporting, and more.
  2. Customization: Dedicated calibration management software is typically highly customizable, meaning you can configure it to your specific calibration needs. This can include creating custom calibration procedures, configuring workflows, and integrating the software with other systems. A calibration module in a CMMS typically is more limited in terms of customization options. If you do want to customize your CMMS module with additional calibration functionality, it will be costly to implement and maintain. What’s more, you might not even know what kind of functionality needs to be added. Dedicated software from a reputable provider will take into account the current and future requirements of a large customer base and leverage emerging future technologies, adding new features and functionalities as part of regular updates.
  3. Integration: While both types of software can integrate with other systems, dedicated calibration management software may offer more seamless integration with other laboratory or process control systems, such as electronic documentation management systems, laboratory information management systems (LIMS), or ERP systems. A CMMS calibration module may only offer limited integration options.
  4. User interface: Dedicated calibration management software typically offers a user-friendly interface specifically designed for managing calibration processes, which can help to streamline workflows and improve user productivity. A calibration module in a CMMS on the other hand may have a more general user interface that is designed to support a range of maintenance management tasks.
  5. Cost: Dedicated calibration management software may be more expensive than a calibration module in a CMMS as it offers more advanced functionality and customization options. However, you should find that the additional cost is justified by the improved functionality and productivity gains that dedicated software offers.

 

 

Calibration software - manual or automatic?

Not all products that are called calibration management software solutions are the same or offer the same functionalities.

The two main different categories are calibration software where data is entered into the system manually and software that communicates with documenting calibration tools. Let’s look at these two categories in more detail.

 

1. Calibration software with manual data entry

With these types of systems, you input the data manually with a keyboard. If you don’t carry a laptop with you in the field, then you need to manually document data during the calibration and then input it into the system when you’re back in the office – meaning there are two manual steps in your calibration process!

While this kind of calibration software may offer a lot of functionality once you have the results stored digitally in the system database, including data analysis, the original source data may have gone through multiple manual entry steps before ending up in the system. So, the data may have accidental (or even intentional) errors, and the data quality and integrity could be questionable.

Analyzing non-reliable data is a waste of time and may even be misleading, leading to wrong decisions. “Crap in, crap out”, as they say.

So, in the end using this kind of calibration software is not much better than using a CMMS calibration module.

Learn more about the consequences of manual data entry in this blog: Manual Data Entry Errors

 

2. Calibration software that communicates with documenting calibrators

With this kind of software there is no need for any manual data entry during the calibration process. Your calibration tools automatically store the calibration data digitally during the calibration. This eliminates the risk of manual error and means that the data cannot be tampered with. The calibrator can even be configured to require an electronic signature from the person who performed the calibration. This is important in highly regulated industries such as the pharmaceutical industry, where data integrity is vital.

Learn more about: Data Integrity in calibration processes, or about Common Data Integrity pitfalls in calibration processes.

 

A documenting calibrator may even be able to perform the calibration fully automatically, saving time and ensuring a repeatable calibration process every time. Once the calibration is complete and the data is stored in the calibrator, the results can be transferred from the calibrator’s memory to the calibration software, again fully digitally.

Advanced documenting calibrators can also perform an automatic pass or fail decision straight after the calibration. This may sound like a small thing, but what if you have a square-rooting pressure transmitter ranging from -0.1 to 0.6 bar and you get an output of 12.55 mA at 0.1 bar input pressure, while your error limit is 0.5% of span – does that sound like a pass or a fail?

It’s not always easy to calculate in your head – or even with a calculator. Sure, if you have a 0 to 100 °C temperature transmitter and the error limit is ±0.5 °C, it is very easy. A smart documenting calibrator, like a Beamex documenting calibrator, will automatically tell you if each point is a pass or a fail.
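
Here is a quick sanity check of that example as a Python sketch. It assumes the common 4-20 mA square-root transfer function and expresses the error as a percentage of the 16 mA output span.

```python
def ideal_output_ma(pressure_bar, lrv=-0.1, urv=0.6):
    """Ideal output of a 4-20 mA square-rooting transmitter (assumed standard
    square-root characterization over the calibrated range)."""
    ratio = (pressure_bar - lrv) / (urv - lrv)
    return 4.0 + 16.0 * ratio ** 0.5

measured_ma = 12.55
ideal_ma = ideal_output_ma(0.1)                        # about 12.552 mA
error_pct_of_span = (measured_ma - ideal_ma) / 16.0 * 100
print(f"ideal: {ideal_ma:.3f} mA, error: {error_pct_of_span:+.3f} % of span")
# The error is well inside the +/- 0.5 % limit, so this point is a pass.
```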

Using this kind of calibration management software together with documenting calibrators offers significant advantages over basic calibration software or your CMMS’s calibration module. It will save you a lot of time in calibration, and ensures you have high-quality data available for analysis.

But even with the most advanced calibration software, it’s important to remember that there is still some manual work to do in the process, starting with generating the work order in your CMMS and finally closing it.

But don’t worry, there is a better way to do that too. You can also digitalize and automate this step in the process if you integrate your CMMS and your calibration software. More on that next.

 

Integration – the best of both worlds!

Creating an end-to-end digital flow of calibration data throughout your business is easily achievable by integrating your CMMS with advanced calibration management software, such as Beamex CMX, that can communicate with documenting calibrators.

Many of our customers have found a better way with this kind of integration.

In practice, in the simplest case this integration works like this: work orders are generated in the CMMS and automatically sent to your calibration management software, and when the calibration is complete in the software, it automatically notifies the CMMS to close the work order.
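
Conceptually, the round trip looks like the sketch below. All function and field names here are hypothetical, used only to illustrate the flow; actual integrations are built on the vendors' documented interfaces.

```python
# Hypothetical sketch of the work-order round trip between a CMMS and
# calibration management software.

def send_work_order_to_calibration_software(work_order):
    """CMMS side: push a new calibration work order to the calibration software."""
    print(f"Work order {work_order['id']} sent for {work_order['instrument']}")

def on_calibration_completed(result):
    """Calibration software side: notify the CMMS so it can close the work order."""
    status = "pass" if result["passed"] else "fail"
    print(f"Closing work order {result['work_order_id']} ({status})")

send_work_order_to_calibration_software({"id": "WO-1001", "instrument": "PT-1021"})
on_calibration_completed({"work_order_id": "WO-1001", "passed": True})
```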

Read more on why integrate and how integration can automate your work order and calibration results handling on our website >>

 

What our customers say on integration

"With this software integration project, we were able to realize a significant return on investment during the first unit overhaul. It’s unusual, since ROI on software projects is usually nonexistent at first."

Jody Damron, Business Analyst, Salt River Project, US

Read the full Salt River Project case story >>

 

Beamex calibration ecosystem

beamex calibration ecosystem

The Beamex calibration ecosystem is a comprehensive solution for calibration management that includes various hardware and software tools designed to help industries achieve better quality and reliability in their production processes. It consists of three main components: calibration software, calibration equipment, and calibration services.

The calibration software provides a user-friendly interface for managing calibration procedures, storing calibration data, and generating reports. It allows for customizable workflows, automated documentation, and integration with other systems.

The calibration hardware includes portable calibrators, bench calibrators, and pressure controllers that are designed to perform accurate and reliable calibrations in the field or laboratory. These devices are easy to use and feature advanced functions such as automated calibration, data logging, and wireless communication.

Calibration services are also offered by Beamex, which include on-site calibration, instrument maintenance, and training. The services are provided by qualified technicians who are experts in their field and can provide tailored solutions to meet the specific needs of each customer.

Overall, the Beamex calibration ecosystem provides a complete solution for calibration management that can help industries improve their processes, reduce downtime, and comply with regulatory requirements.

Learn more about the Beamex calibration ecosystem on our website >>

Talk with Beamex experts to find the best solution for you >>

 

Related blog posts

If you liked this post, you might also like these:

 

Finally, I want to thank my colleague Aidan Farrelly for his great LinkedIn post that sparked the idea for this post. Also, thanks Aidan for your comments while I was writing this.

Thanks,

 

Topics: Calibration process, Calibration software, Calibration management, Digitalization

The most common calibration-related FDA warnings to pharma companies

Posted by Heikki Laurila on Feb 13, 2023

fda-warning-pharma

 

As consumers of the products from pharmaceutical companies, we all want to be sure that we can trust the products they produce. Fortunately, pharmaceutical companies are heavily regulated, and their operations are being constantly monitored. The US Food and Drug Administration (FDA) is the main authority that regularly audits pharma companies globally to ensure they are compliant. 

Every year, the FDA gives hundreds of warnings to pharma companies that fail to comply with regulations. So even they are not perfect! Most of these warnings are not related to calibration, but some are. I analyzed the warnings related to calibration and in this article, I list the most common calibration-related warnings issued. If that sounds interesting, please continue reading.

Table of contents

I recommend browsing through the whole article, but you may also jump straight to the underlined main topic:

 

The FDA in brief

The FDA (https://www.fda.gov/) is a US government agency protecting public health by ensuring the safety, effectiveness, and security of drugs, biological products, and medical devices.  

The FDA audits pharmaceutical companies and ensures that they operate according to the relevant regulations. Although the FDA is a US authority, any pharma company that wants to sell products in the US needs to comply with FDA regulations and will be subject to their audit processes. Therefore, in practice the FDA requirements are global.

If the FDA finds evidence of non-compliance during an audit, they will issue a Form 483, which may lead to a warning letter. The FDA even has the power to shut down a plant if they find serious non-compliance.

So, it’s pretty obvious that the pharmaceutical companies are on their toes when an FDA auditor arrives.

In addition to the FDA, there are other authorities auditing pharma companies, including the European Medicines Agency (EMA) and the UK’s Medicines & Healthcare Products Regulatory Agency (MHRA), plus national agencies in other countries. So, pharmaceutical companies are regularly audited by several authorities besides the FDA.

Hopefully, the Multilateral Recognition Agreements (MRA) between the authorities keep developing so that pharma companies are not subject to multiple audits by different authorities.

 

Warning letter

As mentioned, when an FDA auditor investigates a pharmaceutical company, any observations of non-compliance are written into a Form 483. Depending on the criticality and/or number of the observations, this can lead to a warning letter. The pharmaceutical company then has to provide a response to the letter and perform sufficient corrective actions to address the observations.

These warnings are publicly available and can be found on the FDA website at this link: https://www.fda.gov/inspections-compliance-enforcement-and-criminal-investigations/compliance-actions-and-activities/warning-letters

There are currently almost 3,000 warning letters listed on the FDA site, dating from 2019 until now. 

For the last three years, there have been around 600 warning letters issued annually, distributed as shown in the below image:

Warning letters per year

 

The most common general warnings 

As there are so many letters it gets complicated to analyze them all, but generally available information lists the most common reasons for warnings as:

  • Absence of written procedure, or not following written procedures
  • Data records and data integrity issues 
  • Manufacturing process validation – Lack of manufacturing controls
  • False or misleading advertising
  • Issues with environmental monitoring

Many are generally referred to as “Failure to meet Good Manufacturing Practices”.


The most common calibration-related warning letters

This article is not about all the warning letters, only the ones that are somehow related to calibration. Needless to say, I am mostly interested in calibration-related topics 😊 

So, I investigated all the warnings for the last three years: 2020, 2021, and 2022. There have been almost 2,000 warning letters issued during those three years!

If we look at how many warning letters are related to calibration, we can see that it is actually quite a modest share of the warnings. It seems that about 2% of the warning letters include comments on calibration.

While most of these companies are located in the US there are also some from South America, Europe, and Asia.

Obviously, some of the generic warnings can also have calibration included, although not separately mentioned. These include maintenance-related issues, written procedures missing, and issues with data records and manufacturing controls, to mention a few. 

I analyzed all the warning letters that have calibration mentioned in them. Let’s look at what kind of calibration-related topics seem to be the most common.

I grouped the calibration-related warnings into the following categories; their shares are shown in the list below:

  • Inadequate calibration program: 33%
  • Failed to routinely calibrate instruments: 19%
  • Lack of calibration records: 16%
  • Use of non-calibrated calibration instruments: 11%
  • Insufficient calibration to prove required accuracy: 5%
  • Test equipment not calibrated for the required range: 5%
  • All others: 11%

 

The image below illustrates the most common calibration-related warnings.

Calibration related warning letters

 

The top three reasons for calibration-related warnings – and how to avoid them

Let’s look at the top three reasons and how to avoid them.

1. Inadequate calibration program

As we can see, the most common reason is “Inadequate calibration program”, which accounts for a third (33%) of the cases. In these cases, the company has not had an adequate calibration program that would document how and when each instrument should be calibrated. 
Creating and documenting a calibration program is a fundamental job for calibration-responsible staff. It naturally gets a bit more demanding to make sure it is “adequate” for the FDA auditor and fulfils all the relevant FDA regulations.


2. Failed to routinely calibrate instruments

The second most common reason is “Failed to routinely calibrate instruments”, with 19% of the cases. In these cases, the company has simply not calibrated all instruments as they should have done.
The best cure for this one is to make sure you have an automated system that alerts you when instruments should be calibrated and ensures that all calibrations are done.


3. Lack of calibration records

The third most common reason is “Lack of calibration records”, with 16% of the cases. This means the company has no evidence that calibration is being done. This one is quite similar to the previous type of case, but in these cases the company has somehow been able to convince the auditor that they have done the calibration but they don’t have records to prove it, such as calibration certificates.
The cure for this one is to make sure your calibration system stores all calibrations digitally so you can pull out the records easily any time an auditor wants to see them.

 

How Beamex can help

We have worked and partnered with many of the top pharmaceutical companies for a long time. Many of the world’s leading pharmaceutical and life sciences companies rely on the Beamex calibration ecosystem, which is designed to help customers achieve their calibration-related goals in line with regulations, like those from the FDA.

A lot of functionality developed for our calibration ecosystem has been to meet the requirements of pharmaceutical companies. 

For pharmaceutical companies we offer, for example:

  • Calibration management software with many features developed especially for pharmaceutical companies 
  • Calibration equipment for field and workshop calibration
  • Various expert services tailored for pharmaceutical companies

If you are in the pharma business, contact us to learn more about how we can help you to meet the calibration-related FDA requirements: 

Contact Beamex pharma calibration experts!

 

Related reading

If you found this article interesting, you may also like these:

View Beamex pharmaceutical industry page >>

 

Download your copy of the Calibration Essentials Software eBook to learn more about calibration management and software:

Calibration Essentials- Software eBook

 

 

Topics: Calibration in pharmaceutical industry

Working workshop wonders with Endress+Hauser [Case Story]

Posted by Tiffany Rankin on Oct 19, 2022

 

Equipping a full-service calibration workshop for Endress+Hauser in Canada

A customized Beamex solution including hardware and software sits at the heart of Endress+Hauser’s state-of-the-art Customer Experience Centre in Ontario. In this short blog we explore the critical role Beamex solutions are playing as part of the center’s full-service calibration laboratory.

Endress+Hauser is a global leader in measurement instrumentation, services, and solutions for industrial process engineering customers from a wide variety of industries including life sciences, mining, chemicals, food and beverage, and water and wastewater treatment.

Read the story in full >>

 

The best of the best in calibration technologies

In 2021 Endress+Hauser Canada opened its state-of-the-art Customer Experience Centre in Burlington, Ontario. The center is home to a full-service calibration laboratory that brings together the best of the best in calibration technologies, reflecting Endress+Hauser’s own exacting standards.

The company has been collaborating with Beamex since 2015, so when it came to equipping the new laboratory, Beamex was the natural choice to provide the necessary high-performance calibration equipment.

The Beamex Canada team worked with colleagues at the Beamex HQ in Finland to design a custom solution for Endress+Hauser Canada comprising:

 

Here's a very short video summary of the story:

 

Speed and efficiency get a boost

Endress+Hauser Canada’s calibration needs typically involve pressure and temperature calibrations. With the Beamex solution enabling fully automated calibration, pressure calibrations take just 30 minutes instead of 45 minutes or even an hour. This means more calibrations can be done in the same amount of time and frees up technicians to work on other tasks while calibrations are being performed.

Martin Bedard, Calibration and Program Supervisor for Endress+Hauser Canada: “The quality of Beamex equipment is higher than that of the competition, and the customer service is also very good. The Care Plans give us a great customer experience, with a turnaround time to Finland of just five to seven days, which is often faster than using local laboratories.”

 

Download the full customer success story

 

220429-Endress_Hauser-405

 

Related content

 

Topics: Workshop calibration, Calibration, Calibration process, Case Story

Ensuring sustainable management of process safety for chemicals

Posted by Monica Kruger on Sep 22, 2022

Ensuring sustainable management of process safety for chemicals

In the chemicals industry, safety is priority number one. But how do you ensure safety in a sustainable way? When it comes to calibration, the answer is a modern, digitalized, and automated solution.

There’s a reason safety is so important in the chemicals industry. If something goes wrong, it’s not just an issue for the plant and its employees – it can also impact people living in the surrounding area. This is one of the reasons that chemicals are so strictly regulated. 

Chemical plants need to maintain strict quality management and hold detailed product information. Chemical process companies must be able to capture data from operational processes accurately in order to be prepared for product recalls. In case of audits, all of this data must be easy to find.

Learn more by downloading the full article here:

Improving efficiency in the chemical industry

 

How automation helps

This is where automated and digitalized calibration solutions come into play. All of the instruments that are part of this safety process need to be accurately calibrated to ensure they’re working properly. However, in many plants this process still relies on paper certificates. While paper may feel reliable and tangible, there is a substantial risk of human error in the process. Each calibration typically has 20 data points or more, so even if the error rate for writing down results is only 1%, this means one in every five certificates is likely to contain faulty data. 
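

If you want to check that figure yourself, here is a minimal Python sketch. It simply assumes that each data point has an independent 1% chance of being written down incorrectly; the variable names are ours for illustration, not part of any Beamex tool:

```python
# Rough estimate of how often a handwritten certificate contains at least one error,
# assuming each data point has an independent 1% chance of being recorded incorrectly.
error_rate = 0.01              # assumed probability of a transcription error per data point
points_per_calibration = 20    # typical number of data points per calibration

p_faulty_certificate = 1 - (1 - error_rate) ** points_per_calibration
print(f"{p_faulty_certificate:.1%}")  # ~18%, i.e. roughly one certificate in five
```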

With automated calibration, results are captured automatically in a digital format and sent securely to the calibration management system. This gives 100% accurate, tamper-proof results. Even better, finding certificates is as simple as performing an online search instead of manually looking through mountains of binders full of paper.

 

A repeatable process brings sustainability

Another advantage of automated calibration is repeatability, which improves business sustainability. One challenge chemical plants face is the changing skill sets of the technicians who perform the calibrations. When this is combined with out-of-date test and measurement equipment, there is a genuine risk of instruments drifting out of tolerance.

Automated calibration helps solve this problem. Instead of varying in quality, every calibration is performed to the same highly accurate level because the calibrators offer step-by-step guidance to technicians. The process is also faster – by cutting manual steps such as filling in paper certificates or entering results into a database at the office, technicians can save 50% of the time needed for calibrations.

 

Ensuring safety

The repeatability, reliability, and accuracy of automated calibration also means better safety. This is because chemical plants can be sure that instruments critical for process safety are reliably calibrated and within tolerance. Well-designed calibration systems also enable technicians to include checklists as part of the calibration procedure – which is critical in ATEX environments and other scenarios where very clear procedures need to be followed to ensure safety.

 

Beamex calibration ecosystem

Beamex offers an automated calibration ecosystem that is composed of a unique mix of expertise and technology to help chemical companies sustainably improve process safety. Beamex solutions provide accurate measurements, reliable data, and traceability, which helps to reduce uncertainty and errors in calibration data. The end result is consistent calibration quality at your chemical plants.

Learn more about Beamex products and services on our website or contact your local Beamex representative:

 

Visit Beamex website

Contact Us

Beamex Worldwide Contacts

 

Related chemical articles

 

Related blog posts

Download your copy of the Calibration Essentials Software eBook to learn more about calibration management and software.

Calibration Essentials- Software eBook

Topics: Calibration software, Calibration management, Calibration in chemical industry

What is operational excellence and how can calibration help achieve it?

Posted by Heikki Laurila on Aug 31, 2022

What is operational excellence?

In this article, we’re going to take a closer look at a topic that’s talked about a lot but isn’t always that clear: operational excellence. We’ll briefly discuss the history of the concept, what it means in practice, and how it applies to process industries – including the many benefits it can unlock. We’ll also set out how calibration can play a role in enabling operational excellence in your process industry plants. Read on to find out more.

 

Table of contents

 

What is operational excellence?

As defined by the Institute for Operational Excellence, operational excellence is achieved when “each and every employee can see the flow of value to the customer and fix that flow before it breaks down.” What this means in practice is that a company with operational excellence at its core is able to provide the best possible value to their customers. To do this, the focus is on the quality of the product or service and the process for creating and delivering it to the customer.

Operational excellence applies to every level of an organization and empowers people at all levels to make changes to ensure the proper process flow is continuously improved and does not break down. This leads to better execution of a company’s strategy, unlocking the benefits of operational excellence – including improved quality, efficiency, and revenue.

 

A brief history of operational excellence

The model of operational excellence was created by Dr. Joseph M. Juran in the 1970s when teaching Japanese business leaders about how to improve quality. His methods were further expanded in the 1980s in the US in response to the “quality crisis” where US companies were losing market share to Japanese companies.

The concept has continued to develop and now operational excellence encompasses methodologies like lean manufacturing and Six Sigma to help bring about the desired state of value flow with a focus on quality.

 

Operational excellence principles

There are several approaches that can be used to achieve operational excellence. We will look at two of the main ones, the Juran model and the Shingo model, as they both offer useful insights. The Juran model (named after the creator of operational excellence) has five components that help build operational excellence in an organization. These are:

  • Understanding the guiding principles that lay the foundation for excellence, which includes embracing quality
  • Improving the customer experience
  • Creating an infrastructure that engages employees to make improvements by using the right methods and tools
  • Creating process improvement teams to drive process efficiency
  • Ensuring leadership and workforce engagement

 

The Shingo model, created by Dr. Shigeo Shingo (a Japanese engineer who is considered one of the foremost experts on manufacturing practices), is based on ten core principles. These are:

  • Principle #1: Respect every individual – employees at all levels of an organization should feel empowered to make changes and improvements.
  • Principle #2: Lead with humility – leaders should inspire employees to undertake and execute critical tasks.
  • Principle #3: Seek perfection – even though it’s not possible to be perfect, the organization should always strive to improve in order to avoid complacency.
  • Principle #4: Embrace scientific thinking – the scientific method should be used to examine and improve operational processes.
  • Principle #5: Focus on process – process is the key to creating value flow from company to customer; by focusing on the process you can see if it is performing as it should be.
  • Principle #6: Assure quality at the source – quality is the key focus, and should be an integral part of all activities.
  • Principle #7: Improve flow and pull – companies can maximize the flow of value through efficient processes that minimize waste.
  • Principle #8: Think systematically – the entire system should be seen as one flow where all departments are working together to create customer value.
  • Principle #9: Create constancy of purpose through clear company goals and a vision with a clear target.
  • Principle #10: Create value for the customer – this is the key takeaway. The business exists to bring value to the customer.

 

These ten principles underlie the four key areas needed to enable operational excellence in an organization: cultural enablers, continuous improvement, enterprise alignment, and results.

 

Operational excellence methodologies

The methodologies for achieving operational excellence include lean manufacturing, Six Sigma, and kaizen.

The core idea behind lean manufacturing is cutting waste, resulting in more efficient processes. Doing so requires the following steps to be taken:

  • Specifying the value wanted by the customer
  • Identifying the value stream for each product and finding wasted steps
  • Creating continuous flow for the product along all steps
  • Introducing pull between all the steps to enable flow
  • Striving for perfection to reduce the number of steps needed and how long each step takes

Source: Lean Thinking, Womack and Jones, 2003

 

Six Sigma, which was first introduced at Motorola, aims to improve quality by identifying areas where defects may occur in a process and removing them. This is done through systematic quality management. Lean manufacturing and Six Sigma have also been combined to create Lean Six Sigma, which brings the focus on process, flow, and waste into one system.

Kaizen, often translated as “continuous improvement”, is a Japanese methodology focused on making continual incremental changes with the goal of improving processes. Improvements can be suggested and implemented by any employee across the organization. The basic idea is that no process is ever perfect and thus can always be improved by making gradual changes.

All of these methodologies have a focus on quality and process while eliminating waste, helping to create operational excellence in an organization.

 

The benefits of operational excellence 

The benefits of achieving operational excellence are many. The number one benefit is that it enables an organization to achieve concrete business results more quickly. This is because employees at all levels of an organization are able to make decisions and execute changes that result in better value flow to the customer – speeding up improvements and ensuring constant creation of value. An operational excellence mindset, with its focus on flow and value, can also lead to better quality, efficiency, on-time delivery, and overall profitability.

 

Best practices and how to achieve operational excellence

Achieving operational excellence is a multistep process that requires effort from all levels of an organization. 

  •  Having and communicating a clear strategy that is based on goals and key performance indicators is critical. 
  •  Choosing and implementing the right methodology for your goals – such as lean manufacturing, Six Sigma, or Lean Six Sigma – helps to ensure your focus is on quality and reducing waste. 
  •  Training and education is needed to help employees understand their role in achieving operational excellence. 

 

Working with an expert who understands operational excellence and how to roll it out can also be helpful, as is looking at successful industry case studies. Some major companies using operational excellence are: 

 

What does operational excellence mean for process industries?

The focus on quality and flow that operational excellence unlocks is absolutely critical for process industries. After all, process industries have to manufacture products for customers to exacting quality standards. Efficiency of operations, along with safety, is also key. By helping to ensure quality and efficiency with a smooth flow of value across all processes, operational excellence helps production plants to be more profitable and resilient.

The benefits of operational excellence for process industries include:

  • Better process efficiency from fewer steps and less waste
  • Better profitability through lower expenses
  • More consistent production through a focus on quality
  • More resilient plants from optimized processes
  • A decreased risk of shutdown from optimized processes

 

 

How calibration can help enable operational excellence

In process industries, calibration plays an important role in operational excellence. A good calibration process ensures processes work as designed and plays an important role in ensuring the quality of the end product. The efficiency of the calibration process is an important element of overall operational efficiency and greatly depends on the type of calibration process.

 

What is calibration?

Before discussing how calibration can contribute to improved operational excellence, let’s very briefly summarize what calibration is. Calibration is a documented comparison of the device to be calibrated against an accurate traceable reference device (often referred to as a calibrator). The documentation is commonly in the form of a calibration certificate.

Unbroken and valid traceability to the appropriate national standards is important to ensure that the calibration is valid. As each calibration is only valid for a limited period, regular recalibration of all the standards in the traceability chain is required.

It is vital to know the uncertainty in the calibration process in order to be able to judge if the calibration result was within set tolerance limits and if it was a pass or fail. Learn more in our blog post on what calibration is.

 

Reasons for calibrating

Aside from enabling operational excellence, there are various reasons to perform calibration. All measurement instruments drift over time, meaning their accuracy deteriorates and regular calibrations are required. In the process industry, this fact is directly linked to the quality of the end product. In many industries, such as the pharmaceutical industry, regulatory requirements set tight rules for the calibration of critical process instruments. Likewise, quality systems set requirements for calibration.

As with many other things, money is also an important reason. In many cases money transfer depends on measurements, so the accuracy of the measurements directly affects how much money is transferred. In some processes, the safety of both the factory and its employees, as well as that of customers or patients who use the end product, can be the main driver for calibration.

 

Calibration interval

To maintain the traceability of all your process measurements, a valid unbroken traceability chain needs to be maintained. This means regular recalibrations at all levels of the traceability chain –  not only all the process measurement instruments, but also the working standards and reference standards (or calibrators).

Finding the proper calibration interval is important. If you calibrate too often, you end up wasting resources. But if you calibrate too infrequently, you face the risk that instruments will drift outside of set tolerances – and in many cases that can have serious consequences.

This means companies are constantly balancing risk against wasted resources. A proper analysis of calibration history and calibration interval is key, and finding the right sweet spot helps to contribute to operational efficiency.

 

Digitalizing, streamlining and automating the calibration process – finding a better way

When we realize calibration’s role in operational excellence, we understand the importance of making calibration processes more efficient – how can we produce less waste and do more with less?

At many industry sites, thousands upon thousands of calibrations are carried out annually. Streamlining those processes and saving time on every calibration can save a huge amount of money and have a big impact on the bottom line.

One of the main opportunities for time saving is to ditch manual calibration processes – typing or using pen and paper to document things – and instead move to a modern digitalized world where the calibrator automatically stores the calibration results in its memory, from where they can be digitally uploaded to calibration management software. Not only does this digitalized and paperless calibration process save a lot of time, it also eliminates all the errors related to manual data entry. Digitalization also dramatically improves the quality of calibration data. And given that analysis and decisions are based on data, it’s clear that data should be of high quality.

Streamlining calibration processes with the help of digitalization is one major contributor to operational excellence. As with any process, when working to improve operational excellence there is a constant quest to find better ways of doing things. If the calibration processes are very outdated, relying on manual documentation and lacking automation, then it’s possible to make a major leap in excellence by moving to digitalized and automated processes. After that is done, the next step is to constantly find small improvements.

Finding best practices and consistency in calibration processes and methods is important, and you should constantly work to evolve and improve these methods over time. This is even more important, and has a bigger impact, in large multi-plant organizations where processes need to be kept uniform.

Make sure you leverage automation in calibration whenever possible, as it is a great way to improve efficiency. Consistent, automated processes will also improve the quality of data by eliminating the risk of human error, and they make it quicker and easier for new employees to get up to speed while maintaining a high quality of work.

 

Conclusion 

In summary, operational excellence is an organizational mindset based on set principles and methodologies that aims to improve the flow of value to a customer. A focus on quality and eliminating waste results in greater efficiency and profitability in process industries. Calibration can help unlock operational excellence by moving to a modern digitalized process that reduces the time needed for calibrations and improves data quality. Better data can be analyzed to find further efficiency improvements, not just for the calibration process but also for production plant processes. The end result is improved operational excellence for process industries.

 

 

Experience a better way for your business with Beamex

Beamex offers a calibration ecosystem, a unique combination of calibration technology and expertise, that helps improve efficiency, ensure compliance, increase operational safety, and achieve operational excellence.

Please contact us and let our experts help you find a calibration system that improves your operational excellence:

Contact Us

 

 

You might also like

 

Download your copy of the Calibration Essentials Software eBook to learn more about calibration management and software.

Calibration Essentials- Software eBook

 

 

Topics: Calibration, Calibration process, Digitalization, Operational Excellence

How automated calibration can help service companies be more competitive

Posted by Monica Kruger on Jul 06, 2022

How automated calibration can help service companies be more competitive

Service companies that perform calibrations for the process industries operate in a challenging environment. Not only is there a lot of competition, but contracts for customers are based on estimates, meaning that every additional hour of work directly affects the bottom line. Finding and retaining skilled calibration technicians is also a challenge.

So, what can service companies do to make their quotations more accurate and ensure work is carried out as consistently and efficiently as possible? The answer is to automate the calibration process.

 

The problems with pen-and-paper calibration

To see why automation helps, first we need to look at the way calibrations are currently conducted using pen and paper. Paper-based calibrations are time consuming, with 40–50 data points needing to be filled in by hand for each calibration. Because it relies on manual data entry, paper-based calibration is also prone to errors. It’s commonly accepted that the typical error rate in manual data entry is around 1%. While this might not sound like a lot, it can have major implications for the accuracy of the calibration process. The end result of manual processes is that as many as every second calibration certificate could contain an error.

Paper certificates also negatively affect transparency – when using them, it’s hard to share calibration results with end customers in a timely fashion. Paper certificates also require warehousing and are not easy to find when an audit is required – let alone if the client wants to use the calibration data to improve their process efficiency through trend analysis.

 

You can go paperless today

Beamex’s automated calibration solution combines software, hardware, and calibration expertise to deliver an automated, paperless flow of calibration data with a minimal requirement for manual data entry. The major benefit here is that an automated process cuts the number of steps involved in the calibration process, potentially saving up to 50% of the time it takes. Even shaving just 15 minutes off the time needed to perform a calibration, plus an additional 15 minutes due to not having to manually enter results into a database, adds up to huge time savings.

In addition to saving time and enabling a more efficient process, automated calibration helps to avoid mistakes typically associated with manual data entry – thus improving the quality and integrity of the calibration data and making sure your customers are happy with the work you’re doing for them.

Modern multifunction calibrators from Beamex also provide user guidance so that even less experienced technicians can carry out calibrations quickly and reliably. Because the process is highly repeatable, making quotations becomes easier as you will know how much time is needed for each calibration.

Finally, with automated calibration you can offer your customers new services based on data analysis. Because all the calibration data is in a digital format and easily searchable, you can analyze your customers’ calibration processes and data to provide improvement recommendations – differentiating your service company offering.

 

Example of ROI calculation

The average cost to a service company for an instrument technician is around €50 per hour, including salary, benefits, overheads, and so forth.

If a technician carries out 2,000 calibrations a year and it takes them on average 15 minutes to write up a calibration certificate for each calibration, then writing certificates costs a service company €25,000 per year per technician.

Assuming it takes another 15 minutes to manually enter that data into the database, then entering data costs another €25,000 per year per technician.

Automating this process would save 1000 hours of work per year per technician and result in significant cost savings.
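

Here is the same back-of-the-envelope calculation as a small Python sketch, using the assumed figures from the example above; adjust them to match your own hourly costs and calibration volumes:

```python
# Back-of-the-envelope version of the ROI example above.
# All figures are the assumptions stated in the text, not measured data.
hourly_cost = 50                 # € per technician hour (salary, benefits, overheads)
calibrations_per_year = 2000
minutes_per_certificate = 15     # writing a certificate by hand
minutes_per_data_entry = 15      # typing the results into a database afterwards

hours_saved = calibrations_per_year * (minutes_per_certificate + minutes_per_data_entry) / 60
savings_per_technician = hours_saved * hourly_cost

print(hours_saved)               # 1000 hours per technician per year
print(savings_per_technician)    # €50,000 per technician per year (€25,000 + €25,000)
```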

 

automated calibration solution for service companies

 

Customer success testimonials

"Beamex's calibration solution is an ideal match with our needs as a service company. We now have a paperless process that increases our technicians’ productivity without sacrificing accuracy, so we can provide a leaner, more efficient service and our clients can expect certificates as soon as the work is completed. The support from Beamex means we can rely on everything to work as expected and provide our customers with the best possible service."  

Richard O’Meara, Contracts Manager, Douglas Calibration Services, Jones Engineering Group

 

"The Beamex Integrated Calibration Solution has allowed us to save up to 30% of the time spent on calibrations and the production of verification reports, while also giving us the option of editing standardized and personalized verification report frames to meet the specific requirements of our customers. By automating calibration routines, our technicians can focus on other tasks while still following procedures and ensuring the integrity of calibration results. Therefore, the technicians are more relaxed and our customers are more confident, especially since the process is fully digitalized and there is no risk of errors during data collection."

Laurent Flachard, Lifecycle Field Leader, Emerson, France

 

 

Read more about the benefits of automated calibration for service companies in our white paper. Download your PDF copy here:

How service companies can create a competitive advantage - Beamex White Paper

 

Beamex calibration solutions

Learn more about Beamex products and services on our website or contact your local Beamex representative:

 

Visit Beamex website

Contact Us

Beamex Worldwide Contacts

 

Related articles

If you found this post interesting, you might also like these service company-related articles:

 

Related blog posts

Other blog posts we would like to suggest to you: 

 

Download your copy of the Calibration Essentials Software eBook to learn more about calibration management and software

Calibration Essentials- Software eBook

 

Topics: Calibration, Calibration process, Calibration software, Calibration management

Stepping on the gas with the UK’s National Grid [Case Story]

Posted by Rita Patel on May 30, 2022

Stepping on the gas with the UK’s National Grid - Beamex blog post

 

Beamex’s automated, paperless calibration solution has helped National Grid streamline its calibration processes and save a huge amount of time and money in the process. In this blog we take a look at what can be achieved with a combination of the right hardware, software, and expertise.

Read the story in full >>

 

National Grid owns and operates Great Britain’s gas national transmission system (NTS). A critical part of this network is its 25 gas compressor stations, from which gas is fed to the eight distribution networks that supply domestic and industrial consumers across Britain. The volume of calibration data these stations generate is huge, with everything from pressure and temperature switches to flow transmitters and vibration sensors requiring regular calibration to ensure accuracy and reliability. 

But National Grid was facing some challenges:

  • islands of data siloed across disparate systems, making it difficult to monitor and accurately assess asset performance
  • no established, standardized process for recording and storing calibration data
  • no commonality in terms of the calibration hardware and software being used across the different stations.

“It was very challenging for us to build a true picture of how our assets were operating, optimize our ways of working, and build business cases for investment,” says Andy Barnwell, Asset Data & Systems Specialist, National Grid. “We were dealing with individual databases and time-consuming calibration processes with multiple steps.”

 

Here's a very short video summary of the story:

 

Automated, paperless calibration to the rescue 

Based on our knowledge of National Grid’s assets and operational process for calibration, Beamex was able to deliver a fully automated and integrated calibration solution that would improve both access to and visibility over asset data through a centralized database.

This package comprised the Beamex MC6-Ex Intrinsically Safe Field Calibrator and Communicator and Beamex CMX Calibration Management Software.

 

Sometimes it’s good to put all your eggs in one basket 

Centralizing all their asset data in a single system would provide National Grid with the ability to thoroughly interrogate their assets and make informed decisions about maintenance procedures and schedules. With the Beamex solution in place, National Grid have been able to cut the number of steps needed to perform a calibration, saving 15 minutes per device. This adds up to a time saving of over 4,000 hours per year – and a financial saving that runs into millions of pounds.  

National Grid plan to further expand their use of the CMX system to include the execution of maintenance inspection tasks using the Beamex bMobile application. “We need to make sure technicians’ work is compliant with our policy and procedures. When everyone is following an established, standardized process and all the information is kept in one place, their work is faster, the data it generates is far more reliable, and we can make better decisions about how to manage our assets,” explains James Jepson, Control & Instrumentation Systems Officer at National Grid.

 

A bright future ahead for a constantly evolving partnership

The Beamex solution has not been a hard sell at the compression station sites. “Beamex was very proactive in organizing online training for our teams, but the uptake was less than we expected because they are so easy to use that instead of asking basic questions during the training, our technicians were teaching themselves and quizzing the Beamex team on some fairly in-depth issues instead,” James Jepson says.

There is plenty more to come from this ever-evolving relationship, as Andy Barnwell explains: “The great thing about the Beamex solution from a development perspective is that it’s flexible and offers us a lot of options. We’ll be looking at how to further integrate Beamex solutions into our systems landscape and take advantage of even greater asset management functionalities as and when they are developed in collaboration with Beamex.”

“Automation is the future, and I can see a not-too-distant future when we will have a Beamex solution that will allow us to do everything remotely while still performing periodic on-site spot checks with highly accurate portable devices. The sky really is the limit,” James Jepson concludes.

 

Download the full customer success story

 

Read more Case Stories

To read more case stories like this, click the link below:

Read more case stories >>

 

Products highlighted

Learn more about the products highlighted in this story:

Beamex CMX Calibration Management Software >>

Beamex MC6-Ex intrinsically safe advanced field calibrator and communicator >>

Beamex bMobile calibration application >>

View all Beamex products >>

 

 

 

Topics: Calibration process, Case Story

Calibration Management and Software [eBook]

Posted by Heikki Laurila on Apr 25, 2022

Calibration software ebook Beamex

 

In this blog post, we want to share with you an educational eBook focusing on calibration management and calibration software.

This eBook is a handy collection of several software-related articles, some of which have been posted earlier on the Beamex blog.

Just give me the eBook now! >>

  

What you'll learn in this eBook

  • Why use software for calibration management
  • How calibration documentation has evolved
  • How software solves the problem of manual data entry errors
  • Why data integrity matters in calibration processes
  • The benefits of connected calibration maintenance management systems
  • How to automate your calibration management ecosystem
  • How an integrated calibration solution helps you do more with less

 

View a sample of the eBook before downloading >>

 

Download the eBook in pdf format by clicking the below button:

Download the software eBook here!

 

More calibration eBooks from Beamex

If you like to learn by reading eBooks, here are a few other recent calibration-related eBooks:

View all our eBooks and white papers >>

 

Beamex - your partner in calibration excellence!

Please take a look at our offering for calibration management software on our calibration software page.

Contact us to discuss how we can be your partner in calibration excellence!

 

 

Topics: Calibration software, Calibration management

Understanding Pressure Calibrator Accuracy Specifications

Posted by Heikki Laurila on Mar 21, 2022

Understanding pressure calibrator accuracy specifications.

 

Comparing the accuracy specifications of pressure calibrators can be a challenging task because different manufacturers specify accuracy in different ways. This means that you can’t simply compare the numbers given in the specification – you need to understand how these numbers are calculated and what they mean in practice.

In this blog post, I look at the different ways pressure calibrator accuracy specifications are presented, explain what they mean, and compare them, as well as take a brief look at what else you should consider when choosing a pressure calibrator.

 

Table of contents

Accuracy specifications

1. Percent of full scale

2. Percent of span

3. Percent of reading

4. A combined accuracy

5. A split range accuracy

Other things to consider

Long-term stability

Uncertainty vs. accuracy

TAR & TUR vs. calibration uncertainty

Environmental specifications

Additional components

Finally, it's not only about accuracy

Beamex solutions for pressure calibrations

Related blog posts

 

Download a pdf white paper of this article >>

 

Accuracy specifications

First, let’s look at the different ways accuracy specifications are provided by manufacturers and how to interpret them.

 

1. Percent of full scale

Percent of full scale (sometimes also written "% of full scale" or "% FS") is one of the most common ways to specify pressure measurement accuracy, and many process instruments use this kind of accuracy specification.

As the name suggests, you calculate the given percentage value from the full scale of the pressure range, with full scale being the maximum pressure the module can measure.

With percent of full scale, measurements have the same (absolute) accuracy (or error) throughout the whole range. This specification is obviously an easy one to calculate and understand.

It is best suited to technologies where the zero and full scale have a similar likelihood for error or drift, and where it is not possible for the user to easily make a zero correction during normal usage.

With most modern electrical pressure measurement devices, the user can perform zeroing of the pressure measurement by having the pressure measurement open to atmospheric (ambient) pressure and performing a zeroing function. This makes it easy for the user to correct for any zero errors before and after a measurement is taken. Therefore, % FS is not the most suitable accuracy specification for modern electric pressure measurement equipment.

 

Example

For clarification, let’s look at some examples with graphs for all the different specification methods, starting with the "percent of full scale" method.

  • Pressure range: 0 to 200 kPa
  • Accuracy specification: ±0.05 percent of full scale (%FS)

As we can see in the first image below, the accuracy specification is a flat line and remains the same in engineering units (0.1 kPa) throughout the pressure range whether we use %FS or kPa on our Y-axis.

But if we look at the accuracy specification as the accuracy of the measured pressure point (or accuracy as a "percent of reading" value), then the situation is different as the second graph below shows.

 

Percent of full scale accuracy specification.

 

Percent full scale with percent reading accuracy

 

The above graph shows the accuracy as a percent of reading on the Y-axis. This illustrates what happens in practice when you measure a certain pressure with this kind of module, showing how accurate that measurement is compared to the pressure being measured.

We can see that the relative error of the measured pressure increases quickly when measuring below full scale.
A %FS-specified pressure module should therefore mainly be used at pressures close to the upper end of its range, as it loses relative accuracy quickly at lower pressures. If you measure very low pressures, the error relative to the measured pressure can be huge.

For example, when measuring a pressure in the middle of the range (at the 50% point), the error relative to the reading is already double the error at the full-scale point. When measuring at 25% of the range, it is quadrupled!

If your pressure modules have a %FS accuracy specification, you end up needing several modules with different ranges, as the accuracy deteriorates quickly when measuring lower pressures.
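

As a quick numeric illustration of the example above, here is a small Python sketch using the same assumed values (0 to 200 kPa range, ±0.05 %FS). It shows how the constant absolute error translates into a growing error relative to the reading:

```python
# Percent-of-full-scale example from the text: 0–200 kPa range, ±0.05 %FS.
# The absolute error stays constant, but expressed as a percent of the reading
# it grows quickly at lower pressures.
full_scale = 200.0                     # kPa
accuracy_fs = 0.05 / 100               # ±0.05 % of full scale

abs_error = accuracy_fs * full_scale   # ±0.1 kPa everywhere in the range

for pressure in (200.0, 100.0, 50.0, 20.0):
    error_of_reading = abs_error / pressure * 100
    print(f"{pressure:6.1f} kPa -> ±{abs_error:.2f} kPa = ±{error_of_reading:.2f} % of reading")
# 200 kPa -> ±0.05 %, 100 kPa -> ±0.10 % (doubled), 50 kPa -> ±0.20 % (quadrupled), ...
```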

 

Accuracy expressed in ppm or engineering units

These two methods are very close to the percent of full scale method.
Sometimes the accuracy can be expressed in ppm (parts per million) of the full scale. Obviously, as the percentage is 1/100 and ppm is 1/1 000 000, there is a multiplier of 10 000 between the two.

For example, 0.05 %FS equals 500 ppm FS, so it is very similar to the %FS way of expressing accuracy. Of course, ppm can also be used for reading error, but more on that later.

Sometimes accuracy is also expressed in engineering units. For example, in the above example, the accuracy could have also been expressed as ±0.1 kPa, instead of ±0.05 %FS.

 

2. Percent of span

Percent of span is similar to the percent of full-scale method, but instead of calculating the percentage from the maximum range value (full scale), it is calculated from the whole range.

Naturally, if the range starts from zero, there is no difference between %FS and percentage of span accuracy.

However, a pressure measurement range is often a “compound” range, i.e. it starts from the vacuum side and continues to the positive side. For example, the measurement range could be from -100 kPa to +200 kPa. In this case, the percentage is calculated from the whole span (300 kPa, the difference between the minimum and maximum values) instead of the full scale (200 kPa).

For a fully symmetric pressure range (e.g. -1 bar to +1 bar, or -15 to +15 psi), an accuracy specification of “±0.05 % of span” has twice the error of a “±0.05 % of full-scale” specification.

Example

  • Pressure range: -100 kPa to +200 kPa
  • Accuracy specification: ±0.05 % of span

The above example is illustrated in the image below:

Percent of span pressure accuracy specification

In practice, a compound range is most often not fully symmetrical, with the positive side of the range typically larger than the vacuum side. Of course, the vacuum side can never exceed a full vacuum, but the positive side can be any size.

With a compound range, the positive side does not typically measure to very high pressure, because if a high-pressure sensor is used it will not be accurate on the vacuum range.
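

For those who prefer numbers to graphs, here is a minimal Python sketch of the span calculation, using the assumed compound range from the example above:

```python
# Percent-of-span example from the text: a compound range of -100 ... +200 kPa
# with an accuracy of ±0.05 % of span.
range_min, range_max = -100.0, 200.0     # kPa
span = range_max - range_min             # 300 kPa
abs_error = 0.05 / 100 * span            # ±0.15 kPa across the whole range
print(abs_error)

# For a symmetric compound range (e.g. -100 ... +100 kPa) the span is twice the
# full scale, so a "% of span" figure gives twice the error of the same "% of FS" figure.
symmetric_span_error = 0.05 / 100 * 200.0   # ±0.10 kPa
symmetric_fs_error = 0.05 / 100 * 100.0     # ±0.05 kPa
print(symmetric_span_error, symmetric_fs_error)
```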

 

3. Percent of reading

With percent of reading accuracy specification (sometimes written "% of reading", "% of rdg", or "% rdg"), accuracy is always calculated from the measured pressure value.

With this kind of specification, the absolute size of the error (accuracy) changes as the measured pressure changes.

Obviously, this also means that at zero the accuracy specification is zero, and at very close to zero it is very small or negligible. So in practice, it is very unlikely that you will see a percent of reading specification used on its own.

Traditional dead weight testers commonly have accuracy specified as a percent of reading. In practice, the lowest pressure that can be generated with a dead weight tester is limited by the smallest available weight, or the lowest pressure at which the accuracy specification is valid is specified.

A pure percent of reading accuracy specification is not well suited to electronic pressure measurement devices or calibrators because the accuracy gets very small close to zero, and the accuracy is zero at zero pressure.

That is not practical, as there is always some noise or zero drift, so it is not realistic to provide only a percent of reading accuracy specification for electronic calibrators. If this is the only specification provided, then the range minimum, i.e. the limit below which the accuracy specification is no longer valid, should also be specified.

Percent of reading may also be given as a ppm specification. This is more practical with high-precision instruments (e.g. dead weight testers) as a percentage figure would soon start to have many zeros. As explained earlier, converting a percentage figure to ppm means multiplying the percentage by 10 000.

 

Example

  • Range: 0 to 200 kPa
  • Accuracy specifications: ±0.05 percent of reading

The graphic below has the Y-axis as "% of reading", which is obviously a straight line.

Percent of reading accuracy specification.

 

The below graphic shows a "% of reading" accuracy, with the absolute accuracy (engineering units, kPa in this case) on the Y-axis. We can see that when the pressure value is small, the absolute error is small. As the pressure increases, the absolute error increases.

percent of reading kPa on Y
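

A minimal Python sketch of the same example (0 to 200 kPa, ±0.05 % of reading) shows why a pure percent of reading specification becomes unrealistic near zero:

```python
# Percent-of-reading example from the text: 0–200 kPa, ±0.05 % of reading.
# The relative error is constant, so the absolute error shrinks towards zero,
# which is why this specification is rarely used on its own for electronic calibrators.
accuracy_rdg = 0.05 / 100

for pressure in (200.0, 100.0, 10.0, 1.0):
    abs_error = accuracy_rdg * pressure
    print(f"{pressure:6.1f} kPa -> ±{abs_error:.4f} kPa")
# At 1 kPa the specification would be only ±0.0005 kPa, which is not realistic
# for a real sensor with noise and zero drift.
```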

 

 

4. A combined accuracy (percent of full scale and percent of reading)

This means that the accuracy specification is a combination of percent of full scale and percent of reading.

The percent values of each may be different. For example, the accuracy specification can be expressed as ±(0.01% of full scale + 0.05% of reading).

In practice this means that the "% of full scale" part ensures that there is a valid accuracy specification at zero and close to zero, while the "% of reading" part means that the absolute accuracy specification grows as the pressure grows.

This kind of specification is pretty common for electrical pressure measurement devices.

The below example and graphic illustrates this type of specification.

Example

  • Pressure range: 0 to 200 kPa
  • Accuracy specification: ± (0.01 % of full scale + 0.04 % of reading)

 

Combined accuracy specification.

 

In the above example, the combined accuracy at the full scale value is ±0.1 kPa, which is the same as for the ±0.05% of full scale specification, so the accuracy at the full scale point is the same.

However, because part of this specification is given as percent of reading, the module is considerably more accurate at lower pressures than a 0.05% of full scale module.

So, this kind of pressure calibrator can perform calibrations at lower pressures without sacrificing accuracy, unlike a calibrator with only a percent of full scale accuracy specification. Also, with this kind of combined specification you end up needing fewer pressure modules with different ranges, as each module is accurate over a wider pressure range.
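

Here is a small Python sketch of the combined specification from the example above (assumed values only), showing the absolute error at a few pressures and how it compares with the earlier ±0.05 %FS module:

```python
# Combined accuracy example from the text: 0–200 kPa,
# ±(0.01 % of full scale + 0.04 % of reading).
full_scale = 200.0
fs_part = 0.01 / 100
rdg_part = 0.04 / 100

def combined_error(pressure_kpa: float) -> float:
    """Absolute accuracy in kPa for the combined specification."""
    return fs_part * full_scale + rdg_part * pressure_kpa

for p in (200.0, 100.0, 50.0, 0.0):
    print(f"{p:6.1f} kPa -> ±{combined_error(p):.3f} kPa")
# 200 kPa -> ±0.100 kPa (same as a ±0.05 %FS module at full scale)
# 100 kPa -> ±0.060 kPa, 50 kPa -> ±0.040 kPa, 0 kPa -> ±0.020 kPa,
# whereas the ±0.05 %FS module stays at ±0.100 kPa everywhere.
```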

 

5. A split range accuracy 

This means that the lower part of the pressure range has a fixed accuracy (% of full scale, % of span, or engineering unit) and the upper part has a percent of reading specification.

This is another way for manufacturers of electrical calibrators to ensure that they can provide credible accuracy specifications at and close to zero, and also that the absolute accuracy specification increases as the pressure increases.

The lower part of the range may be specified as a percent of full scale, part of the scale, or as a percent of a (fixed) reading. It may also be given in engineering units.

In practice this means that the lower part is “flat” and the upper part is growing. The example and graph below illustrate this concept.

Example:

  • Pressure range: 0 to 200 kPa
  • Accuracy specification: "0.01% of full scale" for the first third of the range and "0.05% of reading" for the rest of the range

 

Split range accuracy specification.
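

The split-range specification can be written as a simple piecewise function. The sketch below applies the two rules exactly as stated in the example; real manufacturers may define the crossover point slightly differently, so treat this as an illustration only:

```python
# Split-range example from the text: 0–200 kPa, "0.01 % of full scale" for the
# first third of the range and "0.05 % of reading" above that.
full_scale = 200.0
breakpoint_kpa = full_scale / 3          # ~66.7 kPa

def split_range_error(pressure_kpa: float) -> float:
    """Absolute accuracy in kPa for the split-range specification."""
    if pressure_kpa <= breakpoint_kpa:
        return 0.01 / 100 * full_scale   # flat ±0.02 kPa in the lower third
    return 0.05 / 100 * pressure_kpa     # grows with the reading above that

for p in (10.0, 50.0, 100.0, 200.0):
    print(f"{p:6.1f} kPa -> ±{split_range_error(p):.3f} kPa")
```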

 

Download a pdf white paper of this article >>

 

Other things to consider


Long-term stability

Often, the given accuracy specification is not valid for longer periods of time and does not include any long-term drift specification. Be sure to read the small print in the calibrator’s documentation to find out if this is the case.

If you calibrate the pressure calibrator once a year, for example, it is important to know what kind of accuracy you can expect from the calibrator just before the next calibration, i.e. 11.5 months after the previous one.

For electrical pressure calibrators, where the user can make a zero correction, the zero does not drift over time (or it can be zeroed away by the user).

But the user can’t correct the drift at higher pressures (span drift). The drift normally changes the accuracy at higher pressures, typically adding a “percent of reading” type drift over time, so the full-scale span typically drifts more over time.

When choosing a pressure calibrator, be sure to check its long-term drift specification.

 

Uncertainty vs. accuracy

Another vital consideration is what components the accuracy specification includes.

Some calibrators offer an uncertainty specification instead of an accuracy specification. Typically, this means that the uncertainty specification also includes the uncertainty of the reference standards used in the calibration laboratory when manufacturing and calibrating the calibrator. Also, it often specifies the validity period for the specification, for example one year.

Generally speaking, uncertainty is a more comprehensive concept than accuracy. I could write a lot about uncertainty, but for now it’s enough to mention that you should make sure that you know all the uncertainty components relevant to the whole calibration event because the total uncertainty of the calibration process is often much more than just the calibrator’s specification.

If interested, you can read more about uncertainty in the Calibration uncertainty for dummies blog.

 

TAR & TUR vs. calibration uncertainty

A commonly used criterion for calibrator (reference standard) accuracy is the test accuracy ratio (TAR) or test uncertainty ratio (TUR). This is the ratio between the accuracy or uncertainty of the instrument to be calibrated and that of the calibrator, and it is used to determine the level of accuracy you require from your calibrator. The bigger the ratio, the better. Common traditional industry practice is to use a 4 to 1 ratio.

Often the process instruments to be calibrated will have a percentage of full scale accuracy specification, while the calibrator may have (partly) a percentage of reading specification.

In practice, this means that the calibrator’s accuracy advantage over the process instrument grows when the measured pressure is smaller than the full scale.

So even if the test accuracy ratio (TAR) is not big enough at full scale pressure, it gets bigger (better) as you measure a lower pressure. The example below explains this.

The graphic below needs some explanation:

  • The blue line is the accuracy of the process instrument to be calibrated, which is 0.2% of full scale (= 0.4 kPa) [left Y-axis]
  • The green line is the calibrator accuracy, which is 0.05% of reading [left Y-axis]
  • The red line is the TAR (test accuracy ratio), i.e. the ratio between the two accuracies above (read on the right Y-axis). We can see that the TAR is 4 at the full-scale value, but as soon as the pressure gets smaller the ratio increases a lot, because the calibrator has a "% of reading" specification while the process instrument has a "% of full scale" specification [right Y-axis]

 

TAR accuracy ratio

 

The main takeaway from this (perhaps confusing) graphic is that the TAR should be calculated at different pressure values. Even if it looks insufficient at full scale, it may be more than sufficient at lower pressures, assuming the calibrator accuracy has at least partially a "% of reading" component.
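

To make that concrete, here is a minimal Python sketch of the TAR example described above (assumed values: a 0 to 200 kPa process instrument at ±0.2 %FS and a calibrator at ±0.05 % of reading):

```python
# TAR example from the text: process instrument ±0.2 %FS (0–200 kPa) calibrated
# with a calibrator specified as ±0.05 % of reading.
instrument_fs = 200.0
instrument_error = 0.2 / 100 * instrument_fs    # ±0.4 kPa at every point
calibrator_rdg = 0.05 / 100

for p in (200.0, 100.0, 50.0):
    calibrator_error = calibrator_rdg * p
    tar = instrument_error / calibrator_error
    print(f"{p:6.1f} kPa -> TAR = {tar:.0f}:1")
# 200 kPa -> 4:1, 100 kPa -> 8:1, 50 kPa -> 16:1
```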

Please note that a TAR only includes an accuracy comparison, which means it is missing all the uncertainty considerations. In practice, calibration processes can include other larger uncertainty sources than the calibrator, so it is important to determine the total uncertainty of the calibration process.

 

Environmental specifications

It is important to read the specifications carefully to understand which environmental conditions the given specifications are valid for. If you are performing calibrations in the field rather than in a controlled environment like a laboratory or workshop, then the conditions will vary a great deal.

Sometimes the given specifications are valid only at a specific temperature or within a limited temperature range. There can be a separate specification outside that range, or a temperature coefficient.

Other environmental conditions to consider include humidity, altitude, ingress protection, orientation effect, warm-up time, and shock/vibration. 
In summary, be sure to check the environmental specifications that are relevant for you when comparing pressure calibrators.

 

Additional components

Does the specification include all relevant components – like hysteresis, nonlinearity, and repeatability – or are these specified separately so that they need to be added to the total?

 

Finally, it’s not only about accuracy

Although accuracy and uncertainty are vital considerations when choosing a pressure calibrator, there are also other things to consider when selecting one, such as:

  • Does the calibrator include overpressure protection? It is difficult to avoid over-pressurizing a pressure measurement device every now and then. Some pressure calibrators have sensors that can be damaged by even the slightest amount of overpressure, while others can withstand a certain level of overpressure without damaging the sensors or affecting the accuracy. For example, the Beamex MC6 Advanced Field Calibrator and Communicator includes a built-in relief valve to release overpressure and prevent damage.
  • Does the calibrator come with an accredited calibration certificate ensuring formal metrological traceability? If not, you may need to have it calibrated separately.
  • How conservative or aggressive are the specifications? Although this can be difficult to judge, it is worth trying to find out whether the company normally gives reliable, conservative specifications or aggressive figures.
  • The brand of the company. Who manufactures the device? Is the manufacturer reliable?
  • What are the warranty terms and conditions, and is there the option to extend the warranty and purchase a service plan to cover items that don’t fall within its scope?
  • Are there training services available? How is the support arranged and available? How is recalibration arranged?
  • What functions other than pressure measurement does the calibrator provide that are useful for you? 
    • For example, a 24 V loop supply and accurate mA measurement are handy if you plan to calibrate pressure transmitters.
    • Most transmitters are HART enabled, so if the calibrator includes a HART communicator, you don’t need to carry a separate communicator with you.
  • Calibrations need to be documented, so how do you plan to do that? A documenting pressure calibrator will automatically document calibration results and can communicate with your calibration software, saving you time and effort and eliminating the risk of manual data entry errors.

 

Download a pdf version of this article by clicking the image below:

Understanding pressure calibrator accuracy specifications - Beamex White Paper

 

Beamex solutions for pressure calibration

Beamex offers various different high-accuracy solutions for pressure calibration, such as:

  • Portable pressure calibrators: MC6, MC2, MC4.
  • Intrinsically safe pressure calibrators: MC6-Ex.
  • Pressure calibrators for workshop solutions: MC6-WS.
  • Automatic pressure controllers: POC8.
  • Portable electrical pressure pump: ePG.
  • Calibration hand pumps: PG series.
  • Calibration management software solutions: CMX, LOGiCAL.

Check out all Beamex pressure calibrators.

 

Related blog posts

The Beamex blog includes several posts related to pressure calibration, but if pressure is your thing, please check out at least these:



 

Topics: Pressure calibration

How to avoid safety and compliance issues in fine chemicals

Posted by Monica Kruger on Feb 08, 2022

How to avoid safety and compliance issues in fine chemicals

Safety and compliance are non-negotiable in the fine chemicals industry, which produces complex, pure chemical substances such as active pharmaceutical ingredients. Fine chemicals are batch driven, with complicated, multistage processes where accuracy and efficiency are critical. 

In this industry, one of the keys to ensuring both safety and compliance is that measurements taken throughout the production process are accurate, which can be challenging to say the least with paper-based calibration. Paper-based calibration is time consuming and, because it relies on manual data entry, prone to errors.

It’s commonly accepted that the typical error rate in manual data entry is about 1%, which, while it might not sound like a lot, can have huge implications for your calibration process. (Read more in our blog post "Manual data entry errors".)

How can automated calibration help to address these challenges?

 

Read more about the benefits of automated calibration for the fine chemicals industry in our white paper (pdf format).

How to avoid safety and compliance issues in fine chemicals


A paperless process to the rescue


Automated calibration solutions combine software, hardware, and calibration expertise to deliver an automated, paperless flow of calibration data with minimal need for manual data entry. This not only saves time and makes the process far more efficient, but it also helps to avoid mistakes typically associated with manual data entry – thus improving the quality and integrity of the calibration data. 

Furthermore, calibration results are safely stored, tamper proof, and easily accessible in the calibration software for review, for example for audit or analysis purposes.


Safety, compliance, and continuous improvement


Removing human error reduces the chance that a production batch will be rejected due to out-of-tolerance calibrators and helps to ensure compliance with regulations like GMP and 21 CFR part 11.

Automating calibration processes also brings significant financial benefits. For example, if an instrument is found to be out of tolerance, at minimum it requires that the product is quarantined and subject to risk analysis and investigation. In the worst case, the entire batch will have to be discarded, increasing waste and leading to large financial losses. 

What’s more, with calibration data in digital form rather than sitting in siloed paper archives it can be integrated with ERP systems, helping management to understand what’s going on and supporting better decision-making. And with everything in one easily accessible system, data across factories can be easily compared to spot trends and identify areas for improvement. 

It’s important to remember that any automated calibration solution should be based on a thorough analysis of your specific needs to ensure the process is well designed and error-free. This is where working with a trusted advisor who can help analyze the process and find areas for improvement really pays off.

 

Read more about the benefits of automated calibration for the fine chemicals industry in our white paper.

Download the free pdf by clicking the picture below

How to avoid safety and compliance issues in fine chemicals

 

Beamex calibration solutions

Learn more about Beamex products and services on our website or contact your local Beamex representative:

 

Visit Beamex website

Contact Us

Beamex Worldwide Contacts

 

Related fine chemicals articles

 

Related blog posts

Download your copy of the Calibration Essentials Software eBook to learn more about calibration management and software.

Calibration Essentials- Software eBook

 

Topics: Calibration software, Calibration management, Calibration in fine chemicals, Calibration in chemical industry

Calibration management - transition from paper-based to digital

Posted by Tiffany Rankin on Jan 11, 2022

Calibration management - transition from paper-based to digital - Beamex blog article

 

A Tale of Three Steves

Stephen Jerge, Calibration Supervisor for Lonza Biologics, a multinational chemical and biotechnology company, recently walked attendees of the Beamex Annual Calibration Exchange (ACE) through the project he headed to transition from a paper-based calibration management system (CMS) to an integrated, digital, paperless solution using Beamex CMX. Steve has over 30 years of calibration experience in the telecommunication and pharmaceutical industries.

Over the last 3 years, his primary focus has been contributing to Lonza’s global paperless SAP/CMX integrated solution through implementation, training, and supporting calibration operations and expansion projects.

Watch the presentation video recording now!

In this blog post, we’ll share The Tale of Three Steves as we follow his journey from 2017 Steve, a stressed-out, overworked supervisor; to 2019 Steve as he underwent the rollout of a new, automated system; to 2021 Steve, whose focus is on continuous improvement.

 

2017 Steve - Managing a paper-based process

Steve starts by bringing us back to 2017 when the calibration process was paper-based and manual. “It couldn’t be more inefficient,” notes Steve. “Everything from standards selection to approvals and calculations was done manually. All historical data was ‘safely locked up in a filing cabinet’ and not doing any good at all.”

Below is a visual representation of the paper-based process. As Steve notes, “Each arrow in this image is time and money and each person is an error trap.”

Image: the seven steps of the paper-based calibration process

 

Does any of this sound and look familiar to you? Keep reading to find out how Steve got out of this heavily manual and error-prone process.

Clearly, something needed to change. Steve and the management team at Lonza outlined their main priorities: Quality, Consistency, Efficiency, and Cost.

Quality

As a pharmaceutical manufacturer, quality is their first priority. With the existing methods, only about 20% of the work received a technical review as a quality component, while everything else got a GMP (Good Manufacturing Practice) review at an administrative level. They wanted to leverage a CMS so they could have a 100% review done as the work was completed, as opposed to days or weeks later when the manual review was performed. They wanted to be able to reduce human errors before they happened.

Consistency

With a paper system, it’s easy to make mistakes such as calculation errors.

Efficiency

If we look at the image above outlining the paper-based steps, automation doesn’t remove the arrows, but it makes them easier, more streamlined, and ultimately takes them out of the hands of the technicians. This allows technicians more time to focus on their roles.

As Steve states, “If you’re running a NASCAR race and the technician must get out of the car, change the tires, clean the windshield and add the fuel, you’re going to come in last place every time.” An automated process gives the technician the time and resources to do the work that needs to be done.

Likewise, an automated process means you’re able to take the knowledge a tenured technician has gathered over the years and include it as part of the process. This way, the next person that must do the work, be they a contractor or new technician, has all the information needed to be successful.

Managing by exception* – the CMX process reviews the work order and flags any issues that need review.

*Learn more about this by listening to the roundtable discussion from day one of the Annual Calibration Exchange.

Cost

Clearly, this is a huge issue. Rework means they need to reclean, reschedule, stop production, etc. All of these cost money. By leveraging the CMX technology, they make rework less necessary and have improved processes.

 


 

2019 Steve – Implementing automation

After outlining all their priorities and deciding to implement Beamex CMX as their CMS of choice, Lonza was ready to go from a paper-based process to a fully integrated SAP Computerized Maintenance Management System (CMMS) process.

The first benefit of the new system is the automated review process. Because CMX includes 7 built-in checks, only those items with red flags need to be reviewed. Technicians can also flag work orders for a second review if any questions arise during the calibration process. All other work orders can be closed in as little as one hour.

In 2019, Lonza also moved to the Beamex MC6 as their primary calibrator. The MC6 replaced 5 or more types of standards. Because the MC6 covers the functions of each of those standards, the technicians can now use the technology to be more efficient.

Prior to automating the process, Lonza was having issues keeping up with items that were coming due for calibration. By leveraging the integration between CMX and SAP, the Calibration Team at Lonza was able to make huge improvements in scheduling and tracking. Utilizing SAP also allows them to manage external vendor orders. Now, internal and external items can be efficiently tracked.


Let’s look at how the SAP to CMX integration works:

Image: how the SAP-to-CMX integration works

 

In short, SAP cares about the when, where, and who of the process. CMX cares about HOW. CMX provides preselected calibration standards with standard warnings, test point warnings, passed/failed/adjust warnings, etc. CMX can also perform calculations and conduct an automated review.
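To make that division of labor concrete, here is a rough Python sketch of the kind of hand-off described above: SAP owns the when, where, and who, and CMX attaches the how. The field names, values, and functions are illustrative assumptions only, not the actual SAP or Beamex CMX integration interface.

```python
# Hypothetical sketch of the SAP-to-CMX division of labor described above.
# Names and values are illustrative assumptions, not a real integration API.

from dataclasses import dataclass
from datetime import date

@dataclass
class WorkOrder:              # SAP side: the when, where, and who
    order_id: str
    instrument_tag: str
    due_date: date
    assigned_to: str

@dataclass
class CalibrationProcedure:   # CMX side: the how
    instrument_tag: str
    test_points_pct: list     # e.g. [0, 25, 50, 75, 100]
    tolerance_pct: float
    reference_standard: str

def build_procedure(order: WorkOrder) -> CalibrationProcedure:
    """CMX receives the work order and attaches the calibration know-how."""
    return CalibrationProcedure(
        instrument_tag=order.instrument_tag,
        test_points_pct=[0, 25, 50, 75, 100],
        tolerance_pct=0.5,
        reference_standard="Documenting multifunction calibrator",
    )

order = WorkOrder("WO-1001", "TT-101", date(2022, 1, 31), "Technician A")
print(build_procedure(order))
```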

 

2021 Steve – Continuous improvement

With the integrated process fully implemented, Steve can now focus on ways to continue to make life easier for the technicians. They have added pre-work instructions for each work order. This allows them to take the knowledge from the seasoned technician and load that information into CMX. Now, the location, special tools/fittings information, post-work instructions, etc. are readily available.

Checklist/inspection functions were also added. These include a combination of simple instructions, plus critical tasks that may not be calibration-related but are essential to the function of the equipment. This reinforces procedure, is paperless, provides another level of quality, and reduces time spent on deviations.

From a technician’s perspective, life is pretty straightforward now. You check out the calibration, execute the calibration (following along with all the defined steps), then check in and close. Technicians can provide feedback through the second approval process or by adding notes in CMX. This information is brought to management for review and can be triaged based on how critical the notes are.

Image: the three-step process – check out, execute, check in and close

Final Words

Steve summarizes his journey by saying, "When you implement a major change – and take people out of their comfort zone – it can turn lives upside down. Management needs to support their technicians and their team through these changes. If they can do this successfully, the technician’s job will be easier in the long run and supervisors will have the ability to manage by exception, focus on continuous improvement, and work with a happier and more productive team."

 

To learn more about how Steve has moved to managing by exception with CMX, take a look at the round table discussion from ACE 2021.

 

Check out Steve Jerge's video presentation, plus other insights from industry experts, on the Beamex 2021 Annual Calibration Exchange Video Library

Watch Now


Want more information on CMMS and Calibration System Integrations?

Check out these blog posts:

Manual Data Entry Errors

Bridging the Gap

Topics: Beamex MC6, Calibration process, Calibration software, CMX, Data Integrity, Calibration management

Automating the calibration management ecosystem

Posted by Heikki Laurila on Dec 01, 2021

Automating the calibration management ecosystem

It’s time to say goodbye to error-prone paper-based calibration!

While the pen might be mightier than the sword, in process industries pen-and-paper based calibration systems are a weak spot that is desperately in need of attention. Automating your calibration ecosystem saves time, reduces the risk of errors, and enables your organization to use calibration data for business benefits.

But what are the steps involved in implementing an automated process, and what support is on offer to help you succeed? Read on to find out.

In process industries, despite the fact that calibration is a business-critical process, engineers and technicians are still drowning in time-consuming, unreliable, and error-prone paper trails.

Automating and streamlining the calibration process is key to improving efficiency and avoiding errors, but for many going digital can feel like a daunting step.

 

What is the calibration ecosystem?

The calibration ecosystem is made up of everything that’s connected to the calibration process: calibration data, calibration management software, services, and expertise.

Expertise on the part of the calibration provider ensures that the system being delivered is compliant and meets the client’s unique requirements, that the roll out of that system goes smoothly, and that the personnel who will use the system are properly trained to do so.

In terms of hardware, it’s no surprise that calibrators are on the front line. In a modern, automated system this means documenting multifunction units that can provide a traceable reference, calculate measurement error, and even perform calibrations automatically and then document and store the results ready for uploading to the calibration software.

The software then handles tasks like calibration planning and scheduling, analysis and optimization of calibration frequency, and reporting – and can be integrated with maintenance management systems.

 

So, why do I need to automate my calibration infrastructure?

Automation can help process industry operators to thrive with the help of streamlined, accurate, and efficient calibration processes that ensure compliance and ultimately improve business performance. The headline benefits include:

  • Better planning and decision making
  • Ensured compliance
  • Time and cost savings
  • Full traceability with everything documented in one system
  • Improved analysis capabilities
  • More efficient and effective maintenance processes
  • Robust quality assurance

 

OK, I’m sold. What’s next, and how can Beamex help me?

Whether you’re taking an instrument-based or a process-based approach to calibration, the first step is to classify all your devices as critical or non-critical, decide on their calibration intervals, and choose the right tools and methods for calibration.

After that comes staff training – for maintenance technicians, service engineers, process and quality engineers, and managers – to ensure you can get the best possible return on your investment. Finally, there’s execution and analysis, where staff carry out calibrations according to a carefully defined set of instructions and safety procedures and the results are analyzed to identify where there’s room for improvement.

Beamex can act as a trusted advisor throughout the entire process of creating an automated calibration ecosystem, helping you to evaluate your current processes, identify areas for improvement, and ensure a smooth transition to a new and optimized process.

Beyond expertise, our solution offering covers on-premises calibration software, cloud-based calibration software, and a mobile application for paperless calibration, documentation, and inspection in the field.

In addition, we offer a range of calibration tools and services.

 

To find out more about what’s involved in automating your calibration management ecosystem and how we can help, download our white paper.

Automating the calibration management ecosystem - Beamex blog

 

Other related content

Download your copy of the Calibration Essentials Software eBook to learn more about calibration management and software

Calibration Essentials- Software eBook

 

Topics: Calibration software

Improving efficiency and ensuring compliance in the pharmaceutical industry

Posted by Monica Kruger on Oct 05, 2021

Improving efficiency and ensuring compliance in the pharmaceutical industry 1500x500px

Today, it seems like everyone is talking about digitalization – and for good reason. Done properly, digitalization can unlock a whole host of benefits for companies including greater efficiency, cost savings, and the ability to do data analysis.

But in order to harness these rewards, digitalization needs to be carried out in a smart way – especially in the pharmaceutical industry where compliance and patient safety are the key drivers.

Digitalizing calibration 

One area ripe for digitalization is calibration. Calibration is still largely paper based in the pharma industry, which means there is room for human error across the many steps required.

Process instrument calibration is just one of many maintenance-related activities in a manufacturing plant, and it doesn’t make sense for companies to use their limited resources and time performing unnecessary calibrations or following time-consuming, ineffective calibration procedures.

The use of paper for calibration also means that a huge potential resource – data from calibrations – is being wasted as it’s sitting in binders in a storage room rather than being easily available for analysis. 

How automated calibration works

An integrated calibration solution is a smart way to digitalize calibrations.

Such a solution combines the actual calibrators, centralized calibration software, and industry knowledge to deliver an automated and paperless flow of calibration data.

This means moving away from resource-intensive manual data entry towards an automated system where everything is validated automatically by a single technician using a multifunctional device – in real time and with no room for human error.

 

The benefits

The benefits of digitalizing and automating calibration are numerous and include:

  • Instruments are kept operating within tolerances, ensuring patient safety and compliance
  • Each calibration takes less time, improving operational efficiency
  • Smart calibrators can provide guidance to technicians to decrease errors during calibrations
  • Management can make more informed decisions based on current data
  • The integrity of calibration data is kept safe in a tamper-proof central repository
  • Data can be found quickly and easily for audit purposes

 

How to ensure successful digitalization

In order to make sure digitalization serves a useful purpose and fulfills its potential, several things are needed.

Firstly, the proper expertise to ensure that systems are in compliance with the Food and Drug Administration’s Good Manufacturing Practice (GMP) and other regulatory requirements. The FDA regulation 21 CFR Part 11, which governs how calibration certificates are documented and signed electronically, must be followed in order to create a compliant process.

Secondly, the actual calibration solution software and hardware need to be designed in a way that minimizes or removes the need for human input. This reduces the chance of error and removes the need for the “four eyes” principle – where a second set of eyes is needed to confirm calibration data is recorded correctly.

Finally, software tools need to be available to quickly access data, for example for audit purposes, as well as to carry out trend or other analysis on calibration data. This data can also be used to predict when a device is drifting out of tolerance to optimize maintenance, or for comparing performance between factories to optimize efficiency. 

 

To find out more about what digitalizing calibration means in practice, read our white paper.






Download your copy of the Calibration Essentials Software eBook to learn more about calibration management and software.

Calibration Essentials- Software eBook

 

Topics: Process automation, Calibration in pharmaceutical industry

Pressure Calibration [eBook]

Posted by Heikki Laurila on Jun 22, 2021

Beamex pressure ebook cover image

 

In this blog post, we want to share a free educational eBook on Pressure Calibration and other pressure-related topics.

The eBook is titled "Calibration Essential: Pressure" and we have developed it together with Automation.com, a media brand of the International Society of Automation (ISA). 

Some of these articles have been previously posted in the Beamex blog, but now we have collected several pressure-related articles into one handy eBook.

Just give me the free Pressure eBook pdf now! >>

 

Pressure Calibration eBook - contents

The eBook starts with a few general educational pressure-related articles and includes several technical “How to calibrate” articles.

The eBook has 40 pages and contains the following seven articles:

Pressure Calibration Basics: Pressure Types (Page 5)

    • Different pressure types or modes are available, including gauge, absolute, and differential.

What is Barometric Pressure? (Page 8)

    • This article takes an in-depth look at barometric, or atmospheric, pressure.

Pressure Units and Pressure Unit Conversion (Page 13)

    • It is important to understand the basics of different pressure units and pressure unit families to avoid potentially dangerous misunderstandings.

Calibrating a Square Rooting Pressure Transmitter (Page 18)

    • There are many questions about the calibration of a square rooting pressure transmitter, with the frequent concern that the calibration fails too easily at the zero point.

Pressure Transmitter Accuracy Specifications: The Small Print (Page 21)

    • Pressure transmitters’ accuracy specifications have many different components that go beyond the specification listed in the advertising brochure, which might tell only part of the truth.

How to Calibrate Pressure Gauges: 20 Things You Should Consider (Page 27)

    • Pressure gauges need to be calibrated at regular intervals to ensure they are accurate.

Pressure Switch Calibration (Page 35)

    • Pressure switches are more difficult to calibrate than transmitters, but proper calibration is important for accuracy and reliability.

 

Download the free Pressure eBook here!

 

More Beamex eBooks

Here are links to some of our other popular calibration-related eBooks:

You can find all our eBooks and White Papers here: White Papers and eBooks.

Here's a webinar you may enjoy:  Differential pressure flowmeter calibration - Best practices in the field.  Watch now!

Find a better way for your pressure calibrations!

If you want to find a better way for your pressure calibrations, please get in touch with our pressure calibration specialists:

Contact Beamex pressure calibration experts! 

If you work with pressure calibration, please check out the latest Beamex pressure calibrators:

Beamex Pressure Calibrators

 

 

Topics: Pressure calibration

The Evolution of Calibration Documentation

Posted by Tiffany Rankin on May 20, 2021

Evolution of calibration documentation

 

Our modern history is defined by the advent of writing. Writing is humankind’s principal technology for collecting, manipulating, storing, retrieving, communicating, and disseminating information. Before we learned to write, we lived in an era referred to as pre-history, or prehistoric times. As humans evolved, began cultivating land, and started living a less nomadic existence, the documentation of events became more sophisticated. Cave drawings gave way to hieroglyphics; stone tablets evolved into scrolls and then into bound books; the invention of typeset documents gave more and more people access to the written word. Today, we can send emails, text messages, and a variety of other digital communication around the world in a matter of seconds. Humans have evolved and documentation has evolved, and with it the way in which we manage calibration.

In the beginning, there was no way to document calibration findings other than with a pen and paper. This information was brought back from the field, entered into a form, and filed away. Just as in the Library of Alexandria (one of the largest and most significant libraries in the ancient world) with its thousands of papyrus scrolls, managing hundreds or even thousands of paper calibration documents comes with the inherent risk of misplaced, lost, or damaged documents – in the case of the Alexandria library, caused by a fire allegedly started by Julius Caesar. Additionally, a paper and pen system is labor-intensive, time-consuming, prone to errors, and provides little to no opportunity to analyze historical trends.

 

Download a pdf version of this article!

The evolution of calibration documentation - Beamex white paper

 

Digital systems enter the scene

Databases

As we progress through time, more digitalized systems of calibration management have emerged, including the use of spreadsheets and databases. While certainly a step in the right direction, this method of documentation still has its drawbacks. Similar to the pen and paper method, this form of recording calibration data is still time-consuming and error-prone. It also lacks automation, in that reminders and tasks cannot be set up for instruments that are due for calibration.

Read the blog post: Manual Data Entry Errors (March 2021)

Software systems

The use of software to manage calibration reports was the next giant leap. The calibration module within some maintenance management software allows instrument data to be stored and managed efficiently in a plant’s database. But again, this method falls short due to lack of automation, limited functionality, and often non-compliance with regulatory requirements (for example, FDA or EPA requirements) for managing calibration records.

 

Dedicated calibration solutions

Advances in technology seem to come faster and faster. Today, dedicated calibration software is the most advanced solution available to support and guide calibration management activities. With calibration software, users are provided with an easy-to-use Windows Explorer-like interface. The software manages and stores all instrument and calibration data. This includes the planning and scheduling of calibration work; analysis and optimization of calibration frequency; production of reports, certificates, and labels; communication with smart calibrators; and easy integration with maintenance management systems such as SAP and Maximo. The result is a streamlined, automated calibration process that improves quality, plant productivity, safety, and efficiency.

In order to understand how this type of software can help better manage process plant instrument calibrations, it is important to consider the typical calibration management tasks that companies undertake. There are five main areas here: planning and decision-making, organization, execution, documentation, and analysis.

 

Planning and decision-making

Instruments and measurement devices should be listed and classified into ‘critical’ and ‘non-critical’ devices, with calibration ranges and required tolerances identified for each individual device. Calibration intervals should be defined, standard operating procedures (SOPs) created and approved, and suitable calibration methods and tools selected. Finally, the current calibration status for every instrument should be identified.

Organization

Organization involves training the company’s calibration staff in using the chosen tools and how to follow the approved SOPs. Resources should be made available and assigned to carry out the scheduled calibration tasks.

Execution

The execution stage involves staff carrying out assigned calibration activities and following the appropriate instructions before calibrating a device, including any associated safety procedures.

Documentation

Unlike many of the more archaic methods, calibration software generates reports automatically, and all calibration data is stored in one database rather than multiple disparate systems. Calibration certificates, reports, and labels can all be printed out on paper or sent in electronic format.

The documentation and storage of calibration results typically involve electronically signing or approving all calibration records generated.

Analysis

Improvements in documentation lead to improvements in analysis. Using specialized calibration management software enables faster, easier, and more accurate analysis of calibration records and identification of historical trends. Also, when a plant is being audited, calibration software can facilitate both the preparation process and the audit itself. Locating records and verifying that the system works is effortless when compared to traditional calibration record keeping. Regulatory organizations and standards such as FDA and EPA place demanding requirements on the recording of calibration data. Calibration software has many functions that help in meeting these requirements, such as change management, audit trail, and electronic signature functions.

Based on the results, analysis should be performed to determine if any corrective action needs to be taken. The effectiveness of calibration needs to be reviewed and calibration intervals checked. These intervals may need to be adjusted based on archived calibration history. If, for example, a sensor drifts out of its specification range, the consequences could be disastrous for the plant, resulting in problems such as costly production downtime, safety issues, or batches of inferior quality goods being produced which may then have to be scrapped.
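As a rough illustration of the kind of drift analysis described above, the following Python sketch flags an instrument whose as-found errors are creeping toward its tolerance limit so that its calibration interval can be reviewed. The thresholds and history values are hypothetical examples, not rules from any standard or from Beamex software.

```python
# Illustrative only: a simple drift check of the kind calibration software can
# automate. Thresholds and data are hypothetical assumptions.

def drifting_toward_limit(as_found_errors, tolerance, margin=0.8):
    """Flag if the latest as-found error exceeds a chosen fraction of the
    tolerance, or if the errors grow steadily and have passed half of it."""
    latest = abs(as_found_errors[-1])
    steadily_growing = all(abs(a) <= abs(b)
                           for a, b in zip(as_found_errors, as_found_errors[1:]))
    return latest > margin * tolerance or (steadily_growing and latest > 0.5 * tolerance)

# As-found errors (in % of span) from the last four calibrations, 0.5 % tolerance:
history = [0.10, 0.18, 0.27, 0.42]
print(drifting_toward_limit(history, tolerance=0.5))  # True -> consider a shorter interval
```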

 

Just as advancements in tools and the proliferation of the written word have helped shape the evolution of humans, advancements in calibration documentation shape the efficiency and productivity of plants using these technologies. By replacing manual procedures with automated, validated processes, efficiencies should improve. Reducing labor-intensive calibration activities will lessen costly production downtime, while the ability to analyze calibration results will optimize calibration intervals, saving time and increasing productivity.

Every type of process plant, regardless of industry sector, can benefit from using calibration management software. Compared to traditional, paper-based systems, in-house legacy calibration systems, or calibration modules of maintenance management systems, using dedicated calibration management software results in improved quality and increased productivity, and reduces the cost of the entire calibration process.

 

Calibration software also gives users access to data and historical trends, and these insights help plant personnel to make better decisions. For example, when a piece of equipment needs to be upgraded it can be difficult to get approval based on speculation. Being able to show data of the inconsistencies and malfunctions makes the approval process much easier. In addition, as the volume of work for calibration technicians increases, having insights into the process can facilitate a more streamlined and efficient work schedule. This will in turn improve reliability, make it easier for technicians to manage their workflow, and contribute to a safer and more well-organized process.

 

As we become a more advanced society our need to share information progresses, as do our methods of collecting, manipulating, storing, retrieving, communicating, and disseminating information. While simply writing calibration data down with a pen and paper is still an effective way of collecting information, it lacks efficiency and hinders the ability of people further down the line to retrieve and process the information. While databases and maintenance management software are certainly steps in the right direction, they still miss the mark when it comes to disseminating data in a useful and streamlined way. Implementing calibration software makes it easier to collect, store, analyze, retrieve, and share information. Until the next technological leap forward, calibration software remains the most advanced solution available to support and guide calibration management activities.

 

Evolution of Beamex calibration software in brief

Here's a brief list of Beamex's main software products.

Beamex historical CALDB calibration software

Beamex PDOC (1985)

The very first calibration software Beamex released was the PDOC software back in 1985.

The software was stored on a small cassette and ran on a portable Epson computer. It automated the documentation of pressure calibration by communicating with a bench-mounted pressure calibrator and printed the calibration certificate on narrow paper using the thermal printer integrated into the Epson computer.

Later, a corresponding TDOC program was released for documenting temperature calibrations.

 

CALDB1 / CALDB3 (Late 80's)

CALDB – Calibration Database – was DOS-based calibration database software, our first for personal computers.

Later, an add-on called HISDB was introduced for reviewing the history of calibration results.

 

Beamex QM6 Quality Manager - Calibration Management Software (1996)

The Beamex QM6 was our first calibration management software that ran on the Windows operating system. It had a database for instruments, references, and calibration results. It communicated with documenting calibrators, so you could send a calibration procedure (work order) to a documenting calibrator and receive the results back into QM6 after the calibration was completed.

 

Beamex QD3 Quality Documenter (1996)

QD3 was software for documenting calibration results. It did not have the full functionality of QM6 but was a simpler version. It could nevertheless communicate with documenting calibrators.

 

Beamex CMX Calibration Management Software (2003)

The very first version of the Beamex CMX calibration management software was launched back in 2003. The first versions were fairly limited in functionality compared to what CMX is today.

Over the years, the CMX technology and functionality have been developed continuously, and CMX is still very much under active development. Today, CMX includes a huge amount of functionality, including seamless integration with many maintenance management systems, and it suits smaller customers as well as large enterprise installations.

Much of this functionality has been developed together with leading pharmaceutical customers to meet the requirements of the regulated pharmaceutical industry.

 

Beamex bMobile calibration application (2016)

Beamex bMobile is a calibration application that can be installed on Android or Windows mobile devices. bMobile can be used to document calibration results on a mobile device.

bMobile communicates with the Beamex CMX and LOGiCAL calibration software, so calibration work can be sent to bMobile and the results received back into the software.

 

Beamex LOGiCAL 1.x (2018)

The first version of the LOGiCAL cloud-based calibration software was a simple documenting software that could read calibration results from a documenting calibrator and convert the results into a PDF calibration certificate.

LOGiCAL 1.x has since been replaced with LOGiCAL 2.x.

 

Beamex LOGiCAL 2.x (2020)

The current LOGiCAL 2.x is subscription-based, cloud-based calibration software as a service. It has a database to store instruments, references, and calibration results. It can synchronize procedures to Beamex documenting calibrators and Beamex bMobile, and also synchronize calibration results from mobile devices back to LOGiCAL.

Beamex LOGiCAL calibration software

 

Keep up with Beamex advancements by subscribing to Product News.

Subscribe today

Ready to get your calibration process out of the Stone Age? Contact Beamex today.

Contact Us

 

Download your copy of the Calibration Essentials Software eBook, to learn more about calibration management and software.

Calibration Essentials- Software eBook

 

 

Topics: Process automation, Calibration software, CMX, Data Integrity, Digitalisation

Manual Data Entry Errors

Posted by Heikki Laurila on Mar 25, 2021


 

Many businesses still use a lot of manual entry in their industrial processes.

This is despite the fact that it is commonly known and accepted that manual data entry is a slow and labor-intensive process and that there are always human errors involved – human errors are natural.

It is commonly accepted that the typical error rate in manual data entry is about 1 %.

What does this 1 % mean in practice in calibration processes, and how can you make it smaller, or even get rid of it?

This article mainly focuses on industrial calibration processes and the manual data entry related to these processes.

 


 

Download a pdf version of this article!

Common manual data entry steps in calibration processes

To start with, let’s take a look at the common ways in which data is handled in industrial calibration processes:

1. Pen & paper

It is still very common that calibration data is captured in the field by writing it on a paper form during the calibration process. Later on, back in the workshop, the calibration data from the paper is manually typed into a computerized system, in some cases by another person.

So with this very common process the calibration data is entered manually twice: first with pen and paper and later when it is typed into the system.

 

2. Manual entry into a calibration system

Another common way is to document the calibration data by typing it into a computer system, using spreadsheet software like Microsoft Excel or dedicated calibration software. If you want to type straight into a software program you need to carry a laptop in the field and you need to be connected to a network, which is not always possible in industrial environments.

If it is not possible to enter the data straight into the calibration application using a computer, it may in some cases be entered on a mobile device with a relevant application and then later electronically transferred into the calibration software.

In this process the data is still entered manually, although only once, not twice like in the previous process.

 

3. Electronic storing of data

The most modern way is to use calibration equipment that can store the calibration data in its memory fully electronically. The calibration data can then be transferred from the calibrator’s memory into the calibration software, again fully electronically.

This kind of process does not include any manual data entry steps. This eliminates all the human error and is also faster as it does not consume the engineer’s time.

This process works only for calibrations where the calibration equipment can measure (or generate/simulate) instrument input and output. If there are any gauges, indicators, displays, or similar that need to be read visually, some form of manual data entry is needed.

But even if some of the calibration data is manually entered into the calibrator, the calibrator may have a feature to check that the data is within accepted values and may also have an informative graphical indication of the data quality for easy verification.

The calibration data is then sent electronically from the calibrator to the calibration system.

 

Manual data entry versus documenting calibrator

In the picture above, the left side shows an example where the calibration data has been entered manually on a paper form. Some numbers may have been entered incorrectly, some are difficult to read, manual error calculation is laborious, is that tick a pass or a fail, who signed it, and so on.

On the right side you can see the same calibration with a Beamex MC6 documenting calibrator. All calibration data is stored automatically and electronically in the calibrator's memory, errors are calculated automatically, the pass/fail decision is made automatically, and the results are sent electronically to the calibration software for storage and certificate printing.

Which one delivers more reliable calibration data?

(well, that was not really a question, it is the MC6 calibrator of course)

 

What about the 1 % typical error rate?

It is obvious that there are errors in manual data entry. It seems to be a commonly accepted rule that in manual data entry, human errors will cause a 1 % average error rate.

This error rate is based on research published in several articles, but I must admit that I don’t know the scientific background for it. We can argue about what the real error rate is, but we can all agree that there are always errors in manual data entry.

After reading about this 1 % error rate in a few places, it got me thinking about what this means for calibration processes. So, let’s stick with that 1 % average error rate in the following considerations.

The error rate can grow quickly if the data to be entered is complicated, if the user is tired or in a hurry, and for many other reasons. For example, some people may have “personal” handwriting (I know I do), which is difficult for others to read.

To reduce errors, companies can train employees, highlight accuracy over speed, double-check the work, ensure optimal working conditions, and naturally try to automate their processes and get rid of manual data entry.

 

Calibration processes

Calibration data includes a lot of numbers, often with many decimals. The numbers also typically fluctuate up and down with the decimals changing all the time. Very rarely is calibration data an easy-to-enter "even" number (20 mA is more likely to be 20.012 mA). This makes it challenging to manually enter the data correctly.

When calibrating a process instrument, for example a transmitter, the input and output data should be captured at the same time, which is difficult. If the values are drifting, additional error will be introduced if the numbers are not recorded at the same time.

In a process instrument calibration, there are typically five calibration points (25 % steps with 0 %, 25 %, 50 %, 75 % and 100 % points), and both input and output are to be recorded. This already makes 10 calibration data points. Other data also needs to be entered during the calibration, such as the reference standards used, environmental data, date, time, signature, etc.

On average we can say that 20 data points need to be entered during the calibration process. With a 1 % error rate, this means that every fifth calibration will include faulty data.

Every fifth calibration? Why is that? Because if one calibration includes 20 data points then five calibrations include 100 data points. A 1 % error rate means that data is entered incorrectly once in every 100 data points entered. So, every fifth calibration will include a faulty data entry. Every fifth calibration means that 20 % of the calibrations performed will be faulty, each including one faulty data point on average.

The above is true if the data is entered manually only once. But as discussed earlier, often the data is entered manually twice, first on paper in the field and then when it is transferred from the paper to the system in the workshop. This means that there are double the number of data entry points, with one calibration event having 40 data points instead of 20 to be entered. This means that statistically, 40 % of the calibrations made will include a faulty data entry!

Wow, so the modest-sounding 1 % error rate in manual data entry means that often 40 % of calibrations will include faulty data in practice.

To repeat: The 1 % error rate just turned into 40 %!

So, this means almost half of these calibrations will include faulty data. Well, not quite half, but 40 %; I exaggerated a little there, you got me, but it is pretty close to half.

If you do manual calibration data entry using the two-phase system, about 40 % of your calibration records will most likely have errors. Let that sink in for a while.

... a short pause for sinking... :-)

In a typical process site that performs 10,000 calibrations annually, all manually entered using the two-phase data entry process, statistically they will have 4,000 calibrations with faulty data!

Wow, that escalated quickly!

Naturally, the calibration process may be way more complicated and may contain many more data points.

If a calibration process of an instrument includes 100 data points and the results are manually recorded, a 1 % error rate means that statistically every calibration includes one faulty data entry! So statistically, 100 % of the calibrations include a faulty data point!
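If you want to check the arithmetic yourself, here is a small Python sketch that follows the same expected-value reasoning used above. The 1 % error rate and the per-calibration data point counts are the assumptions from this article.

```python
# A quick check of the arithmetic above. The 1 % error rate and the data point
# counts are this article's assumptions; the function follows the same simple
# expected-value reasoning. (Strictly speaking, the probability of at least one
# error in n entries is 1 - 0.99**n, which is slightly lower, but the order of
# magnitude is the same.)

ERROR_RATE = 0.01  # 1 % of manually entered data points are assumed to be wrong

def share_of_faulty_calibrations(points_per_calibration):
    """Expected number of faulty entries per calibration, capped at 100 %."""
    return min(points_per_calibration * ERROR_RATE, 1.0)

for points in (20, 40, 100):  # single entry, two-phase entry, complex procedure
    print(points, "data points ->", f"{share_of_faulty_calibrations(points):.0%}")
# 20 data points -> 20%, 40 -> 40%, 100 -> 100%

# A site performing 10,000 two-phase calibrations per year:
print("Expected faulty calibration records:",
      int(10_000 * share_of_faulty_calibrations(40)))  # 4000
```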

 

Significant or insignificant error?

The significance of error varies according to the situation.

If the manually entered calibration data is wildly inaccurate it is likely going to be noticed at some point. For example, if the nominal 4 mA zero point of a transmitter is entered as 40.02 mA (wrong decimal point) that will most likely be noticed at some point, at the latest when the data is entered into the calibration system, assuming the system gives a warning when the error is too big.

But what to do then? Do you consider it OK to move the decimal point and assume the value is then correct, or does the calibration need to be repeated – which means going back to the field and doing the calibration again?

If the error is small enough, it may not be noticed anywhere in the process. Using the previous example, if the transmitter’s zero point is erroneously recorded as 4.02 mA when it was actually 4.20 mA, that error may not be noticed at all. Even though the transmitter’s actual output of 4.20 mA is out of tolerance, which should be noticed and corrective action taken, it will not be, because the erroneously entered 4.02 mA looks like a good reading and the calibration will pass without any further action. This leaves the transmitter in the process, continuously measuring with too large an error.

So, in the worst-case scenario, human error in manual data entry will lead to a situation where a failing calibration is recorded as a pass!
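Here is a small Python illustration of the 4.20 mA / 4.02 mA example above. The 0.05 mA tolerance is a hypothetical value chosen just for the demonstration.

```python
# A small illustration of the example above: the transmitter's true zero-point
# output is 4.20 mA but 4.02 mA is written down. The 0.05 mA tolerance is a
# hypothetical value chosen just for this demonstration.

NOMINAL_MA = 4.00     # expected output at the zero point
TOLERANCE_MA = 0.05   # hypothetical acceptance limit

def passes(recorded_ma):
    """Simple pass/fail check on the recorded zero-point output."""
    return abs(recorded_ma - NOMINAL_MA) <= TOLERANCE_MA

print(passes(4.20))  # False -> the real reading should fail and trigger adjustment
print(passes(4.02))  # True  -> the transposed entry slips through as a pass
```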

 

Unintentional or intentional error?

Most human errors in manual data entry are naturally unintentional.

It is nevertheless not impossible that calibration data is sometimes intentionally entered incorrectly. Manual data entry gives the opportunity to falsify results, and it is almost impossible to prevent that.

If the results are on the limits of being a pass or fail, it is possible that in some cases the data is entered so that it is a pass. Maybe a fail result would cause a lot of extra work, and maybe it is already late in the afternoon and time to go home.

If you see for example a pressure transmitter calibration certificate with a pressure reading of 10.000 psi (or bar) and a current reading of 20.000 mA, it is probably too good to be true.

I apologize for bringing up this kind of possibility, but this kind of information may be found in some publicly available audit reports. This is also something the US FDA (Food and Drug Administration) pays attention to when auditing the pharmaceutical industry.

But let’s assume that the errors are unintentional human errors.

Manual data entry is still being used in surprisingly many calibration processes, even in highly regulated industries such as the pharmaceutical and food and beverage industries, nuclear power, and many others.

When entering data manually on a paper form, the paper form will not automatically alert the user if the entered data is outside of accepted tolerances. It is up to the user to notice it. The calibration system often has an alarm if the entered data is outside of accepted tolerances. At that point the calibration is already done, and it needs to be redone.

 

Would this error rate be accepted in other situations?

If we use manual data entry in our calibration processes and accept the risk of error that comes with it, would we accept the same error rate in other applications?

Would we accept that our salaries don’t always come on time or are wrong? Or that our credit card repayments have a big error rate?

Obviously, these applications rely on electronic, not manual, data entry.

In most applications we would simply not accept the kind of error rate that comes with manual data entry. But like I said, many people still accept it in their calibration data entry process.

This article has about 15,000 characters, so with manual writing there would be about 150 errors (with a 1 % error rate). Well, frankly, with me writing there would be a lot more :-)

But luckily, we can use a computer with spellchecking, and the text is also proofread by colleagues. I am sure there are still some errors, but in this text the errors don’t have serious consequences as they do with calibration data.

At the same time, industry is moving fast towards the world of digitalization, where data is more important than ever and decisions are based on the data. We should also take a good look at the quality and integrity of the data!

 

To download this article as a free pdf file, please click the image below:


 

There has to be a better way!

What if you could avoid all human errors related to manual calibration data entry?

What if you could even avoid the intentional errors?

What if, at the same time, you could make the data entry process much faster, saving time?

What, you may ask, would be the cost for such a system? Can you afford it?

In return I would ask: what are the costs of all the errors in your calibration data? What would be the value of such a system to you? Can you afford to be without it?

There has to be a better way.

 

There is a better way – the Beamex way!

So, what about the Beamex way? What is it?

With the Beamex integrated calibration solution, you can replace manually entering calibration data with the most highly automated calibration data collection on the market.

In a nutshell, the Beamex system comprises calibration software, documenting calibrators, and mobile data-entry devices communicating seamlessly. Also, the calibration software can be integrated with your maintenance management system (CMMS) to enable a paperless automated flow of calibration work orders from the CMMS to the calibration software and acknowledgement of the work done from the calibration software to the CMMS.

It all starts with you planning the work in the CMMS or the calibration software. When it is time to perform the calibration, the work orders are synchronized to documenting calibrators or to mobile devices (phones or tablets).

In the field, when you perform the calibration, the calibration data is stored automatically in the documenting calibrator or entered manually on a mobile device.

If you work in a highly regulated environment, mobile devices can be provided with additional data security functions to ensure the integrity of the data. The Beamex calibration solution fulfills the requirements of 21 CFR Part 11 and other relevant regulations for electronic records, electronic signatures, and data integrity.

This lowers the risk of ALCOA (data integrity) violations by identifying users of offline mobile devices through their electronic signatures and by protecting the offline data against tampering, eliminating the possibility of falsifying calibration records.

From the mobile devices, the calibration data can be synchronized back to the calibration software for storage, analysis, and certificate generation.

The calibration software can also send an automatic notification to the CMMS when the work is done.

 

Here's a short video on how the Beamex integrated calibration system works:

 

Learn more about Beamex products and services on our website or contact your local Beamex representative:

Visit Beamex website

Contact Us

Beamex Worldwide Contacts

 

Download your copy of the Calibration Essentials Software eBook, to learn more about calibration management and software.

Calibration Essentials- Software eBook

 

 


 

 

Topics: Calibration process, Calibration management

How to choose a calibration laboratory - 13 things to consider

Posted by Heikki Laurila on Feb 23, 2021

Beamex Temperature Calibration Laboratory

 

So you have invested in some new, accurate calibration equipment. Great!

But as with many other things in life, accuracy fades over time.

To ensure that your calibration equipment serves you well and stays accurate throughout its lifetime, it needs to be recalibrated periodically. It also needs to be serviced and adjusted whenever necessary.

When choosing a calibration laboratory or calibration service, you need to select one that is capable of calibrating your accurate equipment with sufficiently small uncertainty.

We have seen the accuracy of calibrators destroyed by an incompetent laboratory. I want to help you avoid that.

What do you need to consider when choosing a calibration laboratory?

In this blog post I will discuss the most important things to consider when choosing a calibration laboratory for your precious calibrator or reference standard.

 

Table of Contents

Background

How to choose a calibration laboratory - 13 things to consider

  1. Manufacturer’s laboratory
  2. Laboratory accreditation
  3. Calibration uncertainty
  4. Calibration certificate
  5. Pass/Fail judgment
  6. Adjustment
  7. As Found / As Left calibration
  8. Turnaround time
  9. Brand and reputation
  10. Price
  11. Repairs, service, and maintenance
  12. Warranty
  13. Agreements and reminders

What do we do at the Beamex calibration laboratory?

Beamex Care Plan

Beamex Service Portal

 

Download this article as a free pdf file by clicking the picture below:


 

Background

To keep pace with the ever-improving accuracy of process instrumentation, calibration equipment is also getting more and more accurate. This puts more pressure on calibration laboratories, which also need to improve their accuracy to meet these requirements with a sufficient accuracy ratio.

Many modern process calibrators are multifunctional, containing several quantities and multiple ranges. This is great for users as they only need to carry one multifunctional calibrator with them in the field.

But multifunctionality makes recalibration more challenging for the calibration laboratory. Not all laboratories can calibrate multiple quantities and ranges with sufficient accuracy and uncertainty.

Even if you choose an accredited calibration laboratory, it will not always offer the required uncertainty for all the ranges.

Something we sometimes see in our calibration laboratories at the Beamex factory is that customers have bought the most accurate and multifunctional calibrator we offer (for example, the Beamex MC6) and had it calibrated in a local calibration laboratory. The MC6 calibrator is packed with several accurate pressure, electrical, and temperature ranges, so it is not the easiest device to recalibrate. In some cases, the laboratories have claimed that the calibrator does not fulfill its accuracy/uncertainty specifications, but when the case is investigated it is commonly found that the laboratory’s uncertainty is worse than the calibrator’s uncertainty!

And even worse, we have also seen local labs adjusting calibrators with the intention of making them ‘more accurate’. The next time our calibration laboratory calibrates the unit, it is discovered that it was adjusted incorrectly and is out of specification! In some cases the customer has already been using an out-of-spec calibrator for some time, which can have serious consequences.

I therefore wanted to discuss the topic of choosing a suitable calibration laboratory in this article.

 

How to choose a calibration laboratory - 13 things to consider

 

1. Manufacturer’s laboratory

One good way to choose a calibration laboratory is to use the equipment manufacturer’s laboratory, if that is practical. The manufacturer knows all the ins and outs of the equipment and has the capability to calibrate it. The manufacturer can also do any service or maintenance work that may be required. In addition, using the manufacturer’s calibration service does not jeopardize the warranty of the equipment; they may even offer an extended warranty.

It is however not always possible or practical to use the manufacturer’s calibration laboratory, so let’s discuss some other considerations.

 

2. Laboratory accreditation

Choosing a calibration laboratory or service that has accreditation is the most important thing to start with, especially if it is not possible to use the manufacturer’s calibration laboratory.

Calibration laboratory accreditation is done by a formal third-party authority to ensure that the laboratory meets all the requirements of the relevant standards. Laboratory accreditation is so much more than “just a piece of paper”.

Formal accreditation guarantees many things that you would otherwise need to check if the laboratory didn’t have accreditation. For example, accreditation ensures, amongst other things, that the laboratory fulfills the requirements of the relevant standards, has a quality system and follows it, has appropriate operating procedures, has a training program and training records for staff, can evaluate calibration uncertainty, and maintains traceability to national standards.

Without accreditation you have to take care of all these things yourself, which is a huge task.

Calibration laboratories are commonly accredited according to the international ISO/IEC 17025 standard.

ILAC is the international organization for accreditation bodies operating in accordance with ISO/IEC 17011 and involved in the accreditation of conformity assessment bodies including calibration laboratories (using ISO/IEC 17025).

It is important to remember that accreditation does not automatically mean that the laboratory has sufficient accuracy and uncertainty to calibrate your calibration equipment!

So, even though accreditation is an important box to tick, it is not enough on its own. The burden is still on you to judge the calibration laboratory’s capabilities.

 

3. Calibration uncertainty

Even when using an accredited calibration laboratory, you need to make sure that the laboratory can calibrate your calibration equipment with sufficient and appropriate uncertainty.

There are many accredited calibration laboratories that do not offer good enough uncertainty to calibrate all the ranges of a modern multifunctional calibrator such as the Beamex MC6 family of calibrators.

If the laboratory is accredited, it will have a public “Scope of Accreditation” document listing all the uncertainties they can offer for different quantities and ranges. That should be evaluated before proceeding further.

If the laboratory is not accredited, you will need to discuss with the laboratory to find out what kind of uncertainty they can offer and if it is sufficient for your needs.

The calibration uncertainty needs to be documented on the calibration certificate. It is then up to you to decide what kind of uncertainty ratio you can accept between the laboratory’s calibration uncertainty and the uncertainty specification of the equipment. The most common uncertainty ratio is 1 to 4, i.e. the laboratory is four times more accurate than the equipment to be calibrated, or the laboratory’s uncertainty is only one quarter of the equipment’s uncertainty. In practice that is often not possible for all ranges, so you may need to accept a smaller uncertainty ratio.

The most important thing is to know the laboratory’s uncertainty, make sure it is better than the equipment’s specifications and ensure it is documented on the calibration certificate.
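As a simple illustration, the following Python sketch checks the uncertainty ratio described above. The numbers are hypothetical examples, not the specifications of any particular calibrator or laboratory.

```python
# Minimal sketch of the uncertainty (accuracy) ratio check described above.
# The numbers are hypothetical examples, not specifications of any real device.

def uncertainty_ratio(device_spec, lab_uncertainty):
    """How many times smaller the laboratory's uncertainty is than the device's spec."""
    return device_spec / lab_uncertainty

device_spec = 0.020      # device uncertainty specification for some range (e.g. % of reading)
lab_uncertainty = 0.005  # laboratory's calibration uncertainty for the same range

ratio = uncertainty_ratio(device_spec, lab_uncertainty)
print(f"Ratio is 1 to {ratio:g}")  # 1 to 4
print("Sufficient" if ratio >= 4 else "Consider another laboratory or accept a lower ratio")
```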

More information about calibration uncertainty can be found here:

Calibration uncertainty for dummies

 

4. Calibration certificate

The calibration certificate is the document you get from the calibration, and it should include all the relevant information on the calibration.

Again, if the laboratory is accredited, you don’t need to worry too much about the calibration certificate as an accredited laboratory will follow standards and the calibration certificate content is one of the many audited items included in the laboratory’s periodical accreditation audit.

The basic things on the calibration certificate include (a simple data-structure sketch of these fields follows the list):

  • The title: “Calibration Certificate”
  • Identification of the equipment calibrated
  • The calibration laboratory’s contact information
  • Identification of the calibration methods used
  • Calibration data covering all the calibrated points, i.e. the laboratory’s reference standard’s “true value” and the indication of the equipment to be calibrated
  • The found error on each point, i.e. the difference between the reference standard and the calibrated device
  • The total calibration uncertainty (presented in the same unit as that of the measurand or in a term relative to the measurand, e.g. percent) including all the calibration uncertainty components, not only the reference standard, preferably calculated separately for each calibration point
  • Signature of the person(s) that performed the calibration, the calibration date, and details of the environmental conditions during the calibration process
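As a rough idea of how these fields might look when captured digitally, here is a minimal Python sketch. The structure, field names, and example values are illustrative assumptions only; they are not a Beamex format or a requirement of any standard.

```python
# A rough sketch of the certificate contents above captured as a data structure.
# Field names and example values are illustrative assumptions only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CalibrationPoint:
    reference_value: float    # the laboratory reference standard's "true value"
    device_indication: float  # what the calibrated equipment showed
    uncertainty: float        # total calibration uncertainty for this point

    @property
    def error(self) -> float:
        return self.device_indication - self.reference_value

@dataclass
class CalibrationCertificate:
    title: str
    equipment_id: str
    laboratory: str
    method: str
    calibrated_by: str
    calibration_date: str
    ambient_conditions: str
    points: List[CalibrationPoint] = field(default_factory=list)

cert = CalibrationCertificate(
    title="Calibration Certificate",
    equipment_id="Pressure calibrator, s/n 12345",
    laboratory="Example Calibration Lab",
    method="Direct comparison",
    calibrated_by="J. Smith",
    calibration_date="2021-02-23",
    ambient_conditions="23 °C, 40 %RH",
    points=[CalibrationPoint(10.000, 10.004, 0.002)],
)
print(round(cert.points[0].error, 3))  # 0.004
```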

 

5. Pass/Fail judgment

When you send your calibration equipment for calibration, you obviously want to know if the equipment fulfills its accuracy/uncertainty specifications. Although this sounds obvious, I have seen customers who have had their equipment calibrated and the calibration certificate archived without evaluating if the equipment is still as accurate as it is assumed to be.

So please make it a practice to carefully review the calibration certificate before filing it away and taking your calibrator back into use.

Not all calibration laboratories, accredited or not, provide a Pass/Fail judgment on the calibration certificate.

If the certificate does not include the Pass/Fail judgment, it is then your job to go through all the points on the calibration certificate and to compare the found error against the equipment specifications.

The calibration uncertainty also needs to be taken into account in this comparison – the found error may be within the specification, but once the calibration uncertainty is added it may no longer be.

So, take a careful look at the found error and the total uncertainty for each calibration point.

There are different ways to take the calibration uncertainty into account in the Pass/Fail judgment. ILAC G8 (Guidelines on Decision Rules and Statements of Conformity) specifies how accredited laboratories should take it into account.
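As an illustration only, here is a simplified guarded-acceptance check in Python. It is not the full set of ILAC G8 decision rules, just one common way to combine the found error, the tolerance, and the expanded uncertainty:

```python
# A minimal sketch of a simple conformity decision with a guard band,
# in the spirit of ILAC G8 (illustrative only - not the exact G8 decision rules).
def pass_fail(found_error, tolerance, expanded_uncertainty):
    """Guarded acceptance: pass only if |error| + U stays within the tolerance."""
    if abs(found_error) + expanded_uncertainty <= tolerance:
        return "Pass"
    if abs(found_error) - expanded_uncertainty > tolerance:
        return "Fail"
    return "Undecided - result within the guard band, evaluate the risk"

# Example calibration point: tolerance ±0.05 bar, found error 0.03 bar, U = 0.01 bar (k=2)
print(pass_fail(0.03, 0.05, 0.01))  # -> Pass
```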

This topic has been discussed in more detail in an earlier blog article:

Calibration uncertainty for dummies – Part 3: Is it Pass or Fail?

 

6. Adjustment 

When the calibration laboratory receives your equipment they will first calibrate all the ranges of the equipment and document the results on the calibration certificate. This is often called the As Found calibration.

But what if the calibration equipment is found to fail at some point(s), i.e. it does not meet its accuracy specifications?

Naturally, the laboratory needs to be able to judge if some calibration points are out of the specifications.

Does the laboratory have the capability, tools, and know-how to adjust the calibration equipment so that all the ranges are within the specifications?

Is the equipment adjusted only if it fails the As Found calibration, or is it also adjusted if there is drift and a risk that it would drift outside of the specifications by the time of the next recalibration?

Most calibration laboratories do not optimize the equipment by adjusting the ranges if they are still within the specifications but have some error. This can cause the equipment to drift out of the specifications and fail before the next recalibration.

Some calibration equipment can be difficult to adjust and may require special tools and knowledge.

If the laboratory is not able to do this kind of adjustment you will need to send the equipment elsewhere, possibly to the manufacturer. This will obviously result in a delay and add costs.

If the laboratory can make the required adjustment, will it mean additional costs for you?

You should find out whether the laboratory can perform the required adjustments before sending your equipment for calibration.

This goes for accredited and non-accredited calibration laboratories alike.

 

7. As Found / As Left calibration

If the adjustment mentioned in the previous section is done after the As Found calibration, the equipment needs to be calibrated again after the adjustment is done. This is called the As Left calibration.

Will the calibration laboratory perform both As Found and As Left calibrations if necessary?

Are both As Found and As Left calibration included in the calibration price, or do these cost extra?

 

8. Turnaround time

The turnaround time of the calibration laboratory is another consideration. This also includes the time for transportation both ways.

You don’t want your equipment to be out of service for too long.

 

9. Brand and reputation

The calibration laboratory’s brand and reputation are also something that will affect the choice you make, especially if you don’t have previous experience of that calibration laboratory.

 

10. Price

Price is another factor in the selection process.

Don’t just compare prices, but take into account what you will get for that price.

 

11. Repairs, service, and maintenance

Is the calibration laboratory also capable of performing repairs or other maintenance, if needed?

This also includes firmware updates and other software updates.

 

12. Warranty

Is the calibration laboratory authorized to do warranty service for your equipment, if it is still under warranty?

Most likely the manufacturer’s warranty is going to be void if some other company services the equipment.

In some cases, using authorized calibration/service centers enables you to extend the warranty of your equipment without additional costs.

Is the calibration laboratory’s work covered by some kind of warranty?

 

13. Agreements and reminders

Does the calibration laboratory offer the possibility to make a continuous agreement for future calibrations?

Will the calibration laboratory send you a reminder when it is time for the next calibration?

 

Download this article as a free pdf file by clicking the picture below:


 

What do we do at the Beamex calibration laboratory?

The Beamex factory calibration laboratory in Finland has been ISO 17025 accredited since 1993, giving us over 30 years of experience in calibrations and repairs, and it also serves as the standard for the Beamex USA calibration laboratory.

Since our factory manufacturing facilities are in the same location as the calibration laboratory, we have a very good set of laboratory equipment and automated calibration systems that minimize the risk of human error. It would not be realistic to have that kind of equipment only for recalibration purposes.

Please note that we currently only recalibrate Beamex manufactured devices.

Here is a short list of the things that we do at the Beamex factory calibration laboratory when we get a calibrator back for recalibration:

When a unit is received, it is properly cleaned and any minor service needs are taken care of.

The unit is then calibrated (As Found) and an accredited calibration certificate is created.

If the unit fails in some ranges, these ranges are adjusted; if the unit does not fail but there is some minor drift, the unit is adjusted to improve its accuracy. If a unit passes but is close to its specification limits, it is adjusted to help prevent it from drifting out of specifications by the time of the next calibration.

If any ranges are adjusted, a new As Left calibration will be carried out.

Most of the calibration work is automated, so we can offer fast and reliable service.

Finally, the firmware of the unit as well as any device description files are updated if needed.

 

Here's a short video on our recalibration services:

 

Here's a short video on our calibration laboratories:

 

Beamex Care Plan

We offer a Care Plan agreement for the calibrators we manufacture.

Care Plan is a contract for the recalibration and maintenance of Beamex equipment, ensuring the equipment stays accurate and operational throughout its lifetime.

A Beamex Care Plan includes the following services:

  • A fixed-term contract (one or three years) – a single purchase order reduces unnecessary admin work and associated costs
  • Annual recalibrations with an accredited calibration certificate (including As Found calibration, any necessary adjustments, and As Left calibration)
  • Free express shipments to and from the Beamex factory
  • Free repairs, even in the case of accidental damage
  • Replacement of wear parts
  • Annual email notification when a calibration is due – allows you to schedule your recalibration needs around any potential outages
  • Applicable updates of firmware, device description files, and so on, ensuring your device has the latest features
  • Priority help-desk services
  • Priority service – expedited turnaround times

Learn more about the Beamex Care Plan.

 

Here's a short video on our Care Plan agreement:

Beamex Service Portal

The Beamex Service Portal is an easy way for you to request a quote or return your Beamex equipment for service or calibration.

Learn more about the Beamex Service Portal.

 

Download this article now!

 

 

Topics: Calibration, Calibration process

How to calibrate a temperature switch

Posted by Heikki Laurila on Dec 02, 2020


 

Temperature switches are commonly used in various industrial applications to control specific functions. As with any measuring instrument, they need to be calibrated regularly to ensure they are working accurately and reliably – lack of calibration, or inaccurate calibration, can have serious consequences. Calibrating a temperature switch is different from calibrating a temperature sensor or transmitter, for example, so this blog post aims to explain how to properly calibrate a temperature switch. Let’s start!

Table of contents

 

Before we go into details, here's a short video on this topic:

 

Download this article as a free pdf file by clicking the below image:

How to calibrate a temperature switch - Beamex white paper

 

How does a temperature switch work?

In short, a temperature switch is an instrument that measures temperature and provides a required function (a switch opens or closes) at a programmed temperature.

One of the most common temperature switches is the thermostat switch in an electric radiator. You can set the thermostat to the required temperature and if the room is colder than the set temperature, the thermostat will switch the radiator on; if the room temperature is higher than required, the thermostat will switch the heating off.

In practice, there is a small difference between the set and reset points so that the control does not start to oscillate when the temperature reaches the set point. This difference is called hysteresis, or deadband. In the above radiator example this means that when the thermostat is turned to 20 °C (68 °F), the radiator may start heating when the temperature is below 19 °C (66 °F) and stop heating when the temperature is 21 °C (70 °F), showing a 2 °C (4 °F) deadband.
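To make the deadband behaviour concrete, here is a minimal Python sketch of the radiator thermostat logic described above. The set point and deadband values are simply the example numbers from this paragraph, not from any real thermostat:

```python
# A minimal sketch (hypothetical values) of the thermostat behaviour described above:
# a switch with a 20 °C set point and a 2 °C deadband.
def heating_on(room_temp_c: float, currently_heating: bool,
               set_point_c: float = 20.0, deadband_c: float = 2.0) -> bool:
    """Return True if the radiator should heat, using hysteresis around the set point."""
    low = set_point_c - deadband_c / 2   # start heating below 19 °C
    high = set_point_c + deadband_c / 2  # stop heating above 21 °C
    if room_temp_c < low:
        return True
    if room_temp_c > high:
        return False
    return currently_heating  # inside the deadband: keep the previous state

print(heating_on(18.5, currently_heating=False))  # -> True
print(heating_on(21.5, currently_heating=True))   # -> False
```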

Naturally, there are many different applications for temperature switches in industry.

 

The main principle of temperature switch calibration

We will investigate the details of temperature switch calibration later in this article, but to start, let’s briefly summarize the main principle to remember when calibrating a temperature switch:

To calibrate a temperature switch you need to slowly ramp the temperature at the switch input (the temperature-sensing element) while simultaneously measuring the switch output to see at which temperature it changes its state. Then you need to ramp the temperature back to find the “reset” point, where the switch reverts back to its original state.

When the output changes state, you need to record the input temperature at that exact moment.

The switch output usually only has two states, e.g. open or closed.
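The following minimal Python sketch illustrates this principle of capturing the set and reset points from a ramp of (reference temperature, switch state) samples. It is only an illustration of the logic, not the implementation used in any calibrator:

```python
# A minimal sketch of capturing the set and reset points from a slow temperature ramp
# while monitoring the switch output state (illustration only).
def capture_operation_points(samples):
    """samples: iterable of (reference_temperature, switch_closed) pairs, in ramp order.
    Returns the temperatures at which the switch output changes state."""
    operation_points = []
    previous_state = None
    for temperature, switch_closed in samples:
        if previous_state is not None and switch_closed != previous_state:
            operation_points.append((temperature, "closed" if switch_closed else "open"))
        previous_state = switch_closed
    return operation_points

# Example: ramp up past the set point, then back down past the reset point
ramp = [(48.0, False), (49.0, False), (50.1, True), (51.0, True),
        (50.0, True), (48.9, False), (48.0, False)]
print(capture_operation_points(ramp))  # -> [(50.1, 'closed'), (48.9, 'open')]
```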

 

Essential terminology

One term commonly discussed is whether a switch type is normally open (NO) (or closing), or normally closed (NC) (or opening). This indicates if the switch contacts are open or closed by default. Usually temperature switches are in their default position when measuring the environmental temperature.

Operating points may also be referred to as Set and Reset points, or On and Off points.

The temperature difference between the operation points is called deadband. Some difference is needed between the closing/opening operating points to prevent the switch from potentially oscillating on and off if they work at exactly the same temperature. For applications that require a very small deadband, additional logic is provided to prevent the switch from oscillating.

The switch outputs may be mechanical (open/close), electronic, or digital.

Dry/wet switches are also sometimes discussed. Dry means that the output is simply a contact that is open or closed, while wet means that there is a voltage present, with different voltage levels representing the two switch states.

Some switches have mains voltage over the contacts when the switch is open. This can be a safety issue for both people and test equipment, so it should be taken into account when testing the switch.

A more detailed discussion on terminology can be found in this blog post:

Pressure Switch Calibration

 

Is your temperature sensor separate or attached?

As a temperature switch needs to measure temperature, it needs to have a temperature sensing element, in other words a temperature sensor.

In some cases the temperature sensor is a separate instrument and can be removed from the switch, while in others the sensor is fixed to the switch so they cannot be separated.

These two different scenarios require very different methods to calibrate the switch.

As explained above, you need to provide a slowly changing temperature for the switch input. This is very different depending on if the switch has a fixed temperature sensor or if the sensor can be removed.

Let’s look at these two different scenarios next.

 

#1 - Temperature switch with a separate/removable temperature sensor

In some cases, you can remove the temperature sensor from the temperature switch. The sensor will often be a common standard sensor, such as a Pt100 sensor (or a thermocouple). In these cases you can calibrate the switch without the temperature sensor by using a simulator or calibrator to simulate the Pt100 sensor signal, generating a slow temperature ramp (or a series of very small steps) as the input to the switch.

Naturally you also need to calibrate the temperature sensor, but that can be calibrated using normal temperature sensor calibration at fixed temperature set points, without needing to slowly ramp the temperature, which makes the sensor calibration much easier (and with less uncertainty).

In accurate applications, the switch may compensate for RTD sensor error by using correction coefficients, such as ITS-90 or Callendar-van Dusen coefficients, so when simulating the temperature sensor your sensor simulator should be able to take this into account.

Find out more on temperature sensor calibration in this earlier post: how to calibrate temperature sensors.

You can calibrate the sensor and switch together as a loop; you don’t have to calibrate them separately. But if you don’t have a system that generates a slow, controlled temperature ramp, it is easier to calibrate them separately.

If the removable temperature sensor is not a standard sensor type (neither an RTD nor a thermocouple), then you can’t really calibrate the sensor and switch separately, as you can neither measure nor simulate the signal of the non-standard sensor. In that case you need to calibrate them as one instrument when they are connected.

 

#2 - Temperature switch with an integrated/fixed temperature sensor

If your temperature sensor is fixed to your temperature switch and cannot be removed, you need to calibrate it all as one instrument. In that case you need to generate a temperature ramp with a temperature source that you insert the temperature sensor into.

 

How to calibrate temperature switches

Before calibration 

As with any process instrument calibration, before starting, isolate the measurement from the process, communicate with the control room, and make sure the calibration will not cause any alarms or unwanted consequences.

Visually check the switch to ensure it is not damaged and all connections look ok.

If the sensor is dirty, it should be cleaned before inserting it into the temperature block.

 

Generate a slow temperature ramp as input

If you are calibrating the temperature switch and its temperature sensor together, you need to generate a slow enough temperature ramp in the temperature source where you install the switch's temperature sensor.

This means you need to have a temperature source that can generate a controlled temperature ramp at a constant speed, as slow as the application requires.

In practice you can quickly reach a temperature set point close to the calibration range, let the temperature fully stabilize, and then start slowly ramping the temperature across the calibration range. After the calibration you can quickly return back to room temperature.

A temperature ramp like this is most commonly generated with a temperature dry block. Not all dry blocks are able to generate a suitably slow ramp. And you also need to be able to measure the generated temperature very accurately, while at the same time being able to measure the switch output signal. In addition, the calibration system should have the capability to automatically capture the input temperature at the exact moment when the switch output changes its state.

Not all temperature calibration systems can do all this, but needless to say, the Beamex MC6-T temperature calibrator can do it all fully automatically. And not only that, it can do many other things too, so please make sure you check it out!

 

Use an external reference temperature sensor – don’t use the internal one!

Temperature dry blocks always have an internal reference sensor, but do not use this when calibrating temperature switches! 

The internal reference sensor is located in the bottom part of the temperature block, which is heated and/or cooled. The internal reference sensor is also usually close to the heating/cooling elements and responds quickly to any temperature changes.

From that temperature block, the temperature will transfer to the insert and from the insert it will transfer to the actual temperature sensor. This means that there is always a significant delay (lag) between the internal reference sensor and the sensor being calibrated, located in the hole in the insert.

In a normal sensor calibration, done at fixed temperature points, this delay is not so critical, because you can wait for the temperatures to stabilize. But for temperature switch calibration this delay has a huge impact and will cause significant error in the calibration result!

Instead of using the internal reference sensor, you should use an external reference sensor that is installed in the insert together with the switch’s sensor to be calibrated. The external reference sensor should have similar characteristics to the temperature switch sensor in order for them to behave the same way, with a similar lag.

At the very least make sure that the dimensions of the reference sensor and temperature switch sensor are as similar as possible (e.g. similar length and diameter). Ensuring that the sensors have the same length means they will go equally deep into the insert, with the same immersion depth. Different immersion depths will cause error and uncertainty in the calibration.

Naturally the reference temperature sensor also needs to be measured with an accurate measurement device.

 

Measuring the switch output

Once you have the input temperature ramp figured out, you also need to measure the switch output terminals and their state.

With a traditional open/close switch, you need to have a device that can measure if the switch contacts are open or closed.

If the switch is more modern with an electrical output, you need to be able to measure that. That may be current measurement for an mA signal, or voltage measurement for a voltage signal.

Anyhow, as the switch output has two states, you need to have a device that can measure and recognize both.
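As a simple illustration of recognizing the two states of an electrical output, here is a short Python sketch. The current levels are hypothetical and depend on the actual switch being tested:

```python
# A minimal sketch (hypothetical thresholds) of interpreting an electrical switch
# output as a binary state, e.g. a two-state current output.
def switch_state_from_ma(current_ma, low_ma=4.0, high_ma=20.0):
    """Return True ("active") if the measured current is closer to the high level."""
    midpoint = (low_ma + high_ma) / 2
    return current_ma >= midpoint

print(switch_state_from_ma(19.6))  # -> True
print(switch_state_from_ma(4.2))   # -> False
```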

 

Capturing the operation points

To calibrate manually you need to start the temperature ramp and monitor the switch output. When the switch’s status changes, you need to read what the input temperature is, i.e. what the reference temperature sensor is reading. That is the operating point of the temperature switch. Usually you want to calibrate both operation points (the “set” and “reset” points) with increasing and decreasing temperatures to see the difference between them, which is the hysteresis (deadband).

If you don’t want to do that manually, then you need a system that can perform all of the required functions automatically, i.e. it needs to:

  • Generate the temperature ramp, going up and down at the required speed, within the required temperature range for the switch in question
  • Measure the switch’s output state (open/close, on/off)
  • Measure the reference temperature sensor inserted in the temperature source
  • Capture the temperature when the switch changes state

The Beamex MC6-T can do all of this and much more.

 

Temperature switch calibration steps – a summary

Let’s finish with a short summary of the steps needed to calibrate a temperature switch:

  1. Pre-calibration preparation (disconnect from process, isolate for safety, visual check, cleaning).
  2. Insert the temperature switch’s temperature sensor and a reference sensor into the temperature source.
  3. Connect the switch’s output to a measurement device that measures the switch’s open/close status.
  4. Quickly ramp the temperature close to the switch’s operation range and wait for it to stabilize.
  5. Very slowly ramp the temperature across the switch’s nominal operation range.
  6. When the switch output changes status (set point), capture the temperature in the temperature source.
  7. Slowly ramp the temperature in the other direction until the switch operates again (reset point). Capture the temperature.
  8. Repeat steps 5 to 7 as many times as needed to find the repeatability of the switch. Typical practice is three (3) repeats.
  9. Ramp the temperature quickly back to room temperature.
  10. Document the results of the calibration.
  11. If the calibration failed and the switch did not meet the accuracy requirements, make the necessary adjustments, repair, or replace it.
  12. Repeat the whole calibration process if adjustments were made in the previous step.
  13. Connect the switch back to the process.

 

Temperature switch calibration cycle

The above graph illustrates an example temperature cycle during temperature switch calibration. In the beginning you can quickly reach a temperature point close to the calibration range, let the temperature fully stabilize, and then start slowly ramping the temperature up and down across the calibration range to capture the set and reset points. In this example three calibration repeats were done to record the repeatability of the switch. After calibration you can quickly decrease the temperature back to room temperature.

 

Documenting calibration, metrological traceability, and calibration uncertainty

A few important reminders about temperature switch calibration, or indeed any calibration:

Documentation – calibration should always be documented; typically this is done with a calibration certificate.

Metrological traceability – calibration equipment should have valid metrological traceability to relevant standards.

For more information on metrological traceability, check out this blog post:

Metrological traceability in calibration - are you traceable?

 

Calibration uncertainty – calibration uncertainty is a vital part of every calibration process. You should be aware of how “good” your calibration process and the calibration equipment are, and if the process and equipment provides low enough uncertainty for the calibration in question.

For more information on calibration uncertainty, please check this blog post:

Calibration uncertainty for dummies

 

Related blogs

If you found this post interesting, you might also like these blog posts:

 

Beamex solution for temperature switch calibration

Beamex provides a fully automatic system for temperature switch calibration. The heart of the solution is the Beamex MC6-T temperature calibrator. The MC6-T is an accurate and versatile temperature calibrator with built-in multifunction process calibrator and communicator technology.


 

With the MC6-T you can create the required temperature ramp, measure the switch output, measure the reference temperature sensor, and capture the operation points. And all of this can be done fully automatically. The calibration results are stored in the MC6-T’s memory, from where the results can be uploaded to Beamex CMX or LOGiCAL calibration software, for storing results in databases and generating calibration certificates. The whole calibration process is automatic and paperless.

Please feel free to contact us to learn more about the MC6-T or to book a free physical or virtual online demonstration:

Contact us (Global)

Find you local Beamex partner

 

 

Topics: Temperature calibration

CMMS and calibration management integration - Bridging the gap

Posted by Tiffany Rankin on Oct 22, 2020


Recently, Patrick Zhao, Corporate Instrument & Analyzer SME for Braskem America, spoke at the Beamex Annual Calibration Exchange. He presented on how Braskem, the largest petrochemical company in the Americas, integrated its computerized maintenance management systems with its calibration management software. The presentation was so well received that we wanted to share some of the highlights with you and also provide a link to the full video recording, found below.

Watch the presentation video recording now! 

 

Braskem, a Beamex customer, uses MC6 calibrators to calibrate field instruments, Beamex CMX calibration management software, and the Beamex bMobile solution to perform electronically guided function tests.

In order to improve the automation of their maintenance and calibration work process, they chose to integrate the Beamex calibration software into their plant maintenance management software, SAP and Maximo, using a business bridge.

A business bridge simply allows communication between the maintenance management software (SAP and Maximo, in this case) and the calibration software (Beamex CMX) via an XML (Extensible Markup Language) data file format. 

This enables the sharing of structured data, including position ID, location, serial number, and work order numbers, across the different information systems.  
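To give an idea of what such structured data could look like, here is a minimal Python sketch that builds a hypothetical XML payload. The element names are illustrative only and are not the actual Beamex business bridge schema:

```python
# A minimal sketch (hypothetical element names - not the actual Beamex business bridge
# schema) of the kind of XML payload that could carry work order data between the
# maintenance management system and the calibration software.
import xml.etree.ElementTree as ET

work_order = ET.Element("WorkOrder", number="WO-12345")
ET.SubElement(work_order, "PositionID").text = "TT-101"
ET.SubElement(work_order, "Location").text = "Unit 3 / Reactor area"
ET.SubElement(work_order, "SerialNumber").text = "SN-987654"

print(ET.tostring(work_order, encoding="unicode"))
# -> <WorkOrder number="WO-12345"><PositionID>TT-101</PositionID>...</WorkOrder>
```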

With the implementation of this business bridge, Braskem has reduced manual interventions and human error. Additionally, their management team can now see all necessary data related to compliance reporting and overall calibration in one place.

Prior to Beamex, Braskem had a pen and paper calibration process. That process consisted of an Instrumentation and Electrical (I&E) Technician being assigned a work order, writing down the results, turning the results into an I&E Supervisor who would scan it to a pdf document, and then mark the SAP work order as completed. Patrick notes that, “A lot of times that calibration data gets lost. That piece of paper gets lost."

PenandPaperCalibrationProcess

 

After Beamex was implemented, but prior to the Maximo bridge integration, a similar process was used. A calibration technician would be assigned a work order and would use a Beamex calibrator out in the field to perform the calibration. From here, Beamex CMX could automatically send an email to the Process Maintenance Coordinator (PMC), who would then close the SAP work order. An I&E Approver would have to go back manually into Maximo, scan the Beamex calibration certificate into a pdf, and manually attach that pdf to the appropriate work order to close it.

According to Patrick, this could take anywhere from 10-20 minutes per calibration. 

CalibrationSoftwarewithoutMaximoBridge

 

With the implementation of the business bridge between Maximo and Beamex, once the calibration is completed, Beamex sends an email to the PMC. The I&E Approver logs into the Beamex CMX software and clicks the approve button; once they enter their username and password, which serves as an electronic signature, the calibration results are automatically sent to Maximo and the appropriate work order is automatically completed.

“We save about 20 minutes per calibration/work order with this integration,” states Patrick.

CalibrationSoftwarewithMaximobridge

 

Overall system integration

For Braskem, system integration combined three programs: SAP is used for functional locations, equipment data, task lists, notifications, and work orders; Maximo also holds functional locations and equipment, as well as the maintenance plan/scheduler, work orders, calibration data, and compliance reports; and Beamex is used for positions and devices, function templates (including functions and procedures), work orders, and calibration results.

The Beamex business bridge created a link between Maximo, which was already linked to SAP, and the Beamex software. By using a web or file service (Braskem uses a web service) which is XML-based, Maximo can speak to the web service which in turn talks to Beamex. Similarly, Beamex can talk back to the business bridge which goes to the web service and then back to Maximo.

 

Key benefits of integration

According to Patrick Zhao, the three key features of this integration are:

1. They can see SAP data inside of Beamex

SAP and Maximo synchronize function location and equipment each night. With the Beamex bridge, they can synchronize the function location into the position and synchronize equipment into what’s called device inside of Beamex. Maximo also handles work orders as part of the maintenance plan and these work orders can be seen both inside of SAP and Beamex. 

DirectlySeeSAPData

 

2. They can automatically generate calibration procedures based on templates

Inside of SAP, in the equipment data, they set up the equipment category. I is for instruments, E for electrical, and B for Beamex compatible instruments. All equipment marked B can synchronize into Beamex. They also use the calibration profile in SAP. This defines what type of instrument it is inside of SAP. The same code is used inside of Beamex so Beamex can pick the correct function and calibration procedure template based on what the equipment is set up as in SAP. For example, if you have a pressure transmitter catalog profile then Beamex knows what it is and can automatically pick the template for a pressure transmitter.

Auto-generateCalibrationProcedures

 

3. The ability to auto-complete a work order based on the calibration data 

This is the third feature, which Patrick Zhao refers to as the “key to how this thing works”. As before, Maximo generates a maintenance plan and a work order, the I&E Tech completes the work and sends an email notifying the approver, and the approver logs into Beamex, reviews the calibration result, and clicks approve. Once the approve button is clicked, the data is automatically sent back to Maximo. But now, this is also seen within SAP, which automatically resets the maintenance plan and generates a new date.

Auto-completeWO

 

Patrick then shared a Beamex calibration result and a screenshot of the Maximo work order calibration record. This area of Maximo was custom programmed for Braskem and customized for Beamex. You can see the calibration number of the Beamex calibration result, and the ‘as found’ result and ‘as found’ error as a percentage. You can also see ‘as left’, an overall pass or fail, and the actual finish date, which is the same as on the Beamex calibration result.

Auto-completeWO2

 

Conclusion

In conclusion, Patrick states, “Beamex Maximo bridge integration is very critical to our plant maintenance work process. We have had it for two years now and it’s been working very well. We had a lot of support from Beamex. They’re very responsive and it was very pleasant to work with Beamex to get this working.” 

The implementation of this integration means that the Engineering Approver can easily review the data and complete the calibration related maintenance plan work process using just the Beamex CMX software. There is no longer a need to log into Maximo and manually enter the data. 

Braskem installed the standard Beamex CMX software and hired a programmer to program everything on the Maximo side to be able to take the data. The same can be done for SAP. For Braskem, it took approximately 2.5 weeks to complete the programming.

“Braskem America plants are using this Beamex calibration system and Maximo bridge integration every single day to ensure our critical plant instruments are functioning properly, in top performance and that the plant can run safely and produce a polypropylene product for our customers.”

Learn more about how Patrick Zhao and Braskem America have bridged the gap between maintenance management systems and calibration management software by watching his Annual Calibration Exchange presentation. Be sure to stay tuned for the Q&A portion of the presentation for additional insight into the process. 

Watch the video presentation!

Watch Now

 

Looking to implement calibration management software into your infrastructure? 

Contact Beamex at:

 

 

 

 

Topics: Calibration software, CMX

Temperature Calibration Webinars

Posted by Heikki Laurila on Sep 23, 2020


We have recently done two webinars on temperature calibration; one was done by Beamex, Inc. in the USA and the other by Beamex Ltd in the UK.

As both webinars discuss temperature calibration, we will share both of them here in the same blog post.

I have created a table of contents for both webinars so you can easily see what is included and quickly jump to the points that interest you.

Both webinars include a Live Demo session, demonstrating a fully automatic calibration of a temperature sensor.

You can find free webinar recordings and info on upcoming webinars on our webinars page.

 

Basics of Temperature Calibration - webinar

This webinar was done in April 2020 by Beamex, Inc. in co-operation with Chemical Engineering. The presenters are Ned Espy and Roy Tomalino.

Webinar content:

  • 0:00 - Welcome, introduction, housekeeping
  • 2:05 - Presentation of speakers
  • 5:00 - Webinar agenda
  • 6:00 - Quick Poll
  • 7:15 - Temperature terminology
  • 12:15 - Dry Block vs. Liquid Bath
  • 14:00 - Best practices
  • 21:00 - Live Demo - Automatic calibration of a temperature sensor
  • 50:00 - Quick Poll
  • 52:15 - Questions and Answers

 

Watch the webinar!

 

 

Temperature Calibration in the field - webinar

This webinar was done in June 2020 by Beamex Ltd, presenters Andy Morsman and Ian Murphy.

Webinar content:

  • 0:00 - Welcome, introduction, housekeeping
  • 0:55 - Introduction of speakers
  • 2:00 - Webinar agenda
  • 03:55 - Temperature terminology
  • 12:00 - Dry block structure
  • 15:30 - Best practices
  • 26:10 - RTD and PRT probes
  • 28:15 - Thermocouples
  • 30:30 - Live Demo - Automatic calibration of a temperature sensor
  • 53:00 - Questions and Answers

 

Watch the webinar!

 

 

Other content on temperature calibration

If you are interested in temperature calibration, you might like these blog posts:

 

Beamex solution for temperature calibration

Beamex offers many solutions for temperature calibration. The webinars already showed the Beamex MC6-T temperature calibrator in action, and other MC6 family products can also be used for temperature calibration.

We also offer different reference sensors for temperature calibration.

The calibration software - both Beamex CMX and Beamex LOGiCAL - can be used for temperature calibration.

Please check the list of Beamex temperature calibrators.

 

 

Topics: Temperature calibration

Sustainability in Energy from Waste

Posted by Heikki Laurila on Aug 11, 2020


Waste not, want not; a phrase coined to denote resourcefulness, the idea of utilising what we have to reduce waste. No one likes to be wasteful if they can help it, whether it’s food, time, money, energy… The list is endless. Being sustainable is all about using what we have. But what happens when we do need to dispose of our unwanted goods? Our recyclables and our waste? How can this process be optimised in order for it to be as sustainable as possible?

There are 4 Pillars of Sustainability:

  • Human
  • Social
  • Economic
  • Environment

If you would like to read more about the '4 Pillars of Sustainability', you can do so in our earlier blog, 'How Calibration Improves Plant Sustainability'. In this article, however, we will be focusing on the 'Environment' pillar. As the name suggests, this is about how we can collectively work towards being more sustainable and environmentally friendly, using the resources that we have to find ‘a better way’.

Environmental Sustainability and Energy from Waste?

So how can environmental sustainability be applied to waste disposal?

Energy from Waste (EFW) is the process of burning any combustible municipal waste which then generates energy in the form of heat and electricity. A byproduct of this process, ash, is recycled as an aggregate to the construction industries (it is typically used in the production of tarmac).

At first glance, burning waste appears to be an unethical and unsustainable process, but in reality, there are a number of stringent rules that Energy Recovery Facilities (ERFs) have to adhere to before any gases or water vapour are released into the environment. The European Industrial Emissions Directive, or your country's equivalent, enforces strict rules to ensure the EFW process is conducted under controlled conditions with cleaned emissions.

Below is a diagram of how an ERF operates:

Beamex Energy from Waste image 1

The residual waste is offloaded and burnt in a furnace at a high temperature of +850 °C (1560 °F), the optimum temperature at which materials combust and the formation of pollutants, such as dioxins, is minimised. The heat creates steam which is used to drive a turbine linked to a generator that produces electricity; this is then exported to the local electricity grid, where the heat and electricity generated are used for domestic and industrial purposes. The byproducts at the end, such as the ferrous and non-ferrous metals and the bottom ash, are recycled.

The flue gases are cleaned during the process using a scrubber and chemical cleaners which convert the noxious gases into clean emissions. Continuous Emission Monitoring Systems, or CEMS, are sensors which operate within the stack to ensure that the final emissions of sulphur dioxide, CO2, carbon monoxide and other pollutants released into the environment are minimised.

Calibration and Efficiency in ERFs

Beamex in the EFW Process

The incinerator needs to operate accurately at a high temperature in order for the combustion process to be efficient, maximising energy production and minimising waste products. Functional safety systems also rely on accurate information; any inaccuracy in the instruments that control or monitor the plant can cause higher emission levels to enter the atmosphere or can compromise the operation of the functional safety system. Typical instruments used are thermocouples or RTD-type temperature probes, pressure transmitters, and flow transmitters, which are used to measure and control the incinerator.

The stack contains sensors which measure the pH level in the gases; these require regular calibration in order to ensure that any gas byproduct is clean and to mitigate the possibility of unburnt gases being released into the environment. Increased emission levels can result in penalty charges, loss of R1 certification, loss of environmental permits and even plant closure. Frequent calibration ensures that regulatory requirements are adhered to.

Beamex Solution

The Beamex MC6 calibrator can be used to calibrate all process control instruments to ensure high levels of accuracy for optimum performance, resulting in higher efficiency and reduced levels of CO2 and other toxic gases entering the atmosphere. The MC6 can also be used to record proof-checking operations of the functional safety instrumentation.

The Beamex bMobile application can be used for recording the calibration of the CEMS, which can then be uploaded into CMX, the calibration management software. This provides a secure repository of traceable data that can be documented for regulatory purposes and also gives technicians a clear calibration history to ensure that their processes are performing at an optimal level.

So… Waste not, want not?

Perhaps a little ironic and not the most apt notion when referring to Energy from Waste and Energy Recovery Facilities, but the same sentiment can be applied -  Yes, the waste is being disposed of, but it’s about utilising what we have to be more sustainable, resourceful and as environmentally friendly as we can be. EFW conserves valuable landfill space, it reduces greenhouse gases, the process helps to generate clean energy for domestic and industrial purposes, the byproducts can be recycled and with regular calibration, it ensures that the process is efficient.

Beamex and Sustainability

At Beamex, we pride ourselves on being a sustainable and responsible business. We have recently received a Silver EcoVadis rating for our Corporate Social Responsibility efforts and are continuing to work hard to progress further with this great accolade. We follow 5 Sustainability Principles; our Environmental one focuses on ‘Care for our environment and respect for ecological constraints’. You can read more about our principles and what sustainability means to us here.

Topics: sustainability

Sanitary temperature sensor calibration

Posted by Heikki Laurila on Jun 23, 2020

Sanitary temperature sensor calibration - a Beamex blog post

 

Sanitary temperature sensors are commonly used in many industries, such as Food and Beverage, Dairy, Pharmaceutical and Life-science. In this post I will take a look at what these sanitary temperature sensors are and how they differ from common temperature sensors.

The calibration of sanitary sensors is different and way more difficult than calibrating normal temperature sensors. In this blog post, I will be discussing the considerations that should be taken into account when calibrating these sensors; it is easy to make mistakes that will cause big errors in the calibration results.

So if you are calibrating sanitary temperature sensors, you should take a look at this blog post.

Sure, there is also some educational content for everybody interested in temperature calibration.

Let's dive in!

 

Download this article as a free pdf file >>

 

Table of contents

What are sanitary temperature sensors?

The role of calibration

Why are sanitary sensors difficult to calibrate?

  • Sensors are very short
  • Sensors often have a clamp connection with a flange

Liquid bath or a dry-block?

  • Liquid bath pros and cons
  • Dry-block pros and cons

How to calibrate in a temperature dry block

  • Using a reference sensor
  • Using internal reference sensor
  • Using a dedicated short reference sensor
  • Short sensor without a clamp connection

Documentation, metrological traceability, calibration uncertainty

Beamex solution for short sanitary sensor calibration

Related blog posts

 

Before we get into the details, here's a short video appetizer on this topic:

 

 

What are sanitary temperature sensors?

Let’s start by shortly discussing what these sanitary temperature sensors are.

Temperature is one of the critical process parameters in many industries and the accurate temperature measurement in processes is a crucial consideration.

Food and Beverage, Dairy, Pharmaceutical and Life-science industries have additional requirements for the temperature measurement sensors because of their processes. They require temperature sensors that are “sanitary”, meaning that these sensors need to be suitable to be installed in hygienic and aseptic process environments.

These sensors need to be hygienic and designed to be easy to clean, often supporting the clean-in-place (CIP) process (cleaning without disassembly).  The mechanical design needs to be free from any cavities, dead-pockets, gaps or anything that would complicate the hygienic cleaning.

Surface finishes of these sensors are hygienically graded and need to meet the strict standards in these industries, such as 3-A (https://www.3-a.org/) or EHEDG (European Hygienic Engineering & Design Group, https://www.ehedg.org/).

The material of the wetted parts in these sensors is often high-grade stainless steel, suitable for these applications.

One very common feature in these sanitary temperature sensors is that they are typically very short. This makes the calibration way more difficult than with normal temperature sensors.

Another thing that makes the calibration difficult is the large metallic flange needed for the clamp installation.

The temperature ranges typically go up to around 150 °C (300 °F), or in some cases up to 200 °C (400 °F), so that is not very challenging.

More on these calibration challenges in the following chapters.

Sanitary temperature sensor calibration - a Beamex blog post

Back to top ↑

The role of calibration

In any industry, it is vital that the process measurements do measure correctly and as accurately as designed. This can be achieved with the help of suitable process instruments and with a proper calibration program.

Within the Food and Beverage, Pharmaceutical and Life-science industries, calibration plays an even more important role than in most other industries. In these industries, the consequences of a bad or failed calibration can be really dramatic, as we are talking about consumer and patient health and safety. As a failed calibration can be very costly in these industries, it has to be avoided by all means.

These industries also have dedicated strict regulations concerning calibration, such as various FDA regulations.   

More generic information about calibration can be found in other articles in this blog and on the page What is Calibration?

 

Back to top 

Why are sanitary sensors difficult to calibrate?

Let’s discuss next why these sanitary sensors are difficult to calibrate.

 

1. Sensors are very short

As mentioned earlier, these sanitary temperature sensors are typically very short. Most often less than 100 mm (4 in), typically around 50 mm (2 in), but can also be as short as 25 mm (1 in).

The outer diameter of the sensor is typically 3 mm (1/8 in) or 6 mm (1/4 in).

The commonly used practice in temperature calibration (and a Euramet guideline recommendation) is that a temperature sensor should be immersed deep enough to achieve sufficient accuracy. The recommendation is to immerse to a depth that is 15 times the sensor diameter (plus the length of the sensor element). But with these short sanitary sensors, it is simply impossible to immerse the sensor to a sufficient depth during the calibration, because the sensor is so short compared to the diameter.

For example, a typical sanitary sensor with a diameter of 6 mm (1/4 in) should be immersed (15 x 6 mm) into at least 90 mm (3.5 in) depth during the calibration, to ensure accurate results. But if that 6 mm (1/4 in) sensor has a length of only 50 mm (2 in), sufficient immersion is simply not possible.

When not immersed deep enough, additional error and uncertainty will be caused in the calibration.

On an earlier blog post, how to calibrate temperature sensors, our temperature calibration lab people gave these rules of thumb for the immersion depth (when calibrating in liquid bath):

  • 1% accuracy - immerse 5 diameters + length of the actual sensing element inside the sensor
  • 0.01% accuracy - immerse 10 diameters + length of the sensing element
  • 0.0001% accuracy - immerse 15 diameters + length of the sensing element

The “accuracy” in the above rule is to be calculated from the temperature difference between the block temperature and the environment temperature.

 

Example: if the environment temperature is 20 °C and the block temperature is 120 °C, there is a 100 °C difference. If you then immerse the probe only 5 times the diameter (plus the sensing element length) – say you have a 6 mm probe with a 10 mm sensing element inside it, and you immerse it 40 mm (5 x diameter + sensing element) – you can expect about 1 °C error due to the low immersion (1% of 100 °C).
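Here is a minimal Python sketch of that rule-of-thumb arithmetic, using the 1% / 0.01% / 0.0001% levels listed above. The helper is hypothetical and only a very rough estimator, not a Euramet formula:

```python
# A minimal sketch of the rule-of-thumb arithmetic above (hypothetical helper,
# not a Euramet formula): estimate the immersion-related error from the
# immersion depth expressed in probe diameters.
def immersion_error_estimate(block_temp_c, ambient_temp_c,
                             immersion_mm, diameter_mm, element_length_mm):
    """Very rough error estimate: ~1 % of the block-to-ambient difference at
    5 diameters (+ element length), ~0.01 % at 10 diameters, ~0.0001 % at 15."""
    effective = (immersion_mm - element_length_mm) / diameter_mm
    percent = {5: 1.0, 10: 0.01, 15: 0.0001}
    # pick the closest rule-of-thumb level at or below the effective immersion
    levels = [d for d in sorted(percent) if effective >= d]
    error_pct = percent[levels[-1]] if levels else 100.0
    return (block_temp_c - ambient_temp_c) * error_pct / 100.0

# The example above: 120 °C block, 20 °C ambient, a 6 mm probe with a 10 mm
# sensing element, immersed 40 mm -> about 1 °C error
print(immersion_error_estimate(120, 20, 40, 6, 10))  # -> 1.0
```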

 

Picture: The below picture illustrates the commonly used relationship between thermometer immersion depth (in diameters) and the relative error of the temperature difference (between the temperature block and environment temperatures). So if you don't immerse at all, you naturally get a 100% error, and if you immerse deep enough the error caused by immersion becomes insignificant. At around 5 diameters of immersion, the error is about 1% of the temperature difference:

graph - error vs immersion

 

This rule of thumb can become quite significant at higher temperatures and/or for extremely short sensor lengths. So, keep this in mind with sensors less than 40 mm or 1-1/2 inches. Also, it may be worth having a conversation with a design engineer to figure out a way to increase the sensor length.

Naturally this accuracy limitation is valid also when the sensor is installed in the process and measuring the process temperature - being too short, the sensor is not able to accurately measure the process temperature!

It is not always easy to know the length of the actual sensing element inside the probe. If that is not mentioned in the datasheet, you can ask the manufacturer.

So how to calibrate these short sensors that can not be immersed deep enough?

This will be discussed in later chapters.

 

2. Sensors often have a clamp connection with a flange

As mentioned in the previous chapter, these sanitary sensors are too short compared to their diameter to enable a proper immersion causing temperature leaks, adding error and uncertainty to the calibration.

As if this were not enough, these sensors also often have a so-called clamp connection (Tri-clamp, ISO 2852, DIN 11851, DIN 32676, BS 4825, Varivent, etc.), so there is a relatively large metallic flange that causes temperature to conduct / leak from the sensor to the flange. In practice, this temperature leak means that heat conducts from the sensor to the large metallic flange, so the flange causes the sensor to read a slightly lower temperature (when the calibration temperature is higher than the environment temperature).

Sanitary temperature sensor calibration - a Beamex blog post

This kind of flange makes the calibration more difficult in several ways:

First, the flange causes a temperature leak from the sensor: the bigger the flange, and the bigger the temperature difference to the environment, the bigger the leak.

Because the sensor is at the same time very short, this temperature leak causes the sensor to measure an erroneous temperature.

Back to top 

Liquid bath or a dry-block?

Generally, you can calibrate temperature sensors in a liquid bath or in a dry-block. This is also the case with the sanitary temperature sensors.

Let’s discuss next what these are and what are the main pros and cons of both.

 

Liquid bath

As the name suggests, a temperature liquid bath has liquid inside. The liquid is heated / cooled to the required temperature and the temperature sensors to be calibrated are inserted into the liquid. Often the liquid is stirred to keep the temperature even throughout the bath.

 

Liquid bath pros and cons

A liquid bath makes it easy to insert sensors of any shape, and you can also insert a reference probe at the same time. Depending on the size of the liquid bath, you may be able to insert several sensors to be calibrated at the same time. Even if the sensor to be calibrated is an odd shape, it will still fit inside the liquid bath.

A liquid bath often enables better uniformity and accuracy than a dry-block due to better heat transfer of liquid.

So, this starts to sound like a favorable option?

A liquid bath anyhow has several drawbacks, which is why it is not always the best option:

  • A liquid bath always includes some sort of liquid, such as silicone oil, and often you don’t want to contaminate the sanitary sensor in such a liquid. There is a lot of cleaning after the calibration to ensure that the sensor is clean when installed back into the process.
  • Handling of hot oil is dangerous and any spills may cause injuries.
  • Any oil spills make the floor very slippery and can cause accidents.
  • Liquid baths are very slow. Even if a bath can fit several sensors at the same time, it is often several times slower than a dry-block, so the overall efficiency is not really any better. Sometimes people may have several baths, each set to a different temperature, and they move the sensors manually between the baths to skip the wait for a bath to change temperature. This may work in a calibration laboratory but is naturally a very expensive way to calibrate.
  • The sanitary sensor should be placed so that the surface of the liquid touches the bottom of the flange, but in practice this is not always easy to do. For example, silicone oil has a pretty large thermal expansion, which means that the surface level changes slightly as the temperature changes. So, you may need to adjust the height of the sanitary sensor during the calibration. Also, due to the stirring of the liquid, there are small waves on the surface and the liquid level is often deep in the bath, so it is difficult to see that the sensor is at the right depth.
  • Liquid baths are often large, heavy and expensive equipment.

 

Dry-block

A temperature dry-block (or dry-well) is a device that can be heated and / or cooled to different temperature values, and as the name suggests, it is used dry, without any liquids.

 

Dry-block pros and cons

As the earlier chapter discussed the pros and cons of a liquid bath in this application, let’s look at the same for the dry-block.

The main pros of calibrating the sanitary sensor in a dry-block include:

  • As it is dry, it is also clean and does not contaminate the sanitary sensor to be calibrated. Sure, the sensor should still be cleaned after calibration, but the cleaning is way easier than with a liquid bath.
  • A dry-block is also very fast to change temperature.
  • When using a dedicated insert with proper drillings, it is easy to insert the sanitary sensor the same way every time (no adjustments), and the calibration is repeatable every time and with different users.
  • A dry-block is light and easy to carry compared to a liquid bath.
  • Typically, a dry-block is also cheaper than a liquid bath.

On the downside, a dry-block is less accurate than a liquid bath, it typically only calibrates one sanitary sensor at a time, and it needs different inserts drilled for different sensor diameters.

Despite these downsides, customers often prefer to make the calibration of their short sanitary sensors in a dry-block.

So, let’s discuss next the different considerations when calibrating in a dry-block.

 

Back to top 

How to calibrate in a temperature dry block

To calibrate these short sanitary sensors in a temperature dry-block, there are a few considerations to take into account.

 

Using a reference sensor

Firstly, when you do the calibration in a temperature dry-block, the flange of the sanitary sensor makes it impossible to use a normal external reference sensor in the same insert because it simply does not fit in: the flange covers the top of the insert and all the holes in it.

 

Picture:  Comparing calibration of a normal (long, no flange) temperature sensor using a reference probe on the first picture, with a short sanitary sensor with a flange on the second one. We can see that the short sensor flange covers all the holes in the insert, so it is not possible to insert a normal reference temperature probe:

 

Sanitary temperature sensor calibration - a Beamex blog post  Sanitary temperature sensor calibration - a Beamex blog post

 

Using internal reference sensor

The dry-block always includes an internal reference sensor. Trying to use the internal reference sensor in the dry-block just does not work, because the internal reference sensor is located close to the bottom of the temperature block, while the short sensor to be calibrated is located in the very top part of the insert. Dry-blocks typically control the temperature gradient over a limited range at the bottom of the insert. The top part of the insert typically has a larger temperature gradient, so the top of the insert does not have the same temperature as the bottom. The size of the gradient depends on the temperature difference between the insert and the environment, and on how deep you go in the insert.

 

Picture: The internal reference sensor is located in the bottom of the temperature block, while the short sanitary sensor is located in the very top part of the insert. There is a temperature gradient in the insert, causing the top of the insert to be at a different temperature than the bottom. This causes error in the calibration:

Sanitary temperature sensor calibration - a Beamex blog post

 

 

Using a dedicated short reference sensor

As the internal reference sensor in the bottom of the dry-block is not suitable, we need to use a dedicated external reference temperature sensor.

However, this reference sensor cannot be a normal long reference sensor, as discussed earlier.

The solution is to use a dedicated reference sensor that is short enough to be immersed to the same depth as the sanitary sensor to be calibrated. Optimally, the middles of the sensing elements should be aligned at the same depth.

Also, the reference sensor needs to have a thin, flexible cable so that the cable can fit under the flange of the sanitary sensor. To help with this, we can make a groove in the top of the insert where the reference sensor cable fits while the flange of the sanitary sensor still touches the top of the insert.

Naturally, the structure of the temperature dry-block needs to be such that the sanitary sensor with its flange fits into place and touches the top end of the insert (in some dry-blocks the surrounding structure prevents the flange from going deep enough to touch the top of the insert).

 

Picture: A dedicated short reference sensor is located at the same depth as the short sanitary sensor to be calibrated, ensuring they measure the exact same temperature. Also, the reference sensor cable runs in the groove, so it does not prevent the flange of the sanitary sensor from touching the top of the insert:

Sanitary temperature sensor calibration - a Beamex blog post

 

 

Picture: Some example pictures of what the dedicated insert for sanitary sensor calibration could look like. The holes for the sanitary sensor and for the reference sensor are equally deep, and there is a groove where the reference sensor cable can go:

Insert pictures

 

Short sensor without a clamp connection

There are also short temperature sensors without the clamp connection and without the flange. With these sensors you should use an external reference sensor immersed to the same depth as the sensor to be calibrated. The reference sensor should be as similar as possible to the sensor to be calibrated (similar diameter, similar response time, etc.).

The internal sensor in the dry-block cannot be used here either since it is located in the bottom of the temperature block and does not measure the same temperature as the short sensor.

 

Picture: Calibrating a short sensor (without a flange) using a short reference sensor:

Sanitary temperature sensor calibration - a Beamex blog post

 

 

Download this article as a free pdf file by clicking the picture below:

Sanitary Temperature Sensor Calibration - Beamex blog post

 

Back to top 

Documentation, metrological traceability, calibration uncertainty

There are many additional things that are important in every calibration; as there are separate articles on many of these in this blog, I will only mention them briefly here:

As documentation is included in the formal definition of calibration, it is a vital part of every calibration. This naturally also applies to sanitary temperature sensor calibration, typically in the form of a calibration certificate.

The calibration equipment used should have valid metrological traceability to the relevant standards; otherwise the calibration does not ensure traceability of the sensor calibration. More info on metrological traceability can be found here:

 

Calibration uncertainty is a vital part of every calibration. If the calibration equipment (and the calibration method and process used) is not accurate enough for the sensor calibration, then the calibration does not make much sense. I mean, what’s the point of using a 2 % accurate calibrator to calibrate a 1 % accurate instrument?
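
To make this concrete, here is a minimal sketch in Python (purely illustrative, not part of any Beamex product) of checking the ratio between the tolerance of the device under test and the accuracy of the reference; a common rule of thumb is to aim for a ratio of 4:1 or better, although exact requirements vary.

# Illustrative sketch: simple test accuracy ratio (TAR) check.
# All names and numbers are examples, not from any specific product.
def test_accuracy_ratio(instrument_tolerance_pct, calibrator_accuracy_pct):
    """Return the ratio of instrument tolerance to calibrator accuracy."""
    return instrument_tolerance_pct / calibrator_accuracy_pct

# A 2 % calibrator against a 1 % instrument gives a ratio of 0.5,
# i.e. the reference is worse than the device under test.
print(test_accuracy_ratio(1.0, 2.0))   # 0.5
# A 0.05 % calibrator against a 1 % instrument gives a ratio of 20.
print(test_accuracy_ratio(1.0, 0.05))  # 20.0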

 Learn more about calibration uncertainty here:

 

Back to top 

Beamex solution for short sanitary sensor calibration

Beamex MC6-T

 

The Beamex MC6-T is an extremely versatile portable automated temperature calibration system. It combines a temperature dry-block with Beamex MC6 multifunction process calibrator and communicator technology.

The Beamex MC6-T150 temperature calibrator model is perfectly suited for the application of calibrating this kind of short sanitary temperature sensors. The MC6-T150 can be provided with custom inserts to match your specific sensors.

The Beamex SIRT-155 temperature sensor is a very short and accurate temperature sensor with a thin flexible cable, designed to be a perfect companion with the MC6-T150 for this application.

Using the MC6-T in conjunction with Beamex calibration software, CMX or LOGiCAL, enables you to digitize and streamline your whole calibration process.

 

Pictures: In the first picture below we can see the Beamex MC6-T with a dedicated insert for sanitary sensor calibration. The second picture shows how the short reference sensor (SIRT-155) is installed. The third picture shows the sanitary sensor to be calibrated being installed. Finally, the fourth picture shows all installations done and the automatic calibration ready to start:

Calibrating sanitary sensor with Beamex MC6-T Calibrating sanitary sensor with Beamex MC6-T

 

Calibrating sanitary sensor with Beamex MC6-T Calibrating sanitary sensor with Beamex MC6-T

 

In case you want to learn more, or to see a demonstration of how to calibrate sanitary temperature sensors with the Beamex solution, please feel free to contact us.

Fill in the Contact Request Form or find our worldwide contacts.

 

Back to top 

Related blog posts

If you found this article interesting, you might also be interested in the following articles and eBooks:

 

Feel free to add comments or questions, share the article, or suggest interesting topics for new blog articles.

Thanks for taking the time to read!

Back to top 

 

Topics: Temperature calibration

Future calibration trends by calibration experts in the pharmaceutical industry

Posted by Heikki Laurila on May 07, 2020

Future calibration trends by calibration experts in the pharmaceutical industry

 

We regularly organize user group meetings for our pharmaceutical customers. During a recent meeting, we interviewed some of these pharmaceutical calibration experts on future calibration trends, and we wanted to share the result with you.

In this article, you can read what these calibration experts from the world’s top pharmaceutical companies think about calibration challenges, future trends and other calibration related topics.

The following people were kind enough to join the video interview:

  • Boehringer Ingelheim, Ingo Thorwest
  • Boehringer Ingelheim, Eric Künz
  • Boehringer Ingelheim, Alexander Grimm
  • GlaxoSmithKline, Don Brady
  • GlaxoSmithKline, Simon Shelley
  • Novartis, Kevin Croarkin
  • AstraZeneca, Tomas Wahlgren

In addition, written replies were given by delegates from Lonza, Astellas, AstraZeneca and GlaxoSmithKline.

 

The following questions were asked of all the delegates:

  1. What are your biggest calibration challenges?
  2. How do you see your calibration changing in the next 5 years?
  3. Do you see any future technology changes that could affect the need to calibrate or how to perform your calibrations?
  4. Do you see the industry digitalization changing your calibration?

 

You can find a summary of the interviews in the below video and also written in the “transcription” section below.

We trust that this information is useful for you and you can learn from these comments. Please feel free to share your questions and comments in the comments section at the end.

 

Executive summary (1 minute read)

If you don't have time to read the whole article, here is a quick-read executive summary of the article discussions:

 

What are your biggest calibration challenges?

For pharmaceutical companies, the compliance to regulation is naturally a vital consideration.

The challenge that repeats in many answers is data integrity, i.e. producing calibration data without any media breaks. There is a drive to remove paper-based systems for recording and approval in calibration solutions and to digitalize the whole calibration process.

Another recurring comment concerns mobility, and the security and data integrity of mobile devices.

Also, implementing a standardized calibration solution across multiple sites globally is considered a challenge.

 

How do you see your calibration changing in the next 5 years?

The most frequently repeated comment is the wish to get rid of paper-based systems and to digitalize the calibration process.

Integration of the calibration system with other systems (such as maintenance management systems) is also a common comment.

The use of calibration data in other systems is mentioned as well, along with the drive for improved mobility.

 

Do you see any future technology changes that could affect the need to calibrate, or how to perform your calibrations?

When discussing future technology, the comments included: cloud technology, automatic calibration, digitalization enabling paperless calibration, more productivity through more efficient calibration, using calibration data for analysis, integration of systems, increased mobility, and naturally the effects of Industry 4.0.

 

Do you see the industry digitalization changing your calibration?

Most delegates commented that they will definitely be going digital and are excited to do so.

Other comments include improved data analytics, increased mobility, better connectivity of systems, expecting digitalization to improve data integrity and the development of the DCC (digital calibration certificate) standard.

 

Video interviews

Below you can find the highlights of the video interviews:

 

 

Many of the world’s leading pharmaceutical and life sciences companies depend upon Beamex calibration solutions. Book a free consultation with our pharma calibration experts to find the best calibration solution for you.

Book a free consultation

 

 

Transcription of the video

Here you can find the transcription of the above video interviews:

 

1. What are your biggest calibration challenges?

 

Ingo Thorwest, Boehringer Ingelheim

I think the challenge in Pharmaceutical industry is all about data integrity. Producing data without any media break is of an absolute importance for us.

 

Don Brady, GlaxoSmithKline

I would say compliance data and mobility. Compliance, because all of our data that we capture has to be ALCOA - Attributable, Legible, Contemporaneous, Original and Accurate. That's something we worked with providers and Beamex with over the years to get us to that point. Mobility, it is not as easy as implementing a mobile solution, it has to be compliant, it has to be unchallengable, and that’s where we are at today.

 

Simon Shelley, GlaxoSmithKline

 I still think that data integrity is one of our biggest challenges that we are facing. The roll out of Beamex has certainly helped but we still get a number of issues at the sites that are slow to adopt to the solution. We need to look at the advantages we can get from using the technology to move us to paperless and hopefully reduce our data integrity issues.

 

Alexander Grimm and Eric Künz, Boehringer Ingelheim

Alexander: So, at the moment, in the pharma company, we are facing very specific regulations regarding calibration, and I think that the main challenge we have at the moment is about documentation of calibration, and managing calibration data. 

Eric: From the IT point of view, we need to react on that requirement to find the right solution and to bring in more structured data input to the calibration management solution.

 

Kevin Croarkin, Novartis 

I think that until very recently and probably still now, is the ALCOA process and data integrity issues in general. 

The other major challenge that we have, is a lot of calibrations are being externalized. Meaning in the past, we would have had internal people, on site doing the calibrations, where now, we have companies that come in and do the calibrations for us. 

 

2. How do you see your calibration changing in the next 5 years?


Tomas Wahlgren, AstraZeneca

I think it is going to be more integrated with other systems, like the CMS and other types of applications, like laboratory systems or something like that. 

I also see that we must increase the speed of how we perform the calibrations, because it must go quicker and easier, but of course we still need to have the quality in it.

 

Don Brady, GlaxoSmithKline 

Integration. Integration of data; it is not just a matter of gathering calibration data and archiving it, it is now a matter of integrating it with data from the systems that we have taken the calibrations from, and using all of that to create a big picture of the machine we are actually working on.

 

Ingo Thorwest, Boehringer Ingelheim

In the next few years or so it will definitely change into digitalization. Avoiding media breaks, being more and more in partnership with contractors that come in.

 

Simon Shelley, GlaxoSmithKline

I am not sure that calibration itself will change that much, but I think the way that the data will be used will change tremendously. There will be a lot more data usage and therefore the liability of that data will go up.

 

Alexander Grimm and Eric Künz, Boehringer Ingelheim

Alexander: My assumption is that the pure calibration process: how you handle a real instrument might not have that many changes. But again, talking about documentation, we really hope to get more digitalized, to get rid of paper, to have a completely lean and digital process in the future. 

Eric: That also means for me that for process execution, we bring in the right technology, mobile devices and improved data input media in place, which also means change to IT from an infrastructure point of view, because we have to make sure we have the right infrastructure in place like wireless solutions, Wi-Fi connection or maybe offline capabilities.

 

Kevin Croarkin, Novartis 

My vision would really be, first of all, that we are not using any paper. 

Taking that forward is where our external companies are coming in with their own tools, doing the job and literally just sending us a digital file transfer afterwards when they have the job completed. 

 

3. Do you see any future technology changes that could affect the need to calibrate or how to perform your calibrations?

 

Ingo Thorwest, Boehringer Ingelheim

All technology avoiding media breaks will become more and more important and will change our way of calibrating.  Developing technologies in these environments, in cloud technology, will definitely be one of the changes of the next years

 

Don Brady, GlaxoSmithKline

At the moment we are doing a lot on machine learning, predictive maintenance, which will hopefully lead to less calibration. We look at that we have calibrated this machine 10 times in the last 5 years and it has not failed, so now just let us calibrate it 5 times in the next 5 years. Machine learning has a big part to play in that as well, where we think it is going to automate the whole calibration scheduling and the whole calibration act; where a user can just step back, click a button and go, and it will work seamlessly always remaining compliant, and aligning to the ALCOA goals mentioned previously. That is how we see it changing.

 

Simon Shelley, GlaxoSmithKline

First of all, we are moving into more paperless integrated solution, so, as our technicians go more paperless for all our activities, calibration will be just one of those routines that is also paperless. 

So, I think we will use the data more increasingly for diagnostics and to try increase productivity and just make the operators life more simple by giving them more data available in their hands. I think mobile technology is going to be a real driver for that.

 

Alexander Grimm, Boehringer Ingelheim

By the hope of a higher grade of digitalization, we hope that we have significant improvement in regards of data integrity, but on the other hand also savings and efficiency. 

 

Kevin Croarkin, Novartis

Obviously, at the moment predictive maintenance and digital engineering are the buzzwords in our industry.

I think sometimes people forget that there is a commonly used maintenance pyramid where at the very bottom it is reactive, and then you work your way up where you do preventive maintenance, you’re doing condition-based maintenance, predictive and the proactive. So, it’s really encompassing that whole triangle. 

I think the other side to it is that we are moving toward a more digital and mobile workforce as well.

Obviously with the latest version of CMX and bMobile, we now have a situation where our technicians are able to go into the field, particularly into the hazardous areas where there is no Wi-Fi, with a tablet that they can hold in one hand, which is a massive improvement from the past when they needed a backpack to carry on the front of them. 

 

4. Do you see the industry digitalization changing your calibration work?

 

Ingo Thorwest, Boehringer Ingelheim 

We will go digital, definitely. We see our company already going this way, we see other companies doing this already. It is not only for calibration, it is in all fields of processing data. So, being on paper in 5 years, nobody will talk about that anymore.

 

Tomas Wahlgren, AstraZeneca 

More data analytics, we are collecting a lot of data and we use the data for planning for the future. If we can use a big amount of data from calibration, we can prepare and say that we do not need to calibrate so often, or we can change the way of calibration. 

 

Don Brady, GlaxoSmithKline

In the 3 to 5 year period. I think it is just about making it easier for the technicians to do calibrations, and again with mobility. 

You need the data that we are historically collecting, so that we can analyze where machine learning fits best.

That is the biggest change that is coming and obviously internet 4.0. All of those things will play and apparent part where everything is connected, and everything is integrated. 

 

Simon Shelley, GlaxoSmithKline

The digital revolution is exciting and lots of people are investing in it heavily. I think in the area that is going to amplify is the vulnerability of the OT space, operational technology. 

With cybercrime going up, we are seeing a number of suppliers more integrated to themselves being victims themselves and that's having a secondary impact on us. 

I think we will see a growth in that interconnectivity between companies as they offer services to each other, but I think we are also going to see an increased focus on the OT cyber security.

 

Other relevant blog posts

If you found this article interesting, you might also like these blog posts:

 

 

Many of the world’s leading pharmaceutical and life sciences companies depend upon Beamex calibration solutions. Book a free consultation with our pharma calibration experts to find the best calibration solution for you.

Book a free consultation

 

Beamex’s solution for pharmaceutical industry

The world’s leading Pharmaceutical and Life Science companies depend upon Beamex calibration solutions. Our solutions have been developed for over 40 years by combining our own experience and feedback gained through close partnerships with our customers to help them achieve their calibration related goals of compliance to regulation and productivity gains.

Many of these customers have selected Beamex CMX calibration software, calibrators, and comprehensive launch and support services for their enterprise-wide solution. Calibration data has been connected and shared with their globally deployed ERP, maintenance system or instrument and asset management system, to achieve end-to-end paperless calibration, streamlined processes and truly mobile working, while maximizing the integrity of the data to achieve the highest levels of compliance.

The Beamex calibration solution fulfills the requirements of 21 CFR Part 11 and other relevant regulations for electronic records, electronic signatures and data integrity. The CMX calibration software version 2.11 introduced the “Mobile Security Plus” feature, which offers enhanced functionality and is compatible with offline mobile devices, such as the Beamex MC6 family of documenting calibrators and tablets/smartphones with the Beamex bMobile calibration application. This enhancement further lowers the risk of ALCOA violations by identifying those using offline mobile devices by their electronic signature and prevents offline data tampering.

We offer tools for the mobile worker, including calibration references for pressure, temperature and electrical signals, the Beamex MC6 family of portable documenting calibrators, and the Beamex bMobile application for tablet-based data entry, for use anywhere from clean rooms to the most arduous industrial environments, with the ability to document and sign calibration results in the field. Beamex Mobile Security Plus technology ensures these portable devices deliver the highest levels of data integrity, well in line with regulations such as those of the FDA and MHRA.

 

Please contact us to discuss how we can help you!

 

 

Topics: Calibration, Calibration in pharmaceutical industry

Pressure Switch Calibration

Posted by Heikki Laurila on Mar 30, 2020

banner_Pressure-switch-calibration_1500px_v1

Pressure switches are very common instruments in the process industry, and various kinds of pressure switches are available. Like many instruments, pressure switches need to be calibrated to ensure their accuracy and reliability. Switches are a bit more difficult to calibrate than transmitters. The wrong kind of calibration can cause many errors in the calibration result. In this article, we will look at how to properly calibrate pressure switches.

Before rushing into the calibration process, let's discuss some fundamental characteristics and terminology of pressure switches.

Download this article now!

 

How does a pressure switch work?

Briefly stated, a pressure switch is an instrument that measures pressure and that has an electrical switch function programmed to operate at a certain pressure.

For example, it can be set so that when no pressure is connected (open to atmosphere) the switch is closed, but when pressure increases up to 10 psi, the switch opens. Again, when the pressure drops below 10 psi, the switch closes.

 

Pressure switch terminology

Let’s first very briefly discuss the related terminology:

 

Normally open / Normally closed

Some switches have the switch terminals open when no pressure is connected; these are called normally-open (NO) or closing switches. The opposite is normally-closed (NC), or an opening switch. The selection depends on what kind of circuit you want to drive with the switch.

What is "normally"? There is some debate about the definition of the normally open/closed switch. Most commonly it is defined as the state where the pressure switch output is when it's not connected to any pressure, i.e. it has no physical stimulation. 

Others may define the “normal” state as the state where the switch is during the normal operation of the process (un-tripped).

Pressure-switch_normally-close-and-normally-open_1500px_v2

 

A normally-open switch is open when no pressure is connected. When enough pressure is applied, the switch closes:

Pressure switch calibration - Normally-Open switch - Beamex blog post

 

A normally-closed switch is closed when no pressure is connected. When enough pressure is applied, the switch opens:

Pressure switch calibration - Normally-Closed switch - Beamex blog post

 

A switch will always have some deadband, which is the difference between the two operating points (opening and closing points). Deadband is required because if a switch opened and closed at the same point, it could start oscillating when the pressure is at that limit. It could also switch the circuit on and off at a high frequency if there was no deadband. For example, a closing (NO) pressure switch may close at 10 psi pressure and open again at 9.5 psi pressure, so there is a 0.5 psi deadband.
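
As a purely illustrative sketch (not any vendor's implementation), the following Python snippet models the normally-open switch from the example above and shows how the 0.5 psi deadband makes the output depend on whether the pressure is rising or falling:

# Minimal sketch of a normally-open (closing) switch with deadband.
# Operating points follow the example above: closes at 10 psi, opens at 9.5 psi.
class NormallyOpenSwitch:
    def __init__(self, close_at=10.0, open_at=9.5):
        self.close_at = close_at   # operating point on rising pressure
        self.open_at = open_at     # return point on falling pressure
        self.closed = False        # normally open, so it starts open

    def update(self, pressure):
        if not self.closed and pressure >= self.close_at:
            self.closed = True
        elif self.closed and pressure <= self.open_at:
            self.closed = False
        return self.closed

switch = NormallyOpenSwitch()
for p in (0.0, 9.7, 10.0, 9.7, 9.5):
    print(p, "closed" if switch.update(p) else "open")
# 9.7 psi leaves the switch open on the way up but closed on the way down,
# which is exactly the 0.5 psi deadband described above.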

Some switches operate at rising pressure, others at falling pressure. Sure, you always get one of the functions with rising pressure and the other with falling, but the primary desired function happens in one direction.

There are pressure switches that operate with different pressure types: gauge, absolute, differential or vacuum pressure.

Some older switches are mechanical (or even pneumatic), so inside the switch the pressure is causing the switch to change its state. Most newer types are electronic or digital, so they measure the pressure and control the switch output accordingly. Many modern switches are programmable, so it is easy to set the desired operating points. While mechanical switches don’t need any power supply, the electrical ones need to have one.

When selecting the switch type, consider the state so that if the power supply fails or a cable comes loose, the switch status remains safe. In the case of a safety switch, it should be configured so that if a cable comes loose, the alarm goes on. For example, with a normally-open (closing) switch you won't notice anything if the cable comes loose, since the switch is still open, but it won't perform the desired action when the switch should close. So all in all, you should design it to be fail-safe.

We also talk about dry and wet switches. A dry switch has the connections being open or closed, so it is working like a mechanical switch. A wet switch has two different voltage values representing the two output states.

The output of an electrical wet switch can be a voltage signal with two levels, a current signal, or an open collector type signal.  

Sometimes the switch function can also be implemented in the control system, by measuring the current signal from a transmitter and programming a switch-like function to control something based on the signal level.

In practice, industrial switches often have double switch contacts that can be programmed separately. These can be the normal Lo and Hi points, but also “Lo Lo” and “Hi Hi” points. While the Lo and Hi are the normal control points, the Lo Lo and Hi Hi are alarm limits that trigger more serious alarm actions.

 

Safety pressure switches 

Safety switches are switches used in the safety instrumented systems (SIS), and these switches have certain safety classifications. Also, the calibration of these safety switches is regulated.

A big difference with these switches is that they stay static most of the time without ever operating. They don’t toggle open and closed in normal usage; they simply wait until the safety alarm level is met, and only then do they operate.

As these switches very rarely operate, there is a risk that they will get stuck and not work when they should.

When calibrating, do not exercise these safety switches prior to calibration; instead, capture the very first point at which the switch operates. It can happen that the first operation requires more pressure than the operations after a few exercises.

Normal switches are typically exercised a few times before calibration, but that should not be done for the safety switches.

In a safety switch, the operation point is critical, but often the return point is not that relevant and may not even need to be calibrated.

 

How to calibrate pressure switches

Now, let’s (finally!) discuss how to calibrate pressure switches.

 

Preparations & safety

If the switch is installed in the process, it is very important to make sure it is isolated from the pressure line. You also need to make sure to disconnect any circuit that the switch is controlling - you don’t want big valves to start opening/closing, or pumps to start operating, nor generate a safety alarm.

Some switches may have mains voltage, or another dangerous voltage, across the switch terminals when they open, so make sure that it is isolated.

 

Pressure ramp

To calibrate a pressure switch you need to provide a slowly changing pressure ramp that moves across the operating points of the switch. Depending on the switch type, you need to first supply a suitable pressure to start the calibration.

Often you can start from atmospheric pressure, but in some cases, you need to pump a high pressure and start slowly decreasing the pressure towards the operation point. Or you may need to provide a vacuum to start from. This depends on the switch to be calibrated.

There are different ways to provide the input pressure. You can use a calibration hand pump with a fine adjustment control, you may use shop air supply with a precise pressure controller, or you can use an automatic pressure controller.

It is vital to provide a slow pressure ramp so that you can see the precise pressure at which the switch operated. If the pressure changes too quickly, you cannot accurately capture the pressure at the moment the switch operated.

Certainly, some tools (like the Beamex MC6) can automatically capture the exact pressure during the very moment when the switch changed its status.

Anyhow, remember to change the pressure very slowly when you are approaching the operation points of the switch! You may change the pressure faster when you are not yet close to the operation points.

 

Measuring the switch output

You need some tool to measure the switch terminals. If it is a dry switch, with an open and closed output, you may use an ohmmeter. If the output is electrical, you will need to find a tool that can measure that output; in some cases it may be a voltage meter or a current meter. For electrical outputs, it is sometimes a bit difficult to figure out how to measure the output. In any case, you should be able to recognize the two states of the output and see when the state changes.

With some tools, you can program a trigger level that suits the switch in question which enables the status change to be captured automatically. This is how the Beamex MC6 works.

 

Capturing the operation points

In the switch calibration, you need to capture the input pressure at the very moment when the output state changes.

You can try to capture the input pressure manually, i.e. when the switch state changes, you stop the ramp and read the input pressure (on the device/calibrator that is measuring the input pressure). Most likely there is some delay in your reflexes, so the pressure is already different from what it was at the moment the switch operated. That is the main reason you should provide a very slow input pressure ramp, so that the pressure has not changed much during the delay caused by your reflexes.

Some devices can capture the input pressure automatically at the very same moment when the switch output changes its state. Needless to say, the Beamex MC6 family of calibrators can do that… :-) 

The MC6 can interpolate between the pressure measurement readings. Let me explain: a digital pressure measurement device measures the pressure a few times every second. It may happen that the switch operates between two consecutive pressure readings. In that case, the MC6 looks at the time stamp of the switch operation and interpolates between the two consecutive pressure readings to get the exact pressure value at the moment the switch operated.
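
The following short Python sketch illustrates the interpolation principle described above. It is a simplified illustration only, not Beamex’s actual implementation, and the sample values are made up:

# Estimate the pressure at the switch-toggle timestamp from the two
# surrounding pressure samples, assuming the ramp is roughly linear
# between consecutive readings.
def pressure_at_toggle(t_toggle, t1, p1, t2, p2):
    """Linearly interpolate pressure at time t_toggle between samples
    (t1, p1) and (t2, p2), where t1 <= t_toggle <= t2."""
    fraction = (t_toggle - t1) / (t2 - t1)
    return p1 + fraction * (p2 - p1)

# Example: 9.98 psi sampled at t = 0.00 s, 10.02 psi at t = 0.25 s,
# and the switch toggles at t = 0.10 s -> about 9.996 psi.
print(pressure_at_toggle(0.10, 0.00, 9.98, 0.25, 10.02))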

 

Delayed output

Some industrial switches may have a delay added to the output so that it does not react too quickly. You should find out if your switch has a delay, as then the calibration needs to be done even more slowly than normal.

With some added delay, by the time the output toggles, the input pressure is already far away from the point that actually triggered the output to toggle.

 

Steps in pressure switch calibration:

Here’s a condensed list of steps in pressure switch calibration:

  1. Depressurize & disconnect for safety.
  2. Connect the pressure source and the pressure calibrator to the switch input.
  3. Connect the device to measure the switch output status.
  4. Exercise the switch a few times - pump full pressure and back to zero. Not with safety switches!
  5. Pump the pressure at normal speed until close to the operation point.
  6. Move pressure very slowly across the operation point, until the switch output toggles. Record the operation pressure.
  7. Move pressure very slowly towards the return point, until the switch status toggles. Record the return pressure.
  8. Make the required number of repeats - repeat the two previous steps.
  9. Vent pressure.
  10. Disconnect the test equipment.
  11. Return switch back to service.

 

Naturally, you need to document the switch calibration results.

Also, you need to calculate the errors found in the calibration and compare them to the maximum allowed tolerance for that switch to see if it passed or failed the calibration. If the switch failed the calibration, you need to either adjust or replace it. Even if it passes the calibration, you should still analyze how big the error was. If the error was close to the tolerance limit, or if the switch had drifted much since the last calibration, it is good to adjust it to avoid a failed result in the next calibration.
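
As a simple illustration of this error calculation and pass/fail comparison (the set point, tolerance and found values below are made-up examples, not from any particular procedure):

# Compare the found operating pressure against the nominal set point
# and the maximum allowed tolerance.
def switch_calibration_result(found_operating_pressure, nominal_set_point,
                              tolerance):
    error = found_operating_pressure - nominal_set_point
    verdict = "Pass" if abs(error) <= tolerance else "Fail"
    return error, verdict

# Example: the switch should operate at 10 psi, tolerance +/- 0.2 psi.
print(switch_calibration_result(10.15, 10.0, 0.2))  # error ~0.15 psi -> Pass, but close to the limit
print(switch_calibration_result(10.30, 10.0, 0.2))  # error ~0.30 psi -> Fail, adjust or replace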

And as with every calibration, based on the calibration result history you should consider whether the calibration interval should be changed. You don’t want to waste resources by calibrating too often, but you also don’t want to calibrate so seldom that you get a failed calibration result. A failed calibration result should in any case always trigger an investigation of the consequences, which can be expensive and labor intensive.

 

More discussions on how often instruments should be calibrated can be found in this blog post:

 

And discussions on Fail and Pass calibration can be found here:

 

Documentation, metrological traceability, calibration uncertainty

As documentation is included in the formal definition of calibration, it is a vital part of every calibration. This is also valid in pressure switch calibration, typically in the form of a calibration certificate.

The calibration equipment used should have a valid metrological traceability to the relevant standards, otherwise the calibration does not ensure traceability in the switch calibration. More info on metrological traceability can be found here:

Calibration uncertainty is a vital part of every calibration. If the calibration equipment (and the calibration method and process used) is not accurate enough for the pressure switch calibration, then the calibration does not make much sense. I mean, what’s the point of using a 2 % accurate calibrator to calibrate a 1 % accurate instrument?

 

Learn more about calibration uncertainty here:

 

We also have one older blog post that includes a short video on pressure switch calibration here:

 

Download this article

Click the below picture to download this article as a free pdf file:

Pressure switch calibration - Beamex blog post

 

Beamex solution for pressure switch calibration

As you would guess, Beamex offers solutions for pressure switch calibration. 

Our MC6 family of calibrators can perform documented pressure switch calibrations, either semi-automatically with a calibration pump, or fully automatically with a pressure controller.

You can upload the pressure switch calibration results from calibrator to calibration management software for paperless documentation. 

Please contact us to learn more:

Contact us

 

 

Topics: Pressure calibration, Pressure switch

Temperature Calibration [eBook]

Posted by Heikki Laurila on Feb 19, 2020

Beamex calibration essentials - temperature ebook

 

In this blog post we want to share with you an educational eBook focusing on temperature calibration and other temperature related topics.

Some of these articles have already been posted earlier on the Beamex blog, but now several temperature-related resources have been collected into one handy free eBook.

Just give me the free eBook now! >>

 

Contents of the eBook

The eBook contains the following articles:

  • Uncertainty components of a temperature calibration using a dry block  (Page 4) 
  • Pt100 temperature sensor — useful things to know (Page 13) 
  • Thermocouple Cold (Reference) Junction Compensation (Page 21)
  • Temperature units and temperature unit conversion (Page 27)
  • How to calibrate temperature sensors (Page 31)
  • AMS2750E heat treatment standard and calibration (Page 37)
  • Optimal testing parameters for process instrument calibration (Page 45)

 

Download the free temperature eBook here!

 

Abstracts of the articles

Here are short abstracts of each article included in the eBook:

Uncertainty components of a temperature calibration using a dry block

In this article, we will be covering the different uncertainty components that you should consider when you make a temperature calibration using a temperature dry block.

Making a temperature calibration using a dry block seems like a pretty simple and straight forward thing to do, however there are many possible sources for uncertainty and error that should be considered.

Often the biggest uncertainties may come from the procedure on how the calibration is done, not necessarily from the specifications of the components.

 

Pt100 temperature sensor — useful things to know

Pt100 temperature sensors are very common sensors in the process industry. This article discusses many useful and practical things to know about Pt100 sensors. There’s information on RTD and PRT sensors, different Pt100 mechanical structures, the temperature-resistance relationship, temperature coefficients, accuracy classes and much more.

 

Thermocouple Cold (Reference) Junction Compensation

Even people who work a lot with thermocouples don’t always realize how thermocouples, and especially the cold (reference) junction, work, and therefore they can make errors in measurement and calibration.

In this article, we will take a short look at the thermocouple cold junction and the cold junction compensation. To be able to discuss the cold junction, we need to take first a short look into the thermocouple theory and how a thermocouple works.

We won’t go very deep in the theoretical science but will stick more with practical considerations, the kind of things you should know when you work with thermocouple measurements and calibrations in a typical process plant.

 

Temperature units and temperature unit conversion

This article discusses temperature, temperature scales, temperature units and temperature unit conversions. Let’s first take a short look at what temperature really is, then look at some of the most common temperature units and finally the conversions between them.

 

How to calibrate temperature sensors

Every temperature measurement loop has a temperature sensor as the first component in the loop. So, it all starts with a temperature sensor. The temperature sensor plays a vital role in the accuracy of the whole temperature measurement loop.

As with any measurement instrument that you want to be accurate, the temperature sensor also needs to be calibrated regularly. Why would you measure temperature if you don’t care about the accuracy?

In this article, we will take a look at how to calibrate temperature sensors and what are the most common things you should consider when calibrating temperature sensors.

 

AMS2750E heat treatment standard and calibration

In this article, we will take a look at the AMS2750E standard, with a special focus on the requirements set for accuracy, calibration and test/calibration equipment.

The AMS2750E is predominantly designed for heat treatment in the aerospace industries. Heat treatment is an essential process for many critical parts of an airplane, so it is understandable that there are tight regulations and audit processes set.

While the results and success of some other industrial processes can be relatively easily measured after the process, this is not the case in a heat treatment process. Therefore, very tight control and documentation of the heat treatment process is essential to assure the quality of the end products.

 

Optimal testing parameters for process instrument calibration

Most calibration technicians follow long-established procedures at their facility that have not evolved with instrumentation technology. Years ago, maintaining a performance specification of ±1% of span was difficult, but today’s instrumentation can easily exceed that level on an annual basis. In some instances, technicians are using old test equipment that does not meet new technology specifications.

This paper focuses on establishing base line performance testing where analysis of testing parameters (mainly tolerances, intervals and test point schemes) can be analyzed and adjusted to meet optimal performance. Risk considerations will also be discussed—regulatory, safety, quality, efficiency, downtime and other critical parameters.

A good understanding of these variables will help in making the best decisions on how to calibrate plant process instrumentation and how to improve outdated practices.

 

Download the free temperature eBook here!

 

New temperature calibrator Beamex MC6-T

If you work with temperature calibration, please check out our latest temperature calibrator Beamex MC6-T.

Click the below picture to learn more:

Beamex MC6-T temperature calibrator

 

Links to the individual blog articles

Here are links to the individual blog articles:

 

 

Topics: Temperature calibration

Calibration Trends Featuring Automation & Digitalization [Webinar]

Posted by Heikki Laurila on Jan 23, 2020

Calibration Trends Featuring Automation & Digitalization - Beamex blog post

In this blog post, I am proud to share a recent webinar collaboration with ISA (International Society of Automation) titled "Calibration Trends Featuring Automation & Digitalization."

Calibration automation and digital data capture have long been trends, but effectively combining these approaches to generate the most benefits has recently become a best practice in many process plants. 

Watch this webinar to learn how advanced technology gives you the ability to digitalize your calibration data and standardize your calibration processes to achieve benefits such as confidence in your data integrity, improved plant reliability and increased efficiency.

Check out the below table of contents and jump to what is interesting for you!

Click here to watch the webinar now >>

 

Table of contents:

0:00 (min:sec)

  • Introduction to the webinar

1:10             

  • Introduction of the presenters

3:45             

  • A brief history of calibration automation

4:45             

  • Presentation of the agenda

6:00             

  • Emergence of Digitalization in Calibration Processes
  • Terminology
  • Where do we need digitalization?
  • Industry 4.0
  • Change of Production Process
  • Digital Twins
  • Automated and Predictive Maintenance

18:40           

  • Digitalization in Calibration Workflow
  • Calibration: Paper vs. Digital
  • Calibration workflow – integrated system

22:35           

  • Demo – paperless calibration of a temperature transmitter

31:05           

  • Questions & Answers

38:50           

  • DCC – Digital Calibration certificate
  • Digital Infrastructure for Calibration Process

47:00           

  • Calibration KPIs

52:40           

  • Demo – paperless calibration of a pressure transmitter

59:10           

  • Conclusions

1:03:05        

  • Questions & Answers

 

Watch the webinar

Watch the webinar now by clicking the picture below:

New Call-to-action

 

Want to learn more? 

Check out these related resources:

 

Beamex solution for Automation & Digitalization

We offer an Integrated Calibration Solution that is the combination of software, calibration hardware and calibration expertise that delivers an automated and paperless/digitalized flow of calibration data.

Please visit our web site to learn more on our Integrated Calibration Solution.

 

 

Topics: Webinar, Digitalization

Pressure Transmitter Accuracy Specifications – the small print

Posted by Heikki Laurila on Dec 03, 2019

Pressure transmitter accuracy specifications - Beamex blog post

 

Pressure transmitters are widely used in the process industry. The advertised accuracy specifications of modern pressure transmitters have become better and better.

However, the advertised accuracy specification often tells only part of the truth: it includes only some of the accuracy components affecting the total accuracy that you can expect from the transmitter in practice in your application.

In this blog post, I will examine some popular pressure transmitters’ accuracy specifications and the different accuracy components, such as the effects of re-ranging, ambient temperature, mounting position, static pressure, long term drift, vibration, power supply and more.

I will briefly explain what these components are and what they mean, with a few examples.

Background

We see “number games” being played with some transmitters’ specifications, where they advertise an accuracy number that is just part of the truth, i.e. it is just one of the many accuracy components that you should take into account. In some cases, these advertisements can be confusing and give the wrong impression of the total practical accuracy you will get in your application.

Maybe the competition and the race for the best accuracy numbers have led to this situation, where some manufacturers quote a “limited” accuracy figure on the cover of the brochure and on their website, while the full specifications are only found in the user manual.

Typically, a pressure transmitter’s specifications include several accuracy components that you should take into account when considering the total accuracy.

As mentioned, this blog post will review some popular pressure transmitters’ specifications to give you an idea of the kind of important factors you should take into account and be aware of. I will also list some typical specification numbers for the different partial accuracy components. I am by no means trying to put down or disparage any transmitter.

As the transmitter accuracy affects the accuracy required of your calibration equipment, we also get these accuracy questions from customers. Certainly, the calibrator should be more accurate than the transmitter you calibrate with it, but people have different opinions on what the accuracy ratio between the two should be. In any case, you should be aware of the total uncertainty of the calibration and document it during the calibration.

In any case, the tolerance of your process transmitter should be based on the process requirements, not on the specifications of the transmitter that happens to be installed in that location.

Time to dive into it…

 

Pressure transmitter accuracy components

 

“Reference accuracy”

Often there is a separate “limited” accuracy statement mentioned, typically on the cover of the brochure, or on the website.

This can be called “reference accuracy” or something similar, and it includes only some parts of the accuracy, not all of them. It may include, for example, only linearity, hysteresis and repeatability.

This “best-case accuracy” does not include all the practical accuracy components you should consider (mounting position, ambient temperature, etc.). So, don’t think that this specification is what you can expect in practice from the transmitter when you install it in your process.

This “best-case accuracy” may be for example 0.04 % or even 0.025 % of range, for the most accurate pressure ranges for the most accurate transmitters.

 

Different pressure ranges

Often the best (reference) accuracy is valid only for certain pressure ranges, not for all the ranges available. It may also vary with the pressure type, i.e. an absolute range may be specified differently than a gauge range.

While the best ranges can have, say even a 0.04 % of range accuracy, some other range of that same transmitter model may have, for example, a 0.1 % accuracy.

Accuracy specifications may be doubled or tripled for the different pressure ranges available.  So, make sure you know what the accuracy is for the exact pressure ranges/models that you are using.

 

Re-ranging

HART (smart) transmitters can be re-ranged with a wide ratio. Often you can re-range a transmitter with a turndown ratio of 100:1 or even more. Accuracy specifications are commonly given to the full range, or with a limited turndown ratio.

If the HART transmitter (with a mA output) is re-ranged to a smaller range than the full range, that typically worsens the accuracy. So, if you re-range your transmitter to a smaller range than the maximum range, please make sure you find out if, and how much, error that adds to the accuracy.
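
To illustrate why re-ranging matters, here is a hedged sketch: many (though not all) transmitters specify part of their error relative to the upper range limit (URL), and such a term grows as a percentage of the calibrated span when the span is narrowed. The formula and numbers below are illustrative assumptions only; always use the formula in your own transmitter's data sheet.

# Illustrative only: a fixed "% of URL" error term expressed as "% of span"
# for different turndowns. The URL and the 0.005 % term are made-up values.
def error_as_percent_of_span(url, span, pct_of_url_term):
    return pct_of_url_term * (url / span)

URL = 250.0  # hypothetical upper range limit, e.g. in bar
for span in (250.0, 25.0, 2.5):  # turndown ratios of 1:1, 10:1 and 100:1
    print(span, error_as_percent_of_span(URL, span, 0.005), "% of span")
# 0.005 % of URL is negligible at full span but grows to 0.5 % of span
# at a 100:1 turndown.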

 

Ambient temperature effect

Most pressure transmitters are used in varying environmental conditions in the processes. Also, the temperature of the pressure media may vary widely during usage.

As with most measurement devices, pressure transmitters typically have some kind of temperature coefficient, i.e. there is an accuracy component that depends on the environmental temperature.

The temperature dependency often seems to be specified in a format that is pretty difficult to understand. Try to understand it anyway, and ask the supplier if you can’t figure it out.

Anyhow, looking at different transmitters, this may vary from say 0.01 % of range even up to 0.5 % of range. The worst models seem to specify the temperature effect being more than 1 % of the range.

If the temperature in your process varies a lot, you should take this into account.

 

Static (line) pressure effect

Differential pressure transmitters can be used under static line pressure conditions. This means that both inputs see a certain pressure and the transmitter measures the difference between the two inputs. This is in contrast to a gauge transmitter, which measures pressure against atmospheric pressure, or an absolute transmitter, which measures pressure against a full vacuum.

An ideal differential transmitter would measure only the difference between the inputs, but in practice, the common-mode static line pressure has some effect on the output.

If you have both inputs open to atmospheric pressure, the differential pressure is naturally zero. Also, if you apply the same pressure (say 50 bar/psi) to both inputs, the differential pressure is still zero. In practice, however, that static pressure has some effect on the transmitter output, so the output changes a little when the line pressure changes.

Typically, the line pressure effect can go from 0.025 % of range up to 0.4 % of range, depending on the transmitter model.

Commonly, the line pressure changes mainly the zero of the transmitter, but does not make a significant change to the span. So, in calibration, you can test this effect by applying the same pressure (a low pressure and a high pressure) to both inputs and see how much the zero changes.

Line pressure may also have some effect on the span of the transmitter, which makes it far more difficult to handle and to calibrate, as it requires a differential pressure standard for the calibration.

 

Long term stability

All measurement devices will slowly lose their accuracy over time. Some more, some less. That goes also for the pressure transmitters.

Some pressure transmitters have a 1-year stability specified, some have even a 5- or 10-year specification, or even longer.

For example, a transmitter that has a reference accuracy of 0.04% of range can have 1-year stability of 0.2% of range. Some other models have a similar 0.2 % of range level of specification valid for 5 or even 10 years.

The best one I found was as low as 0.01 % of range as a 1-year stability.

Depending on how often you re-calibrate your pressure transmitters, you should consider the long-term stability effect, as the transmitter may drift that much before the next recalibration (and possible trim).

 

Mounting position (orientation) effect

The mounting position typically has some effect on the accuracy of the pressure transmitter. Most pressure transmitters have a specification for the mounting position.

Typically, a change in the orientation changes the zero and does not affect the span accuracy. In practice, the orientation of the transmitter does not change during normal usage. The orientation should anyhow be considered if you first calibrate the transmitter in a workshop and then install it to the process, or if you remove the transmitter from the process for recalibration.

Certainly, if a transmitter has a remote seal, the location of the capillary tubes will have a big effect on the zero value. Again, this is not something that changes during normal usage, but it may affect the calibration if the transmitter is removed from its installation location.

 

Vibration effect

Many pressure transmitters have a specification for the effect of vibration.

Naturally, this needs to be considered only if the transmitter is installed in a vibrating location.

The vibration effect on accuracy is often relatively small and can be specified, for example, as being “less than 0.1% of range.”

 

Power supply effect

A 2-wire transmitter needs an external power supply to work. Typically, the power supply is a 24 VDC supply.

Transmitters can commonly work on a wide supply voltage range, going even down to 10 VDC.

However, if the supply voltage changes during operation, that can have a small effect on the accuracy of the transmitter. The effect of the power supply voltage is typically small and can be specified as being “smaller than 0.01 % of span per 1 Volt change,” for instance.

In practice, if you have a normal good power supply, this is not an issue.

 

Total accuracy specification

Some transmitters have some kind of “total accuracy” specification that includes several of the common accuracy components. This can include the earlier mentioned “reference accuracy” plus the ambient temperature effect and the static/line pressure effect. This kind of total accuracy figure is more user-friendly, as it gets closer to the real accuracy you can expect from the transmitter.

As an example, the “total accuracy” specification can be 0.14 % of range, while the reference is 0.04 %.

So as soon as you include the temperature and line pressure effects, the reference accuracy gets multiplied by a factor of 3 to 4.

Another example model offers a 0.075 % of range reference accuracy, and when the temperature effect is included it raises to 0.2 %, and when static pressure effects are also included it goes up to 0.3 % of range.

If the transmitter has this kind of “total” accuracy specification, it helps you get a more realistic picture of the accuracy you can expect in practice, even though that “total” accuracy is often still missing some of the accuracy components listed here.

 

Contamination in usage

When a pressure transmitter is used in a process to measure pressure, there is a big risk that the transmitter’s membrane gets contaminated by the pressure media or some dirt. This kind of contamination can have a huge effect on the transmitter’s accuracy.

This is, of course, not something that can be specified, but it is nevertheless a big risk in normal use, especially if you decide on a very long recalibration interval, such as several years. So, in addition to the transmitter’s long-term drift specification, this should be considered in the risk analysis.

If the transmitter gets very dirty and starts to measure significantly wrong, you will normally notice it in the measurement results. But if it only starts to measure slightly wrong, it is difficult to notice in normal usage.

 

Best-case and worst-case examples

When you add up all the above listed different accuracy specifications, you come to the real total accuracy specification you can expect in practice.

Generally, when you combine independent uncertainty components, the common rule is to use the “root sum of the squares” (RSS) method. Just adding all components together as a straight sum would be a worst-case scenario and statistically, in practice, it is not very likely that all components will be in the same direction at the same time. Therefore, this statistical RSS method is used.
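
As a minimal illustration of the RSS method, with made-up component values that do not represent any specific transmitter:

import math

# Combine independent accuracy components with the root sum of the squares
# (RSS) and compare to the worst-case straight sum.
components_pct_of_range = {
    "reference accuracy": 0.04,
    "ambient temperature effect": 0.10,
    "static pressure effect": 0.05,
    "one-year stability": 0.10,
}

rss = math.sqrt(sum(v ** 2 for v in components_pct_of_range.values()))
worst_case = sum(components_pct_of_range.values())

print(f"RSS combination: {rss:.3f} % of range")                   # about 0.155 %
print(f"Straight sum (worst case): {worst_case:.2f} % of range")  # 0.29 %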

To get a best-case summary, we should take all the smallest accuracy components and neglect the ones that may not be relevant.

For the worst-case scenario, we should take all the accuracy components at their maximum and assume they are all present.

 

Best-case accuracy

To get the best-case accuracy, the following assumptions were used:

  • Pick the best reference accuracy
  • Choose the most accurate model and range
  • Don’t do any re-ranging -> no effect on accuracy
  • Use the transmitter in a limited temperature range, close to ambient temperature. Pick the smallest available temperature effect.
  • Assume no static/line pressure effect (used for gauge measurement) -> no effect.
  • Assume no vibration effect -> no effect
  • Assume a good power supply -> no effect
  • Include a one-year drift

After reviewing the specifications for several different transmitters, it seems that the smallest combined accuracy I can find takes me down to around 0.15 % of range. For most other models it seems that the best case is around double that, so around 0.3 % of range at best.

There are also many models that have a larger (worse) best-case accuracy.

 

Worst-case accuracy

To find the worst-case accuracy, the following assumptions were used:

  • Pick a model/pressure range with the biggest accuracy spec
  • Assume some re-ranging happening
  • Use the range with the larger temperature effect
  • Assume static/line pressure effect is present
  • Assume a small vibration effect
  • Assume a small power supply effect
  • Include a one-year drift

Again, looking at the different specifications, it seems that adding up these worst-case accuracy specifications takes us to somewhere around 1 % to 1.5 % of range, even with the most accurate transmitters.

But this figure can also go higher with some models.

 

Summary

As mentioned earlier, modern pressure transmitters are very accurate instruments. It is nevertheless good to read the accuracy specifications carefully, including all the different components that affect accuracy. It is easy to miss these and just look at the one accuracy figure, for example the “reference accuracy,” that is shown in marketing and other materials.

The purpose of this post is to raise your awareness of the different things that have an effect on the total accuracy you can expect in practice.

Of course, the same goes for all measurement equipment, not only pressure transmitters. It is always good to read all the specifications, including the footnotes in small print.

I hope you found this article useful.

 

Beamex solution for pressure calibration

Beamex offers different solutions for pressure calibration, including calibrating pressure transmitters.

Please check out our offering here: Pressure Calibrators.

 

Related blog posts

If you found this article interesting, you might want to check these posts too:

 

 

Topics: Pressure calibration, Transmitter, calibration uncertainty

How calibration improves plant sustainability

Posted by Heikki Laurila on Oct 07, 2019

How calibration improves plant sustainability - Beamex blog post

You only live once: A common phrase used around the world to indicate how one should live their life to the fullest. This is a great concept for individuals to take advantage of the life they have been given, but to assure life for the future, resources and the environment should be taken into consideration in the process. In 1969, sustainability was introduced with the passage of the National Environmental Policy Act (NEPA), and has been an important topic ever since. Sustainable Plant Magazine defines sustainability as, “Operating our business in ways that meet the needs of the present without compromising the world we leave to the future.”

Social, economic, and environmental effects are the three pillars often used to define and measure sustainability.  Calibration plays a critical role impacting these pillars to help maintain sustainability throughout plants. Calibrating process instruments on a regular basis aids in optimizing processes to minimize production downtime, ensure product quality and secure plant reliability. For many plants, calibration is also a critical activity in controlling emissions, as emission-related instruments are often associated with the plant’s license to operate.

The pillars of sustainability

Although social effects are hard to quantify and measure, they still play an important role in maintaining sustainability. Safety is one social factor that tends to be more quantifiable than others: Evident across many industries, plants often display the number of days without injury. Employee safety is a social factor that is every plant's responsibility.

A plant’s overall health and performance is important in protecting not only employees, but the community too. The community may not be directly impacted by an on-the-job injury; however, poor maintenance and operations can lead to harmful impacts on the community, such as toxic gas emissions, out-of-spec products, or worst-case scenarios such as an explosion or a product quality failure that leads to injury or death.

Another social factor is the working and living conditions of the employees and community. Working conditions could include working hours, industrial noise, plant temperature, and harmful toxin release. In some cases, employees are required to live where they work; an oil platform is a good example where social sustainability becomes even more important. Social sustainability is important in maintaining industrial performance for the future.

Economic sustainability in plant operations includes using available resources to increase performance with positive returns on investment and overall plant profit. Economic impacts are typically measured monetarily. If the return on investment is desirable, the plant can consider the resources justifiable. For example, if a calibration software solution helps monitor the overall instrument health of a plant and prevents unplanned shutdowns (that could cost millions of dollars), it is considered an economic solution to help maintain sustainability.

If available resources are not being used, the plant may not be sustainable in the future, especially in competitive markets. In many of those situations, plant personnel do not understand what types of sustainable solutions are available and if they are right for a particular situation. Fortunately, many solution providers offer sustainability and return on investment reports to help distinguish sustainable solutions.

Although economic sustainability involves increasing plant profit by using available resources, the environment should not be compromised in the process. For example, if cheaper raw materials exist that improve overall profit but create harmful and toxic waste that compromises the environment, that solution is not considered sustainable. Environmental conditions must be considered in sustainable solutions.

Sustainability initiatives, regardless of the positive impacts on the social and economic pillars, all depend on the impact made on the environment, because ultimately, the future depends on today’s efforts to maintain a livable environment. Environmental initiatives could include many different projects to decrease negative effects on the natural resources available today. One such project is developing paperless environments or digitalizing data, not only to save trees and reduce waste, but also to create a more economical solution that decreases time spent performing work. Other projects include the design and construction of green buildings that use less energy and water, manufacturing process modifications that reduce greenhouse gas emissions that damage the atmosphere, and restoration of aspects of the environment that have been destroyed in the past, such as greenery and natural streams and rivers.

Different governmental agencies and acts, such as those from the EPA, OSHA and NEPA, have set regulations to help advance sustainability initiatives that promote positive influence on the social, economic, and environmental aspects indicating the importance of sustainability to ensure a future for this planet.

Impact of calibration on plant sustainability

Process instrumentation exists to monitor how much, how high, how little, and how often contents are being used to create a product. Calibrating process instrumentation adheres to the social, economic, and environmental pillars of sustainability. As mentioned above, social sustainability includes the safety of the employees and community. For example, toxic gas emissions are monitored by process instrumentation, which must be calibrated and the results documented to ensure the accurate readings required by regulatory agencies such as the EPA and OSHA.

Using calibration software can help improve plant sustainability in many ways. For instance, calibration software can remind plant personnel when instruments are due for calibration, reducing the chance that instruments are overlooked. An overlooked, uncalibrated instrument could drift out of tolerance and cause the process to emit harmful chemicals into the atmosphere, negatively affecting the environment and potentially harming the community. Calibration helps to ensure the proper function, reliability, and accuracy of instrumentation.

An automated, integrated calibration program can integrate with maintenance management systems (MMS) to help increase quality and decrease the time and money spent on calibration, especially when compared to manual systems such as pen and paper. Many plants receive work orders from the MMS prompting them to perform calibration. Traditionally, results were written down with pen and paper and then entered into several databases: once in a calibration database and once in an MMS. This manual process can take hours of work and is prone to errors, while calibration software saves a considerable number of man-hours and enhances data integrity. Streamlined calibration processes have fast returns on investment and secure plant profit by catching potential failures before they cause unplanned shutdowns, thus making a plant more sustainable for the future.

Beamex solution

Not only does calibration promote sustainability, but Beamex calibration solutions are manufactured with sustainability in mind as well. Beamex’s product development and production teams have received training on the environmental impact of product design. Beamex products are also designed to have a long operating life – typically a customer uses a Beamex calibrator for over ten years. This minimizes the waste generated from the products.

The Beamex production process follows the Waste Electrical and Electronic Equipment (WEEE) directive 2002/96/EC that sets collection, recycling and recovery targets for electrical goods and is part of a European Union legislative initiative to solve the problem of huge amounts of toxic electronic waste. Beamex also takes into consideration the Eco Vadis and ISO 14001 environmental standards in their ISO 9001 quality system.

 

References

Larson, Keith. “Why Sustainability Now?” Sustainable Plant. Putnam Media. 2013. Web. 26 March 2013. <http://www.sustainableplant.com/about-us/

 

Topics: sustainability

How to calibrate temperature sensors

Posted by Heikki Laurila on Aug 27, 2019

How to calibrate temperature sensors - Beamex blog post

 

Temperature measurement is one of the most common measurements in the process industry.

Every temperature measurement loop has a temperature sensor as the first component in the loop. So, it all starts with a temperature sensor. The temperature sensor plays a vital role in the accuracy of the whole temperature measurement loop.

Like any measurement instrument that you want to be accurate, a temperature sensor needs to be calibrated regularly. Why would you measure temperature if you don’t care about the accuracy?

In this blog post, I will take a look at how to calibrate temperature sensors and the most common things you should consider when calibrating them.

Download this article as free pdf file

 

Before we get into details, here is a short video on how to calibrate a temperature sensor:

 

What is a temperature sensor?

Let's start from the basics... discussing what a temperature sensor is: 

As the name indicates, a temperature sensor is an instrument that can be used to measure temperature. It has an output signal proportional to the applied temperature. When the temperature of the sensor changes, the output will also change accordingly.

There are various kinds of temperature sensors with different output signals: some have a resistance output, some a voltage signal, some a digital signal, and so on.

In practice, in industrial applications, the signal from the temperature sensor is typically connected to a temperature transmitter, which converts the signal into a format that is easier to transfer over longer distances to the control system (DCS, SCADA). The standard 4 to 20 mA signal has been used for decades, as a current signal can be transferred over longer distances and the current does not change even if there is some resistance along the wires. Nowadays, transmitters with digital or even wireless signals are being adopted.

In any case, the measuring element used to measure temperature is the temperature sensor.

 

Measuring the temperature sensor output

As most temperature sensors have an electrical output, that output obviously needs to be measured somehow. In other words, you need a measurement device to measure the output (resistance or voltage, for example).

The measurement device often displays an electrical quantity (resistance, voltage), not temperature. So it is necessary to know how to convert that electrical signal into a temperature value.

Most standard temperature sensors have international standards that specify how to calculate the electrical/temperature conversion, using a table or a formula. If you have a non-standard sensor, you may need to get that information from the sensor manufacturer.

There are also measuring devices that can display the temperature sensor signal directly as temperature. These devices also measure the electrical signal (resistance, voltage) and have the sensor tables (or polynomials/formulas) programmed inside, so they convert it into temperature. For example, temperature calibrators typically support the most common RTD (resistance temperature detector) and thermocouple (T/C) sensors used in the process industry.
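
As a simple sketch of such a conversion, assuming a standard IEC 60751 Pt100 RTD, a measured resistance can be converted to temperature (for 0 °C and above) by solving the Callendar-van Dusen equation. This is only an illustration of the principle, not the internal implementation of any particular calibrator:

```python
# Minimal sketch: converting a Pt100 resistance reading to temperature,
# assuming standard IEC 60751 coefficients and a temperature of 0 °C or above,
# where R(t) = R0 * (1 + A*t + B*t**2).
import math

R0 = 100.0        # nominal resistance at 0 °C (ohm)
A = 3.9083e-3     # IEC 60751 coefficients for a standard Pt100
B = -5.775e-7

def pt100_to_celsius(resistance_ohm: float) -> float:
    """Solve R = R0*(1 + A*t + B*t^2) for t (valid for 0 ... 850 °C)."""
    a, b, c = R0 * B, R0 * A, R0 - resistance_ohm
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

print(round(pt100_to_celsius(138.51), 2))   # ~100 °C
```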

 

So how to calibrate a temperature sensor?

Before we go into the various things to consider when calibrating a temperature sensor, let’s take a look at the general principle.

First, since a temperature sensor measures temperature, you need a known temperature in which to immerse the sensor in order to calibrate it. It is not possible to “simulate” temperature; you must create a real temperature using a temperature source.

You can either generate an accurate temperature, or you can use a calibrated reference temperature sensor to measure the generated temperature. For example, you may insert the reference sensor and the sensor to be calibrated into a liquid bath (preferably a stirred one) and you can perform calibration at that temperature point. Alternatively, a so called dry-block temperature source can be used.

As an example, using a stirred ice-bath provides pretty good accuracy for the 0 °C (32°F) point calibration.

For industrial and professional calibration, temperature baths or dry-blocks are typically used. These can be programmed to heat or cool to a certain set point.

In some industrial applications, it is a common practice to replace temperature sensors on regular intervals and not to calibrate the sensors regularly.

 

How to calibrate temperature sensors – things to consider

Let’s start digging into the actual calibration of temperature sensors and the different things to consider...

 

1 - Handling the temperature sensor

Different sensors have different mechanical structures and different mechanical robustness.

The most accurate SPRT (standard platinum resistance thermometer) sensors, used as reference sensors in temperature laboratories, are very fragile. Our temperature calibration laboratory people say that if an SPRT touches something hard enough that you can hear any sound, the sensor must be checked before any further use.

Luckily, most industrial temperature sensors are robust and will survive normal handling. Some industrial sensors are made very robust and can withstand pretty rough handling.

But if you are not sure of the structure of the sensor you are about to calibrate, it is better to be safe than sorry.

It’s never wrong to handle any sensor as if it were an SPRT.

In addition to mechanical shocks, a very fast change in temperature can be a shock to the sensor and damage it or affect its accuracy.

Thermocouples are typically not as sensitive as RTD probes.

 

2 - Preparations

There are normally not that many preparations, but there are some things to take into account. First, perform a visual inspection to make sure the sensor has not been bent or damaged and that the wires are in good condition.

External contamination can be an issue, so it is good to know where the sensor has been used and what kind of media it has been measuring. You may need to clean the sensor before calibration, especially if you plan to use a liquid bath for calibration.

The insulation resistance of an RTD sensor can be measured prior to calibration. This is to make sure that the sensor is not damaged and that the insulation between the sensor and the chassis is high enough. A drop in insulation resistance can cause errors in measurements and is a sign of sensor damage.

 

3 - Temperature source

As mentioned, you need to have a temperature source to calibrate a temperature sensor. It is just not possible to simulate temperature.

For industrial purposes, a temperature dry-block is most commonly used. It is handy and portable and typically accurate enough.

For higher accuracy needs, a liquid bath can be used. It is typically not easily portable, but it can be used in laboratory conditions.

For the 0 °C point, a stirred ice bath is often used. It is pretty simple and affordable, yet provides good accuracy for the zero point.

For the most accurate temperatures, fixed-point cells are used. They are very accurate but also very expensive, and they are mostly used in accurate (and accredited) temperature calibration laboratories.

 

4 - Reference temperature sensor

The temperature is generated with one of the heat sources mentioned in the previous chapter. You obviously need to know, with a very high degree of accuracy, the temperature of the heat source. Dry-blocks and liquid baths have an internal reference sensor that measures the temperature, but for more accurate results you should use a separate, accurate reference temperature sensor inserted into the same temperature source as the sensor(s) to be calibrated. That kind of reference sensor will more accurately measure the temperature that the sensor to be calibrated is measuring.

Naturally, the reference sensor should have a valid, traceable calibration. It is easier to send a reference sensor out for calibration than to send the whole temperature source (though keep in mind the temperature gradients of the temperature block if you only ever have the reference sensor calibrated, not the block).

As for thermodynamic characteristics, the reference sensor should be as similar as possible compared to the sensor to be calibrated, to ensure they behave the same way during temperature changes.

The reference sensor and the sensor to be calibrated should be immersed to the same depth in the temperature source. Typically, all sensors are immersed to the bottom of a dry-block. With very short sensors it gets more difficult, as they will only reach a limited depth into the temperature source, and you should make sure that your reference sensor is immersed equally deep. In some cases, this requires a dedicated short reference sensor.

When using fixed-point cells, you don’t need a reference sensor, because the temperature is based on a physical phenomenon and is very accurate by nature.

 

5 - Measuring the temperature sensor output signal

Most temperature sensors have an electrical output (resistance or voltage) that needs to be measured and converted to temperature. So, you need to have some device to use for the measurement. Some temperature sources also offer measurement channels for the sensors, both the device under test (DUT) and the reference.

If you measure the electrical output, you will need to convert that into temperature, using international standards. In most industrial cases, you will use a measurement device that can do the conversion for you, so you can see the signal conveniently in the temperature unit (Centigrade or Fahrenheit).

Whatever means you use for the measurement, make sure you know the accuracy and uncertainty of the device and ensure it has a valid, traceable calibration.

 

6 - Immersion depth

Immersion depth (how deep you insert the sensor into temperature source) is one important consideration when calibrating temperature sensors.

Our temperature calibration lab people gave this rule of thumb when using a stirred liquid bath:

  • 1% accuracy - immerse 5 diameters + length of the sensing element
  • 0.01% accuracy - immerse 10 diameters + length of the sensing element
  • 0.0001% accuracy - immerse 15 diameters + length of the sensing element

Heat conduction in a stirred liquid bath is better than in a dry-block and the required immersion depth is smaller.

For dry-blocks, there is a Euramet recommendation that you should immerse the sensor to 15 times its diameter plus the length of the sensor element. So, if you have a 6 mm diameter sensor with a 40 mm element inside, you immerse it (6 mm x 15 + 40 mm) 130 mm.
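
If it helps, here is that rule of thumb expressed as a tiny helper function; this is just an illustration of the arithmetic, not a function from any standard or library:

```python
# Euramet rule of thumb for dry-blocks: required immersion depth is roughly
# 15 x probe diameter plus the length of the sensing element.
def dry_block_immersion_mm(probe_diameter_mm: float, element_length_mm: float) -> float:
    return 15 * probe_diameter_mm + element_length_mm

print(dry_block_immersion_mm(6, 40))   # 130 mm, as in the example above
```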

Sometimes it is difficult to know how long the actual element is inside the sensor, but it should be mentioned in the sensor specifications.

Also, you should be aware of where the sensor element is located (it is not always in the very tip of the sensor).

The sensor to be calibrated and the reference sensor should be immersed to the same depth, so that the middle points of the actual sensor elements are at the same depth.

Naturally with very short sensors, it is not possible to immerse them very deep. That is one reason for the high uncertainty when calibrating short sensors.

 

7 - Stabilization

Remember that a temperature sensor always measures its own temperature!

Temperature changes pretty slowly, and you should always wait long enough for all parts to stabilize at the target temperature. When you insert a sensor into a temperature source, it will always take some time before the sensor reaches that temperature and stabilizes.

Your reference sensor and the sensor to be calibrated (DUT) may have very different thermodynamic characteristics, especially if they are mechanically different.

Often, one of the biggest uncertainties in temperature calibration comes from performing the calibration too quickly, before everything has stabilized.

If you most often calibrate similar kinds of sensors, it is wise to make some type tests to learn the behavior of those sensors.

 

8 - Temperature sensor handle

The sensor handle part, or the transition junction, typically has a limit of how hot it can be. If it is heated too hot, the sensor may be damaged. Make sure you know the specifications of the sensors you calibrate.

If you calibrate in high temperatures, it is recommended to use a temperature shield to protect the sensor handle.

 

9 - Calibrated temperature range

With temperature sensors, it is pretty common that you don’t calibrate the whole temperature range of the sensor.

Be careful when calibrating at the very top of the range. For example, an RTD sensor can drift permanently if you calibrate it at too high a temperature.

Also, the coldest points of the sensor’s temperature range can be difficult/expensive to calibrate.

So, it is recommended to calibrate the temperature range that the sensor is going to be used in.

 

10 - Calibration points

In industrial calibration, you need to pick enough calibration points to see that the sensor is linear. Often it is enough to calibrate 3 to 5 points throughout the range.

Depending on the sensor type, you may need to take more points, if you know that the sensor may not be linear.

If you calibrate platinum sensors and you plan to calculate coefficients based on the calibration results, you will need to calibrate at suitable temperature points to be able to calculate the coefficients. The most common coefficients for platinum sensors are the ITS-90 and Callendar-van Dusen coefficients. For thermistors, Steinhart-Hart coefficients can be used.
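
As a rough sketch of what calculating coefficients from calibration results can look like, the snippet below fits Callendar-van Dusen A and B coefficients (valid above 0 °C) to a few calibration points with a least-squares fit. The calibration data values are made up purely for illustration, and this is not Beamex software, just the general idea:

```python
# Fitting Callendar-van Dusen coefficients A and B (t >= 0 °C) from calibration
# points, using R(t) = R0 * (1 + A*t + B*t**2), which is linear in
# (R0, R0*A, R0*B). The calibration data below is hypothetical.
import numpy as np

t_points = np.array([0.0, 50.0, 100.0, 150.0])          # reference temperatures (°C)
r_points = np.array([100.01, 119.40, 138.51, 157.33])   # measured resistances (ohm)

design = np.column_stack([np.ones_like(t_points), t_points, t_points ** 2])
coeffs, *_ = np.linalg.lstsq(design, r_points, rcond=None)   # [R0, R0*A, R0*B]

R0 = coeffs[0]
A = coeffs[1] / R0
B = coeffs[2] / R0
print(f"R0 = {R0:.3f} ohm, A = {A:.4e} /°C, B = {B:.4e} /°C^2")
```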

When sensors are calibrated in an accredited laboratory, the points may also be selected based on the lab’s smallest uncertainty.

 

11 - Adjusting / trimming a temperature sensor

Unfortunately, most temperature sensors cannot be adjusted or trimmed. So, if you find an error in calibration, you cannot adjust it away. Instead, you will need to use coefficients to correct the sensor’s reading.

In some cases, you can compensate for the sensor error elsewhere in the temperature measurement loop (in the transmitter or in the DCS).

 

Other things to consider

Documentation

As with any calibration, the temperature sensor calibration needs to be documented in a calibration certificate.

 

Traceability

In calibration, the reference standard used must have valid traceability to national standards or equivalent. The traceability should be an unbroken chain of calibrations, each having stated uncertainties.

For more information on metrological traceability, please see the blog post Metrological Traceability in Calibration – Are you traceable?

 

Uncertainty

As always in calibration, also in temperature sensor calibration, you should be aware of the total uncertainty of the calibration process. In temperature calibration the calibration process (the way you do the calibration) can easily be by far the biggest uncertainty component in the total uncertainty.

For more information on calibration uncertainty, please see the blog post Calibration uncertainty for dummies.

 

Automating the calibration

Temperature calibration is always a pretty slow operation, since temperature changes slowly and you need to wait for stabilization. You can benefit a lot if you can automate your temperature calibrations. The calibration will still take a long time, but if it is automated, you don't need to be there waiting for it.

This will naturally save time and money for you.

Also, when automated, you can be sure that the calibration is always done the same way.

 

Download free white paper

Click the picture below to download this article as a free pdf file:

How to calibrate temperature sensors - Beamex blog post

 

Other related blogs

If you found this blog post interesting, you may also like the ones listed below. Please feel free to browse all the articles in the Beamex blog; you may find more articles worth reading.

 

Beamex solutions for temperature calibration

Please check out the new Beamex MC6-T temperature calibrator, which is a perfect tool for temperature sensor calibration and much more. Click the picture below to read more:

Beamex MC6-T temperature calibrator

Please check also what else Beamex can offer you for temperature calibration or for temperature calibration services.

 

Temperature Calibration eLearning

Free eLearning course on industrial temperature calibration.

Master temperature calibration with this free comprehensive eLearning course from Beamex. Deepen your knowledge, pass the quiz, and earn your certificate!

Read more and enroll >

 

Temperature Sensor Calculator

A free tool to easily convert between temperature and electrical signals for thermocouples and RTD sensors. 

https://www.beamex.com/resources/temperature-sensor-calculator/

 

Thanks to our accredited temperature calibration laboratory personnel for their help in making this article. Special thanks to Mr. Toni Alatalo, the head of our accredited temperature laboratory!

 

Topics: Temperature calibration

Why use calibration software?

Posted by Chuck Boyd on Jul 24, 2019

The shortest answer is to automate documentation to save time, lower risks, and quickly analyze data to make better decisions.

Beamex CMX with Beamex MC6 and bMobile

Most process plants have some sort of system in place for managing instrument calibration operations and data. However, just like airport security, the systems and processes can be very different even within the same company across different plants. Methods often differ greatly in terms of cost, quality, efficiency, and accuracy of data and the level of automation.

If you are manually writing results on paper or even manually entering data electronically, you’re spending about half your time on paperwork.  Using a documenting calibrator to automatically transfer test data to calibration software designed for the task can decrease the amount of time spent on calibration in many cases by up to 75%.

If you’re thinking about leaving the paper documentation lifestyle, using calibration software should be the ultimate goal.  On your way there, you could store results in a spreadsheet or generic database.  That will get you paperless, but it won’t realize all the benefits. The risk of human error and compromised data integrity will still be high and data entry will be time consuming.  It won’t automate updating calibration due dates like software designed for the job.  Here’s a secret you may not know—many people still write down calibrations on paper. They think that they are the only ones and are usually embarrassed at the thought and hesitant to reach out for help. If this is you, know you are not alone.  Start by reading this blog post and asking for help!

Paper-based systems

Beamex integrated calibration diagram

Traditionally, engineers and technicians used pen and paper to record calibration results while out in the field. On returning to the shop, notes are tidied up or transferred to another paper document, after which they are archived as paper documents. Inherent in managing hundreds or even thousands of pieces of paper is the eventual misplaced, lost or damaged document.

While using a manual, paper-based system requires little or no investment, it is very labor-intensive and means that historical trend analysis becomes very difficult to carry out.

In addition, the calibration data is not easily accessible. The system is time consuming, soaks up a lot of resources and typing errors are commonplace. Dual effort and re-keying of calibration data are also significant costs here.

In-house legacy systems (spreadsheets, databases, etc.)

Although certainly a step in the right direction, using an in-house legacy system to manage calibrations has its drawbacks. In these systems, calibration data is typically entered manually into a spreadsheet or database. The data is stored in electronic format, but the recording of calibration information is still time-consuming and typing errors are common. Also, the calibration process itself cannot be automated. For example, automatic alarms cannot be set up on instruments that are due for calibration.

Calibration module of a maintenance management software

Some use the calibration module of their maintenance management software for calibration management. Plant hierarchy and work orders can be stored in it, but the calibration cannot be automated because the system is not able to communicate with ‘smart’ calibrators.

Furthermore, these software packages are not designed to manage calibrations and so often only provide the minimum calibration functionality, such as the scheduling of tasks and entry of calibration results. Although instrument data can be stored and managed efficiently in the plant’s database, the level of automation is still low. In addition, the system may not meet the regulatory requirements (e.g. FDA or EPA) for managing calibration records.

Calibration Software 

With calibration software, users are provided with an easy-to-use Windows Explorer-like interface. The software manages and stores all instrument and calibration data. This includes the planning and scheduling of calibration work; analysis and optimization of calibration frequency; production of reports, certificates and labels; communication with smart calibrators; and easy integration with maintenance management systems such as SAP and Maximo. The result is a streamlined, automated calibration process, which improves quality, plant productivity, safety and efficiency.

Calibration software is the most advanced solution available to support and guide calibration management activities. In order to understand how software can help you better manage process plant instrument calibrations, it is important to consider the typical calibration management tasks that companies undertake. There are five main areas here, comprised of planning and decision-making, organization, execution, documentation, and analysis.

Planning and decision-making

All plant instruments and measurement devices should be listed, then classified into ‘critical’ and ‘non-critical’ devices. Once these have been set up, the calibration ranges and required tolerances should be identified. Decisions then need to be made regarding the calibration interval for each instrument. The creation and approval of standard operating procedures (SOPs) for each device should be defined, followed by the selection of suitable calibration methods and tools for execution of these methods. Finally, the current calibration status for every instrument across the plant should be identified.

Organization

The next stage, organization, involves training the company’s calibration staff – typically maintenance technicians, service engineers, process and quality engineers and managers – in using the chosen tools and how to follow the approved SOPs. Resources should be made available and assigned to actually carry out the scheduled calibration tasks.

Execution

The execution stage involves supervising the assigned calibration tasks. Staff carrying out these activities must follow the appropriate instructions before calibrating the device, including any associated safety procedures. The calibration is then executed according to the plan, although further instructions may need to be followed after calibration.

The documentation and storage of calibration results typically involves electronically signing or approving all calibration records generated.

Based on the results, analysis should be performed to determine if any corrective action needs to be taken. The effectiveness of calibration needs to be reviewed and calibration intervals checked. These intervals may need to be adjusted based on archived calibration history. If, for example, a sensor drifts out of its specification range, the consequences could be disastrous for the plant, resulting in costly production downtime, a safety problem or leading to batches of inferior quality goods being produced, which may then have to be scrapped.

Documentation

Documentation is a very important part of a calibration management process. Many regulatory agencies and auditors require that records are maintained and that calibrations are carried out according to written, approved procedures. Without documentation proving the traceability of the measurement standards used, the result, by definition, cannot be considered a calibration.

An instrument engineer can spend as much as 50% of his or her time on documentation and paperwork – time that could be better spent on other value-added activities. This paperwork typically involves preparing calibration instructions to help field engineers; making notes of calibration results in the field; and documenting and archiving calibration data.

Imagine how long and difficult a task this is if the plant has thousands of instruments that require calibrating at least every six months. The amount of manual documentation increases almost exponentially!

Any type of paper-based calibration system will be prone to human error. Noting down calibration results by hand in the field and then transferring these results into a spreadsheet back at the office may seem archaic, but many firms still do this. However, with regulatory agencies requiring data integrity procedures, many are turning digital. Furthermore, analysis of paper-based systems and spreadsheets is time consuming, if not almost impossible.

Analysis

Calibration history trend

(Example of automatically generated history trend above)

Using specialist calibration management software enables faster, easier and more accurate analysis of calibration records and identification of historical trends.

Plants can therefore reduce costs and optimize calibration intervals by reducing calibration frequency when this is possible, or by increasing the frequency where necessary.

For example, for improved safety, a process plant may find it necessary to increase the calibration frequency of some sensors that are located in a hazardous, potentially explosive area of the manufacturing plant.

Just as important, by analyzing the calibration history of a flow meter that is located in a ‘non-critical’ area of the plant, the company may be able to decrease the frequency of calibration, saving time and resources. Rather than rely on the manufacturer’s recommendation for calibration intervals, the plant may be able to extend these intervals by looking closely at historical trends provided by calibration management software. Instrument ‘drift’ can be monitored closely over a period of time and then decisions made confidently with respect to amending the calibration interval.

Benefits of Using Calibration Software

Beamex Calibration Certificate

(Example calibration certificate above)

With software-based calibration management, planning and decision-making are improved. Procedures and calibration strategies can be planned and all calibration assets managed by the software. Instrument and calibrator databases are maintained, while automatic alerts for scheduled calibrations can be set up.

Organization also improves. The system no longer requires pens and paper. Calibration instructions are created using the software to guide engineers through the calibration process. These instructions can also be downloaded to a technician’s handheld documenting calibrator for use in the field.

Execution is more efficient and errors are eliminated. Using software-based calibration management systems in conjunction with documenting calibrators means that calibration results can be stored in the calibrator’s memory, then automatically uploaded back to the calibration software. There is no re-keying of calibration results from a notebook to a database or spreadsheet. Human error is minimized and engineers are freed up to perform more strategic analysis or other important activities.

Documentation is easier. The software generates reports automatically and all calibration data is stored in one database rather than multiple disparate systems. Calibration certificates, reports and labels can all be printed out on paper or sent in electronic format.

Analysis becomes easier too, enabling engineers to optimize calibration intervals using the software’s trending function. Also, when a plant is being audited, calibration software can facilitate both the preparation and the audit itself. Locating records and verifying that the system works is effortless when compared to traditional calibration record keeping. Regulatory organisations and standards such as FDA and EPA place demanding requirements on the recording of calibration data. Calibration software has many functions that help in meeting these requirements, such as Change Management, Audit Trail and Electronic Signature functions.

 

Business Benefits

For the business, implementing software-based calibration management means overall costs will be reduced. These savings come from fully digitized calibration procedures, now paperless  with no manual documentation. Engineers can analyze calibration results to see whether the calibration intervals on plant instruments can be optimized. For example, those instruments that perform better than expected may well justify a reduction in their calibration frequency.

Plant efficiencies should also improve, as the entire calibration process is now streamlined and automated. Manual procedures are replaced with automated, validated processes, which is particularly beneficial if the company is automating a lot of labour-intensive calibration activities. Costly production downtime will also be reduced.

Even if a plant has already implemented maintenance management software, calibration management software can easily be integrated with this system. If the plant instruments are already defined in a database, the calibration management software can utilize the records available in the system database.

The integration will save time, reduce costs and increase productivity by preventing unnecessary double effort and rekeying of work orders in multiple systems. Integration also enables the plant to extend automated data acquisition to their ERP system with smart calibrators, which simply is not possible with a standalone system.

Beamex Solutions

Beamex’s suite of calibration management software can benefit all sizes of process plant. For relatively small plants, where calibration data is needed for only one location, only a few instruments require calibrating, and regulatory compliance requirements are minimal, Beamex LOGiCAL calibration software may be the most appropriate.

For companies that have a medium to large number of instruments and calibration work, or strict regulatory compliance requirements, Beamex CMX Professional is ideal. It fulfills the requirements of 21 CFR Part 11 and other relevant regulations for electronic records, electronic signatures and data integrity. It also offers Mobile Security Plus, which provides enhanced functionality with compatible offline mobile devices, such as the Beamex MC6 family of documenting calibrators and tablets/smartphones with the Beamex bMobile application. This enhancement further lowers the risk of ALCOA violations by identifying those using offline mobile devices by their electronic signature and by protecting the offline data against tampering.

Along with CMX, the Beamex bMobile application allows for paperless execution and documentation of inspection activities in the field. It works offline as well, which is ideal where reliable network connections are not available. bMobile also supports Beamex’s “Mobile Security Plus” technology, a solution to ensure the integrity of calibration data throughout the entire process.

Beamex’s multi-site solution, CMX Enterprise, is suitable for process manufacturers with multiple or global sites, multilingual users and a very large number of instruments that require calibration. Here, a central calibration management database is often implemented, which is used by multiple plants across the world.

Please see also our Calibration Software main page. 

Summary

Every type of process plant, regardless of industry sector, can benefit from using calibration management software. Compared to traditional paper-based systems, in-house-built legacy calibration systems or the calibration modules of maintenance management systems, using dedicated calibration management software results in improved quality, increased productivity and reduced costs across the entire calibration process.

Calibration Software key benefits:

  • Better planning and decision-making
  • Easier organization
  • Faster execution
  • Automated documentation
  • Analysis capabilities
  • Cost reduction
  • Quality improvements
  • Increased efficiency

 

Download your copy of the Calibration Essentials Software eBook, to learn more about calibration management and software.

Calibration Essentials- Software eBook

 

Topics: Calibration software

Calibration uncertainty and why technicians need to understand it [Webinar]

Posted by Heikki Laurila on Jun 27, 2019

Calibration uncertainty and why technicians need to understand it - Beamex webinar

In this blog post, I want to share a webinar titled "Calibration uncertainty and why technicians need to understand it." This webinar was a collaboration with Beamex and ISA (International Society of Automation) subject matter experts.

It describes a practical approach to calibration uncertainty and  also provides real-life applications.

The webinar speakers are Beamex's own "dynamic duo", Ned and Roy. Both Ned Espy and Roy Tomalino have had long careers with Beamex and have many years' experience with calibration.

Please click the picture below (under the topic list) to watch the webinar.

To make it easier for you to jump to a relevant part of the webinar, below is a list of the main topics and their times in the recording.

Calibration uncertainty and why technicians need to understand it - content list

 

Watch the webinar by clicking the picture below:

Calibration uncertainty webinar - Beamex

 

More on calibration uncertainty

For additional information on calibration uncertainty, please check out the blog post Calibration uncertainty for Dummies. 

Other webinar blog posts

If you like webinars, here are some related ones we have posted in our blog:

Products used in the webinar demo sessions

In the demo sessions, Roy is using Beamex MC6 Calibrator and Beamex CMX Calibration Management software.

 

 

Topics: calibration uncertainty

How a business analyst connected calibration and asset management [Case Story]

Posted by Heikki Laurila on May 14, 2019

Beamex blog post - Picture of SRP site - How a business analyst connected calibration and asset management [Case Story]

How one of America’s largest public power utilities integrated its asset management software with calibration software to reduce risk and increase efficiency

We haven't shared customer case stories on this blog before. However, it is interesting for people to read what other companies have done and to learn best practices from them. Hopefully you find this story useful. Let's dig into it:

Salt River Project (SRP)

For more than a century, Salt River Project (SRP) has produced power and delivered water to meet the needs of its customers in the greater Phoenix metropolitan area. Today, as one of the nation's largest public power utilities, SRP provides reliable electricity and water to more than 1 million customers and employs around 4,500 people.

Jody Damron

Jody Damron, a Business Analyst at Salt River Project’s corporate headquarters in Tempe, Arizona, has been serving the company for more than 40 years and has helped develop Salt River Project’s calibration processes. Several years ago, he started to investigate the possibility of linking their calibration software, Beamex CMX, to their asset management software, Maximo.

IT Projects

Jody began by researching IT integration projects. He was soon amazed to discover the mind-boggling number of failed projects, costing companies up into the trillions of dollars. He read about major failures where no progress was made, even situations in which companies were forced to go back to the original way after failed attempts. He declared, right then and there, that “failure is not an option.”

Project Team

Through a preliminary analysis, he concluded that an integration project would require a substantial amount of planning and input from a team of internal departmental experts to ensure that it functioned appropriately for all parties involved. He also knew the external parties, or vendors, would be just as vital to their success.

 

Beamex blog post - How a business analyst connected calibration and asset management - Salt River Project project team

It was important that he put together a quality team (Fig. 1) that he trusted, because he knew he had to rely on everyone’s input and expertise. During this process, he learned important lessons about building a successful team. Jody soon discovered how each party tended to speak different technical languages as well as have different goals and ideology. He determined that communication was going to be the key to success. Jody explains, “the business will say they need an apple cut in 6 pieces and the IT side will hear cut a watermelon in half. Technical, cultural and language communication barriers are real challenges that needed full attention."

He knew they would run into many implementation roadblocks if the team did not work together during the entire process. The team stayed focused on the detailed requirements and met often to review the business expectations.

 

Responsibilities of vendors and customer

As important as it is for the entire project team to understand everyone’s roles and responsibilities to ensure efforts weren’t duplicated or missed altogether, it was also essential to define the roles of the vendors and establish clear operation guidelines. The following chart (Fig. 2) defines responsibilities along with brief descriptions for some of the sector’s key duties:

Beamex blog post - How a business analyst connected calibration and asset management - Salt River Project team roles.

 

  • Business: Data integrity is an important and an ongoing process. For SRP, it has never stopped since it first began in 1974. It is a time consuming, but important process – one which can go south in a very short period of time if it is not continually monitored. SRP put a lot of man hours into ensuring clean data. 
  • Beamex CMX calibration software: SRP relied on Beamex’s expertise. Beamex acted as consultants and were quick to communicate how the integration could work most efficiently and made no empty promises.
  • Maximo: The Maximo team worked hand in hand with SRP technicians to meet business expectations and functionality requirements.
  • Integration: It was imperative to make sure the right data was transferred back and forth between systems in the correct manner.

After analyzing all of these factors and gathering information from the project team, risks had to be considered so that Jody could be 100% confident that the integration would be successful. After all, failure was not an option.

 

How it works today

Upon completion of in-depth analysis by the team, Jody determined that the integration could be completed to meet both the business and IT needs. As Jody eloquently puts it, “it’s extremely simple, if you think of how complicated it could be.” 

These are the basic rules used to form SRP’s system:

  1. Beamex CMX is the calibration system of record that stores the detailed calibration information. 
  2. Maximo tracks all plant assets and is the master of scheduling.
  3. As for calibration, the only information Maximo needs is if an instrument passed or failed during the calibration event. 
  4. In Maximo, there are two types of instrument assets. The first type are regular instrument assets that are never calibrated, for example an orifice plate. Secondly, there are calibrate-able assets, for example a transmitter.
  5. For a Maximo asset to be transferred into CMX, the asset has to be defined as a calibrate-able asset. Out of 28,000 instruments, there are 7,700 assets that require calibration and meet the calibrate-able asset criteria.
  6. If a Maximo work order is written or automatically generated by the preventive maintenance application for a calibrate-able asset, it automatically flows into CMX. This is critical because the rules create a built-in method of security that does not allow “garbage” data to be transferred back and forth. This ensures good data integrity for both software platforms. If a work order is not for a calibrate-able asset, it does not go to CMX.
  7. Work orders are generated by a planner. Technicians will paperlessly pick them up and calibrate them. This process allows field personnel to work only within CMX, and they do not deal with work orders in Maximo, saving them time, money and frustration.

 

For example, during a typical unit overhaul, many of the site’s 7,700 calibrate-able instrument assets need to be tested. Work orders are planned, put into progress, the information is automatically transferred to CMX and the technician is alerted by the planner via email. The technician can then download the asset test information to a Beamex MC6 calibrator & communicator and perform the necessary work. Since the MC6 is a multifunction, documenting calibrator, the entire calibration process is automated because the results are stored in the calibrator’s memory. When the technician returns to the shop, they upload results into CMX. When a calibration test passes, an automatic notification is sent back into Maximo that closes the work order and documents who performed the work and when it was done. A failure requires the initiation of a follow up work order.

 

Summary and the results

The most significant impact overall is that Salt River Project has been able to save about 30 minutes per calibration using an automated approach. This equates to up to 1,000 man-hours in the previously cited unit overhaul example. Further savings are anticipated as history analysis confirms where extended calibration intervals can be recommended. It is important to note that SRP’s work order history for calibration is 100% automated and technicians never work in Maximo. Other major benefits of the automated calibration system include:

  • System oversight has been minimized.
  • Audits are easy to perform and are less stressful.
  • Defined calibration procedures provide a corporate “best practices” approach to calibration.
  • Better decision making because of accurate data.

In the simplest terms, the new Beamex/Maximo calibration system gives back time to the people working in the field. As a result, as Jody explains, “With this software integration project, we were able to realize a significant return on investment during the first unit overhaul. It’s unusual, since ROI on software projects is usually nonexistent at first.”

 

Download a pdf 

Download a pdf of this story >> 

 

Read more case stories

To read more case stories like this, click the link below:

Read more case stories >>

 

Read more about Beamex CMX Calibration Management Software

 

Topics: Calibration software, Case Story

Optimal Calibration Parameters for Process Instrumentation

Posted by Ned Espy on Apr 24, 2019

Optimal Calibration Parameters for Process Instrumentation

Many calibration technicians follow long-established procedures at their facility that have not evolved with instrumentation technology. Years ago, maintaining a performance specification of ±1% of span was difficult, but today’s instrumentation can easily exceed that level on an annual basis. In some instances, technicians are using old test equipment that does not meet new technology specifications. This article focuses on establishing baseline performance testing so that calibration parameters (mainly tolerances, intervals and test point schemes) can be analyzed and adjusted for optimal performance. Risk considerations will also be discussed – regulatory, safety, quality, efficiency, downtime and other critical parameters. A good understanding of these variables will help in making the best decisions on how to calibrate plant process instrumentation and how to improve outdated practices.

 

Introduction

A short introduction to the topics discussed in this post: 

How often to calibrate?

The most basic question facing plant calibration professionals is how often a process instrument should be calibrated. There is not a simple answer, as there are many variables that affect instrument performance and thereby the proper calibration interval. These include:

  • Manufacturer’s guidelines (a good place to start)
  • Manufacturer’s accuracy specifications
  • Stability specification (short term vs. long term)
  • Process accuracy requirements
  • Typical ambient conditions (harsh vs. climate controlled)
  • Regulatory or quality standards requirements
  • Costs associated with a failed condition

Pass/Fail tolerance

The next question for a good calibration program is what is the “Pass/Fail” tolerance? Again, there is no simple answer and opinions vary widely with little regard for what is truly needed to operate a facility safely while producing a quality product at the best efficiency. A criticality analysis of the instrument would be a good place to start. However, tolerance is intimately related to the first question of calibration frequency. A “tight” tolerance may require more frequent testing with a very accurate test standard, while a less critical measurement that uses a very accurate instrument may not require calibration for several years. 

Calibration procedures

Another question to answer is how best to determine and implement proper calibration procedures and practices. In most cases, methods at a particular site have not evolved over time. Many times, calibration technicians follow practices that were set up many years ago and it is not uncommon to hear, “this is the way we have always done it.” Meanwhile, measurement technology continues to improve and is becoming more accurate. It is also getting more complex – why test a fieldbus transmitter with the same approach as a pneumatic transmitter? Performing the standard five-point, up-down test with an error of less than 1% or 2% of span does not always apply to today’s more sophisticated applications. As measurement technology improves, so should the practices and procedures of the calibration technician.

Finding the optimum...

Finally, plant management needs to understand that the tighter the tolerance, the more it will cost to make an accurate measurement. It is a fact that all instruments drift to some degree. It should also be noted that every make/model of instrument has a unique “personality” for performance in a specific process application. The only true way to determine optimum calibration parameters is to record calibrations in a way that allows performance and drift to be analyzed. With good data and test equipment, the lowest practical tolerance can be maintained while balancing it with an optimum schedule. Once these parameters are established, the associated costs to perform a calibration can be estimated to see if there is justification to purchase a more sophisticated instrument with better performance specifications, or more accurate test equipment, in order to achieve even better process performance.

Download this article as a free pdf file by clicking the picture below:

Optimal Calibration Parameters for Process Instrumentation - Beamex white paper

 

Calibration basics 

Optimum calibration interval

Determining a proper calibration interval is an educated guess based on several factors. A best practice is to set a conservative interval based on what the impact of a failure would be in terms of operating in a safe manner while producing product at the highest efficiency and quality. It is also important to review calibration methods and determine the best practices where there will be a minimal impact on plant operations. By focusing on the most critical instruments first, an optimum schedule can be determined and would allow for less critical testing if personnel have availability.

Since all instrumentation drifts no matter the make/model/technology, suppliers end up creating vastly different specifications, making it difficult to compare performance. Many times there are several complicating footnotes written in less than coherent terminology. Instrument performance is not always driven by price. The only true way to determine an optimum interval is to collect data and evaluate drift for a specific make/model of instrument over time.

Starting off with a conservative interval, after 3 tests a clear drift pattern may appear. For example, a particular RTD transmitter is tested every three months. The second test indicates a maximum error drift of +0.065% of span. The third test indicates another +0.060% of span (+0.125% of span over 6 months). While more data should be used for analysis, a good guess is that this instrument drifts +0.25% per year. Statistically, more data equates to a higher confidence level. If this pattern is common among many of the same make/model RTD transmitters in use throughout the plant, the optimum calibration interval for a ±0.50% of span tolerance could be set between 18 and 24 months with a relatively high level of confidence.
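As a rough illustration of that extrapolation, here is a minimal Python sketch using the example numbers above; the 80% "safety margin" on the tolerance is an assumption for illustration, not a value from this article:

```python
# Minimal sketch: estimate a calibration interval from observed drift,
# using the illustrative RTD-transmitter numbers above (not universal values).

def interval_months(drift_per_year_pct, tolerance_pct, safety_factor=0.8):
    """Months until extrapolated (roughly linear) drift uses up a chosen
    fraction of the tolerance. safety_factor is an assumed margin."""
    years_to_limit = (tolerance_pct * safety_factor) / drift_per_year_pct
    return years_to_limit * 12

# Observed: +0.065% and +0.060% of span over two consecutive 3-month periods
drift_per_year = (0.065 + 0.060) / 0.5          # about +0.25% of span per year
print(round(interval_months(drift_per_year, tolerance_pct=0.5)))  # about 19 months
```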

When collecting data on calibration, it is a good practice to not make unnecessary adjustments. For example, if the tolerance is ±1% of span and the instrument is only out by -0.25% of span, an adjustment should not be made. How can drift be analyzed (minimum of 3 points) with constant adjustment? For certain “personalities,” not adjusting can be a challenge (people strive for perfection), but note that every time an adjustment is made, drift analysis gets more difficult. In general, a best practice is to avoid adjusting until the error is significant. With a consistent schedule, a trim most likely will be needed on the next calibration cycle and not cause an As Found “Fail” condition. Of course, this may not be possible due to criticality, drift history, erratic scheduling or other factors, but when possible, do not automatically make calibration adjustments. 

What if the drift is inconsistent, both increasing and then decreasing over time? More analysis is required; for instance, are the ambient conditions extreme or constantly changing? Depending on the process application, instrument performance may be affected by media, installation, throughput, turbulence or other variables. This situation indicates there is a level of “noise” associated with drift. When this is the case, analysis should show there is a combination of random error and systematic error. Random error consists of uncontrollable issues (ambient conditions and process application) vs. systematic error that consists of identifiable issues (instrument drift). By focusing on systematic error and/or clear patterns of drift, a proper calibration interval can be set to maximize operating efficiency in the safest manner possible.
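One practical way to separate the two error types is to fit a trend line to the as-found errors over time: the slope approximates the systematic drift, while the scatter of the residuals approximates the random noise. Here is a minimal sketch, using made-up as-found data purely for illustration:

```python
import numpy as np

# Hypothetical as-found errors (% of span) from successive calibrations
months = np.array([0, 3, 6, 9, 12])
as_found_error = np.array([0.00, 0.07, 0.05, 0.14, 0.18])

slope, intercept = np.polyfit(months, as_found_error, 1)   # systematic drift per month
residuals = as_found_error - (slope * months + intercept)
noise = residuals.std(ddof=1)                              # estimate of random error

print(f"Drift ~ {slope * 12:.2f}% of span per year, noise ~ +/-{noise:.2f}% of span")
```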

For more details, there is a dedicated blog post here: How often should instruments be calibrated? 

 

Setting the proper process tolerance limits 

Accuracy, Process Tolerance, Reject Error, Error Limit, Maximum Permissible Error, Error Allowed, Deviation, etc. – these are a few of the many terms used to specify instrument performance in a given process. Transmitter manufacturers always specify accuracy along with several more parameters associated with error (long term stability, repeatability, hysteresis, reference standard and more). When looking at setting a process tolerance, manufacturer accuracy offers a starting point, but it is not always a reliable number. Also, no measurement is better than the calibration standard used to check an instrument. What is behind a manufacturer’s accuracy statement in making an accurate instrument? For pressure, a good deadweight tester in a laboratory should be part of the formula.

At the plant level, a well-known, simplified, traditional rule of thumb is to have a 4:1 ratio of the process instrument tolerance to the calibrator’s uncertainty (total error) – the TAR/TUR ratio.

Instead of using the simplified TAR/TUR ratio, the more contemporary approach is to always calculate the total uncertainty of the calibration process. This includes all the components adding uncertainty to the calibration, not only the reference standard.
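To make the ratio idea concrete, here is a minimal sketch of the simplified 4:1 check; the numbers are placeholders, and as noted above a full uncertainty budget would include more components than just the reference standard:

```python
# Minimal sketch of the traditional 4:1 ratio check (placeholder numbers).
process_tolerance = 0.50       # +/- % of span allowed for the process instrument
calibrator_uncertainty = 0.10  # +/- % of span, total uncertainty of the reference

ratio = process_tolerance / calibrator_uncertainty
print(f"Ratio = {ratio:.1f}:1 -> {'OK' if ratio >= 4 else 'reference standard too coarse'}")
```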

To learn more on the calibration uncertainty, please read the blog post Calibration Uncertainty for Dummies.

When setting a process tolerance, a best practice is to ask the control engineer what process performance tolerance is required to make the best product in the safest way? Keep in mind the lower the number, the more expensive the calibration costs may be. To meet a tight tolerance, good (more expensive) calibration standards will be required. Also, another issue is to determine whether calibration should be performed in the field or in the shop. If instrumentation is drifting, a more frequent interval will need to be set to catch a measurement error. This may mean increased downtime along with the costs associated with making the actual calibration tests. As an example, review the three graphs of instrument performance:

 

Example 1 - Tolerance 0.1% of span:

Graph 1 -tolerance (0.1) 

Example 2 - Tolerance 0.25 % of span:

Graph 2 - tolerance (0.25)

Example 3 - Tolerance 1 % of span:

Graph 3 - tolerance (1)

Note that the first graph above shows a failure (nearly double the allowed value), the second shows that an adjustment is required (barely passing) and the third shows a transmitter in relatively good control. The test data is identical for all three graphs; the only difference is the tolerance. Setting a very tight tolerance of ±0.1% of span can cause several problems: dealing with failure reports, constant adjustment adding stress for the calibration technician, operations not trusting the measurement, and more. Graph #2 is not much better; there is no failure, but 0.25% of span is still a tight tolerance and constant adjusting will not allow analysis of drift nor evaluation of random error vs. systematic error. There are many benefits in #3 (note that ±1% of span is still a reasonably tight tolerance). If a failure were to occur, that would be an unusual (and likely a serious) issue. The calibration technician will spend less time disturbing the process and overall calibration time is faster since there is less adjusting. Good test equipment is available at a reasonable cost that can meet a ±1% of span performance specification.
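The point that identical test data can pass or fail depending purely on the chosen tolerance can be shown with a short sketch; the error values below are placeholders, not the actual data behind the three graphs:

```python
# Minimal sketch: the same as-found errors judged against three tolerances.
as_found_errors = [0.02, 0.11, 0.19, 0.15, 0.08]    # % of span, placeholder data
max_error = max(abs(e) for e in as_found_errors)

for tolerance in (0.1, 0.25, 1.0):                  # % of span
    verdict = "FAIL" if max_error > tolerance else "PASS"
    print(f"Tolerance +/-{tolerance}% of span: max error {max_error}% -> {verdict}")
```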

There may be critical measurements that require a demanding tolerance and thereby accrue higher costs to support, but good judgements can be made by considering true performance requirements vs. associated costs. Simply choosing an arbitrary number that is unreasonably tight can cause more problems than necessary and can increase the stress level beyond control. The best approach would be to set as high a tolerance as possible, collect some performance data and then decrease the tolerance based on a proper interval to achieve optimum results. 

 

Calibration parameters

A subtle yet important detail is to review calibration procedures to see if further efficiencies can be gained without impacting the quality of the data. Years ago, technology was more mechanical in nature, board components were more numerous and complex, and instruments were more sensitive to ambient conditions. Today’s smart technology offers better accuracy and “brain power” with fewer, simpler components and improved compensation capabilities. In many cases, old testing habits have not evolved with the technology. A good example is an older strain gauge pressure sensor that, when starting from zero, “skews” toward the low side as pressure rises due to the change from a relaxed state. Likewise, when the sensing element is deflected to its maximum pressure and the pressure then decreases, there is a mechanical memory that “skews” the measured pressure toward the high end. This phenomenon is called hysteresis and graphically would resemble the graph below when performing a calibration:

 

5-point Up/Down calibration with hysteresis:

Graph 4 - calibration with hysteresis

 

Today’s smart pressure sensors are much improved, and hysteresis would only occur if something were wrong with the sensor and/or it is dirty or has been damaged. If the same calibration was performed on a modern sensor, the typical graphical representation would look like this:

 

5-point Up/Down calibration with no hysteresis:

Graph 5 - calibration  no hysteresis

 

This may look simple, but it takes significant effort for a calibration technician to perform a manual pressure calibration with a hand pump. Testing at zero is easy, but the typical practice is to spend the effort to hit an exact pressure test point in order to make an error estimate based on the “odd” mA signal. For example, if 25 inH2O is supposed to be exactly 8 mA, but 8.1623 mA is observed when the pressure is set to exactly 25 inH2O, an experienced technician knows they are dealing with a 1% of span error (0.1623 ÷ 16 x 100 ≈ 1%). This extra effort to hit a “cardinal” test point can be time consuming, especially at a very low pressure of 25 inH2O. Performing a 9-point calibration might take 5 minutes or more and makes this example calibration unnecessarily long. It is possible to perform a 5-point calibration and cut the time in half; the graph would look identical, as the downward test points do not add any new information. However, a pressure sensor is still mechanical in nature and, as mentioned, could have hysteresis. A better compromise is to perform a 3-point up/down calibration on a pressure transmitter. The quality of the test point data is equivalent to a 9-point calibration and, if there is hysteresis, it will be detected. This also places the least stress on the technician, as there are only 3 “difficult” test points (zero is easy) compared to 4 points for a 5-point calibration and 7 for a 9-point up/down calibration. Savings can be significant over time and will make the technician’s day-to-day work much easier.
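As a worked check of the error arithmetic in the 25 inH2O example, here is a minimal sketch for a standard 4-20 mA output; the 8 mA expected value and the 8.1623 mA reading are the figures from the text:

```python
# Minimal sketch: error in % of span from an observed mA reading (4-20 mA output).
def error_pct_of_span(expected_ma, observed_ma, span_ma=16.0):
    return (observed_ma - expected_ma) / span_ma * 100

print(round(error_pct_of_span(8.0, 8.1623), 2))   # ~1.01% of span, as in the example
```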

Using this same approach can work for temperature instrumentation as well. A temperature sensor (RTD or thermocouple) is electromechanical in nature and typically does not exhibit hysteresis – whatever happens going up in temperature is repeatable when the temperature is going down. The most common phenomenon is a “zero shift” that is indicative of a thermal shock or physical damage (rough contact in the process or dropped). A temperature transmitter is an electronic device and with modern smart technology exhibits excellent measurement properties. Therefore, a best practice is to perform a simple 3-point calibration on temperature instrumentation. If calibrating a sensor in a dry block or bath, testing more than 3 points is a waste of time unless there is a high accuracy requirement or some other practical reason to calibrate with more points.

There are other examples of optimizing parameters. Calibration should relate to the process: if the process never goes below 100°C, why test at zero? When using a dry block, it can take a very long time to reach a test point of 0°C or below, so why not set an initial test point of 5°C with an expected output of 4.8 mA, for example, if it is practical? It will save time and make calibrating easier. Another good example is calibrating a differential pressure flow meter with square root extraction. Since a flow rate is being measured, output test points should be 8 mA, 12 mA, 16 mA and 20 mA, not based on even pressure input steps. Also, this technology employs a “low flow cut-off” where very low flow is not measurable. A best practice is to calibrate at an initial test point of 5.6 mA output (which is very close to zero, at just 1% of the input span).
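For the square-root transmitter, the input pressures that correspond to evenly spaced output currents can be worked back from the square-root relationship. Here is a minimal sketch, assuming a standard 4-20 mA output and the test points named above:

```python
# Minimal sketch: input pressure (% of DP span) needed to produce each output
# test point of a square-root-extracting DP flow transmitter (4-20 mA output).
def input_pct_for_output(output_ma, zero_ma=4.0, span_ma=16.0):
    output_fraction = (output_ma - zero_ma) / span_ma
    return (output_fraction ** 2) * 100        # output = sqrt(input) -> input = output^2

for ma in (5.6, 8.0, 12.0, 16.0, 20.0):
    print(f"{ma:4.1f} mA -> {input_pct_for_output(ma):6.2f}% of DP span")
```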

Do not overlook how specific calibrations are performed. Why collect unnecessary data? It is simply more information to process and can have a very significant cost. Why make the job of calibration harder? Look at the historical data and make decisions that will simplify work without sacrificing quality.

 

Calibration trend analysis and cost 

Temperature transmitter example

As mentioned, the best way to optimize calibration scheduling is to analyze historical data. There is a balance of process performance vs. instrument drift vs. tolerance vs. optimum interval vs. cost of calibration, and the only way to truly determine this is through historical data review. Using data similar to the temperature transmitter example in the tolerance section above, apply the concepts to optimize the calibration schedule with this scenario:

 

Calibration history trend example:

Graph 6 - Calibration history trend 

After a discussion between the Control Engineer and the I&C Maintenance group, a case was made for a tolerance of ±0.5% of span, but it was agreed that ±1% of span was acceptable until more information becomes available. This particular measurement is critical, so it was also agreed to calibrate every 3 months until more information becomes available. At the end of the first year, a drift of approximately +0.25% of span per year was observed and no adjustments were made. After further discussion, it was agreed to lower the tolerance to ±0.5% of span (the Control Engineer is happy) and to increase the interval to 6 months. An adjustment was finally made 1-1/2 years after the initial calibration. At the end of year 2, no adjustment was required and the interval was increased to 1 year. At the end of year 3, an adjustment was made, and the interval was increased to 18 months (now the Plant Manager, the I&C Supervisor and the I&C Technicians are happy). All this occurred without a single failure that might have required special reporting or other headaches.

Obviously this scenario is idealized, but if there are multiple instruments of the same make/model, strong trends will emerge with good historical data, affirming best practices and allowing the best decisions to be made. For critical instrument measurements, most engineers are conservative and “over-calibrate”. This example should open a discussion on how to work smarter, save time and energy, and maintain a safe environment without compromising quality.

 

Cost of calibration

One other best practice is, whenever possible, to establish the cost to perform a given calibration and include it in the decision process. Consider not only the man-hours but also the cost of calibration equipment, including annual recertification costs. When discussing intervals and tolerances, this can be very important information in making a smart decision. Good measurements cannot be made with marginal calibration equipment. As an example, in order to meet an especially tight pressure measurement tolerance, a deadweight tester should be used instead of a standard pressure calibrator – this is a huge step in equipment cost and technician experience/training. By outlining all the extra costs associated with such a measurement, a good compromise could be reached by weighing the rewards vs. risks of performing more frequent calibrations with slightly less accurate equipment or by utilizing alternative calibration equipment.

Another overlooked operational cost is the critical need to invest in personnel and equipment. With either new technology or new calibration equipment, maintenance and/or calibration procedures should be reinforced with good training. ISA offers several excellent training options; also consider local programs available for calibration technicians via trade schools or industry seminars. Finally, a review of calibration assets should be done annually to justify reinvestment by replacing old equipment. Annual recertification can be expensive, so when choosing new calibration equipment, look for one device that can possibly replace multiple items.

One other important cost to consider is the cost of failure. What happens when a critical instrument fails? If there are audits or potential shut-down issues, it is imperative to have a good calibration program and catch issues before they begin in order to avoid a lengthy recovery process. If calibration equipment comes back with a failed module, what is the potential impact on all the calibrations performed by that module in the past year? By understanding these risks and associated costs, proper decisions and investments can be made.

 

Conclusion

Obviously, not all instrumentation is going to offer easy analysis to predict drift. Also, calibration schedules get interrupted and many times work has to be done during an outage regardless of careful planning. In some areas there are regulatory requirements, standards or quality systems that specify how often instruments should be calibrated – it is difficult to argue with auditors. A best practice is to establish a good program, focusing on the most critical instruments. As the critical instruments get under control, time will become available to expand to the next level of criticality, and so on.

Alternate or “hybrid” strategies should be employed in a good calibration management program. For example, loop calibration (performing end-to-end calibrations and only checking individual instruments when the loop is out of tolerance) can lower calibration costs. A good “hybrid” strategy is to combine a “light” calibration schedule with a less frequent “in-depth” calibration. As an example, make a minimally invasive “spot check” (typically one point) against a tighter limit than normal (use the recommended 2/3 of the normal tolerance value). Should the “spot check” fail, the standard procedure would be to perform the in-depth calibration and make the necessary adjustments. A technician may have a route of 10 “spot checks” and end up performing only 1 or 2 in-depth calibrations for the entire route. “Spot checks” should still be documented and tracked, as good information about drift can come from this type of data.
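The spot-check decision described above reduces to a simple rule. Here is a minimal sketch, where the 2/3 factor is the recommendation from the text and the route readings are placeholders:

```python
# Minimal sketch: hybrid "spot check" logic - a single-point check against a
# tightened limit decides whether a full in-depth calibration is needed.
def needs_in_depth(spot_error_pct, normal_tolerance_pct, tighten_factor=2/3):
    return abs(spot_error_pct) > normal_tolerance_pct * tighten_factor

for tag, error in [("TAG-101", 0.2), ("TAG-205", 0.8)]:   # placeholder route results
    action = "in-depth calibration" if needs_in_depth(error, 1.0) else "documented spot check only"
    print(f"{tag}: error {error}% of span -> {action}")
```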

To summarize, several best practices have been cited:

  • Set a conservative calibration interval based on what the impact of a failure would mean in terms of operating in a safe manner while producing product at the highest efficiency and quality.
  • Try not to make adjustments until the error exceeds 50% of the tolerance (e.g. greater than ±0.5% for a tolerance of ±1% of span); this may be difficult for a technician striving for perfection, but every unnecessary adjustment compromises drift analysis.
  • Ask the control engineer what process performance tolerance is required to make the best product in the safest way?
  • Set as high a tolerance as possible, collect some performance data and then decrease the tolerance based on a proper interval to achieve optimum results.
  • Perform a 3-point up/down calibration on a pressure transmitter; the quality of the test point data is equivalent to a 9-point calibration and if there is hysteresis, it will be detected.
  • Perform a simple 3-point calibration on temperature instrumentation. If calibrating a sensor in a dry block or bath, calibrating more than 3 points is a waste of time unless there is a high accuracy requirement or some other practical reason to calibrate with more points.
  • Calibrate a differential pressure flow transmitter with square-root extraction at an initial test point of 5.6 mA output (which is very close to zero at just 1% of the input span). Also, since a flow rate is being measured, the sequential output test points should be 8 mA, 12 mA, 16 mA and 20 mA, not based on even pressure input steps.
  • Whenever possible, try to establish the cost to perform a given calibration and include this in the decision process.
  • Focus on the most critical instruments, establish a good program and as the critical instruments get under control, time will become available to expand to the next level of criticality, and on and on.

Always keep in mind that instruments drift, and some perform better than others. The performance tolerance set will ultimately determine the calibration schedule. With good documentation, if there is the capability to distinguish systematic error (drift) from random error (“noise”) and a systematic pattern emerges, an optimal calibration interval can be determined. The best tolerance/interval combination will provide good control data for the best efficiency, quality and safety at the lowest calibration cost with minimal audit failures and/or headaches. Establishing best practices for calibration should be a continuous evolution. Technology is changing and calibration should evolve along with it. As discussed, there are many variables that go into proper calibration – by establishing baseline performance, as observed in the operating environment, smart decisions can be made (and modified) to operate at optimal levels when it comes to calibration.

 

Download this article as a free pdf file by clicking the picture below:

Optimal Calibration Parameters for Process Instrumentation - Beamex white paper

 

Beamex CMX Calibration Management software

All graphics in this post have been generated using the Beamex CMX Calibration Management software.

 

Related blog posts

You might find these blog posts interesting:

 

Topics: Calibration process, calibration period, Calibration management

Weighing Scale Calibration Video

Posted by Heikki Laurila on Feb 22, 2019

Scale calibration video -How to calibrate weighing scales - Beamex blog post 

In this post we share an educational video on how to calibrate weighing scales.

The video goes through the various tests that should be performed during a scale calibration / scale recalibration.

These tests include:

  • Eccentricity test
  • Repeatability test
  • Weighing test
  • Minimum weight test

This video is based on our earlier blog post on weighing scale calibration. In that post you can find more details and also download the free white paper. To read that post, please click the below link:

Weighing scale calibration - How to calibrate weighing instruments

Like any accurate measuring instrument, weighing scales need to be calibrated regularly using reference weights that are accurate, calibrated and traceable. In the case of scales, there are a few standards specifying the calibration procedures (such as the EURAMET Calibration Guide, NIST Handbook 44 and the OIML guides).

 

Anyhow, enough talking, please watch the video below:

  

The mobile application used in this video is the Beamex bMobile calibration application, which can be used together with Beamex CMX Calibration Management Software for a completely digital and paperless flow of calibration data.

I hope you find the video useful. As mentioned, for more detailed instructions, please visit our other scale calibration blog post on this topic. 

Please don't be afraid to subscribe to our blog and you will get an email notification when new posts are published (about once per month). The subscribe button is hidden in the top right corner.

 

 

 

Topics: weighing scale

Calibration in Times of Digitalization - a new era of production

Posted by Antonio Matamala on Jan 11, 2019

Calibration in Times of Digitalization - a new era of production. Beamex blog post.

This is the first blog post in the Beamex blog series "Calibration in Times of Digitalization", in which we will take a look into the future to give you an understanding of hot topics such as Industry 4.0 and the Smart Factory, which are on everyone's minds.

You have probably heard of terms such as Digitalization, Industry 4.0 and Smart Factory. But what do these really mean? Many users of these new technologies don’t yet fully grasp them. This is completely understandable, and that's why we will gradually bring you closer to these topics in this Beamex blog series and help you enter the future well informed.

Whether on television, in newspapers or on social media, hardly a day goes by without futuristic topics such as digitalization, big data, artificial intelligence and machine learning. But futuristic? Hasn't the future already arrived?

Download the white paper 

 

What will tomorrow's process industry look like?

Almost everyone today owns a smartphone that, often without the owner being aware of it, is equipped with a variety of sensors and communication technologies. Worldwide, there are now 5.1 billion users of mobile devices and the number is growing at an annual rate of 4%. Whether on a smartphone or directly on your computer, you most likely shop comfortably from your sofa at home and are hardly surprised when, underneath the product you have selected, the website says "Customers who bought this product were also interested in this product." At home, the future seems to have arrived long ago. Behind these platforms lie exactly the technologies that have made your private life easier and more convenient, and these same technologies are currently making their way into the process industry within the framework of Industry 4.0. But now please ask yourself: what does your work environment look like? Are you still working with outdated technologies, or are you now also seeing a wave of modernization on the horizon?

The fact is, many employees are worried about terms such as machine learning, robotics and smart factories and what will happen tomorrow. You may also be afraid your future colleagues will be made of metal and make "blip" noises like in the award-winning Star Wars series. Or you have the idea that in the future you will hardly meet any people on the shop floor. To take away at least some of these worries in advance, the industrial production of tomorrow, and thus the factory of the future, will rely much more on the latest information and communication technologies than it does today. And no, the "factory of the future" cannot be imagined without people. They will even play a very important role. This should be reassuring, and it is, but there is a good chance that things will change in your environment in the future. Because digital technologies in modularly organized plants of the future will make processes flexible, the maintenance of such machines will be equally affected, as will the calibration of the growing number of process sensors that make Industry 4.0 possible in the first place. In other words, the digital factory will automatically lead to digital maintenance. And that could happen faster than you think. 

So you should start to proactively prepare for a digital future, starting by getting a picture of what will change. Because the way we work in maintenance today will change. That's for sure! But what we can tell you in advance: If you work in a calibration environment, then your work will gain in importance! 

How exactly which factors will play a role in the future will be explained step by step in this and other blogs. It is important that you first understand the technologies and important interrelationships that lie at the basis of these digital changes that you have probably already experienced through a wide variety of media channels.

 

Leaving the comfort zone step by step

There are trends you simply have to accept if you can't stop them. For example, when the first computers came onto the market, the then CEO of one of the leading technology companies made a forecast: "I think there's a world market for maybe five computers.” Maybe you're smiling while reading this because the forecast seems completely absurd to you. But at the beginning of the computer industry, nobody really knew where this new technology would take us. The explosion of desktop computing has since changed our lives significantly. And even if you think that the role of the computer in your private environment is limited, in our modern society nothing would work without computers. By the way, the same applies to the role of the Internet in our society. Was it not then to be expected that computers and above all Internet technologies would sooner or later find their way into the process industry?

In the Industry 4.0 era, production is closely interlinked with information and communication technologies, making it more flexible, efficient and intelligent. There is even talk of batch size 1, which might raise a question mark rather than an "a-ha" moment for you. Well, it's quite simple: customer requirements are changing ever faster and more comprehensively, and customers expect individualized products that meet their requirements, but at prices that only series production can offer. How is that supposed to work? The answer is Industry 4.0.

Industry 4.0 has set itself this goal and offers a variety of concepts, guidelines and technologies for building new factories or retrofitting existing ones, which, thanks to modular production lines equipped with flexible automation, IT and communication technologies, make it possible for customers to choose from a variety of variants at series production prices. In addition, the interconnection of the value chain extends far beyond the manufacturing company. The entire value chain, including suppliers, is also to be connected horizontally. Connectivity even goes one step further: thanks to connectivity, products that leave the factory should also report regularly to the manufacturer, e.g. for maintenance, and report on their status.

Calibration in Times of Digitalization - a new era of production. Beamex blog post.

Nevertheless, there are big differences between the time when this already mentioned CEO ventured to forecast the world computer market and the present time. Although the term Industry 4.0 - meaning the fourth industrial revolution - today causes similar social uncertainties as computers did at that time, it is decisive for the future of the process industry, especially for the manufacturing industry. Where the computer was a fundamentally new technological invention, Industry 4.0 consists of composite technological components, some of which already exist as modules, but interoperability for fast and flexible "plug & play" deployment is still in its infancy. It should be noted that the first three industrial revolutions (1. the steam engine, 2. the assembly line, 3. automation and IT) were only subsequently classified and recognized as revolutions. In contrast, the so-called 4th Industrial Revolution is more like a controlled process, which from today's point of view takes place in the near future and is currently in the process of unfolding.

Sensors as key technology

The fact that Industry 4.0 is more like a controlled process than a wild revolution is of great benefit to many participants, even though it is not possible to say exactly where the journey will lead. What we can predict, however, is that tomorrow's world will be much more digital than it is today, and that will certainly affect your work as process workers, both in production and maintenance. If you are working in calibration, the following may be particularly important to you. For the factory of the future to exist at all, smarter objects (whether machines or end products) will need to be used to orchestrate manufacturing processes according to Industry 4.0 objectives. Objects without sensors are blind and unfeeling and can neither see how they have to act in connection with other modules nor can they report their own condition to top-level systems about, for example, the need for timely and optimized maintenance to prevent costly system failures. 

Sensors therefore play not only an important, but an essential role in the implementation of Industry 4.0. They form the interface between the digital and the real world. Data generated by these sensors must be correctly interpreted for further processing, and they must always be of excellent quality! Industry 4.0 also means that, in the future, sensors will be used far beyond the actual production processes. They also play a role in upstream, downstream and parallel sub-processes, such as predictive maintenance. One could therefore say that without the right sensors, all higher-level systems are blind, and with incorrect measurement data, wrong decisions are made. What should hardly surprise the maintenance staff is that the data quality of measurement data is based on a professional and prompt calibration of the sensors.

In the next blog we continue explaining the role of sensors and other technologies in Industry 4.0 and how these may affect your daily work.

Download the free white paper by clicking the picture below:

Calibration in the times of digitalization - Beamex white paper

 

About Beamex

Beamex has set itself the goal of finding a better way to calibrate together with its customers. This also means taking a leading role in digitalization and Industry 4.0. If we have aroused your interest with this article, we would like to discuss the topic with you. We are very interested in exploring your current calibration processes in order to provide you with concepts for better calibration in the digital era.
 

 

 

Topics: Calibration, Digitalization, Industry 4.0

Do more with less and generate ROI with an Integrated Calibration Solution

Posted by Heikki Laurila on Nov 19, 2018

CMX---Beamex-man-advert-photo-v1_2900x770px_v1 

Process instrument calibration is just one of the many maintenance related activities in a process plant. The last thing you want to do is to have your limited resources wasting time performing unnecessary calibrations or using time-consuming, ineffective calibration procedures.

Yet, you need to make sure that all critical calibrations are completed, ensuring the site stays running efficiently with minimal downtime, product quality is maintained, while the plant remains regulatory and safety compliant, and audit-ready.   

Most often you can’t just go and hire an army of external calibration specialists, so you need to get more done with your existing resources.

In this article, let’s examine what an “Integrated Calibration Solution” is and how it can help you with your challenges – making your calibration process more effective, saving time and money and improving the quality and integrity of the results. We will also discuss how it can quickly generate a great return on your investment.

If any of that sounds interesting to you, please continue reading …

Or just download the free pdf white paper here: Download the white paper 

Improve the whole calibration process with an Integrated Calibration Solution

It is not enough to just buy some new calibration equipment or calibration software - that alone does not make your calibration process leaner and more effective. Instead, you should analyze all the steps of your calibration process and, with the help of a suitable solution and expertise, find ways to improve the whole calibration process.

Let’s quickly look at a typical calibration process from the beginning to the end and explore how an integrated system could help:

ICS-summary

Typically, work is planned, and work orders are created in the maintenance management system. With an integrated solution, these work orders move automatically and digitally from the maintenance management system to the calibration software. There is no need to print work orders and distribute them manually.

The necessary calibration details are handled by the dedicated calibration software and it sends the work orders to the mobile calibration equipment. Again, this happens digitally.

While the technicians are out in the field performing the calibration activities, the results are automatically stored in the mobile devices, and the users sign off the results using an electronic signature. From the mobile device the results are automatically transferred back to the calibration software to be saved and analyzed.

Once the work orders are completed, the calibration software automatically sends an acknowledgement to the maintenance management software and work orders are closed.

So, the whole process is paperless and there is no need for manual entry of data at any point. This makes the process far more effective and saves time. It also helps minimize the mistakes typically associated with manual data entry, so it improves the quality and integrity of the calibration data. Furthermore, calibration results are safely stored and easily accessible in the calibration software for review, for example in case of audits, and for analysis purposes.

As mentioned, improving the calibration process is not just about buying some new equipment or software, but the project should also include improvement of the whole calibration process together with the new tools supporting it. Implementing a new process is a project with a formal implementation plan, ensuring that the new system/process is adopted by the users.

  

The key benefits of an integrated calibration solution

Here are listed some of the key benefits of an integrated calibration solution:

Improve operation efficiency – do more with less

  • Automate calibrations and calibration documentation. Eliminate all manual entry steps in the calibration process. Use multifunctional tools to carry less equipment in the field and lower equipment life-cycle costs

Save time and reduce costs – get a great ROI

  • With automated processes, get more done in shorter time. Don’t waste time on unnecessary calibrations. Let the data from the system guide you to determine the most important calibrations at appropriate intervals.

Improve quality

  • With electronic documentation, avoid all errors in manual entry, transcriptions and Pass / Fail calculations.

Guides non-experienced users

  • Let the system guide even your non-experienced users to perform like professionals.

Avoid system failures and out-of-tolerance risks

  • Use a calibration system that automatically ensures you meet required tolerance limits, to avoid system downtime and expensive out-of-tolerance situations.

Be compliant

  • Use a system that helps you meet regulations and internal standards of excellence.

Ensure safety

  • Ensure safety of the plant workers, and customers, using a calibration system that helps you navigate through safety critical calibrations.

Safeguard the integrity of calibration data

  • Use a calibration system that ensures the integrity of the calibration data with automatic electronic data storage and transfer and relevant user authorization.

Make audits and access data easy

  • Use a system that makes it easy to locate any record an auditor asks for.

 

What do the users say?

 Here are just a few testimonials on what the users have said about the Beamex Integrated Calibration Solution:

 

“With the Beamex integrated calibration solution, the plant has experienced a dramatic time savings and implemented a more reliable calibration strategy while realizing a 100% return on investment in the first year.

Using the Beamex tools for pressure calibrations has decreased the time it takes to conduct the calibration procedure itself in the field by over 80%.”

 

“Time is of the essence during an outage and the Beamex Integrated Calibration Solution allows technicians to maximize the amount of work accomplished in the shortest amount of time, while effectively performing vital tasks and managing workflows.”

 

“After the incorporation of Beamex’s integrated calibration solutions, calibrations that would take all day are now performed in a couple hours.”

 

“With this software integration project, we were able to realize a significant return on investment during the first unit overhaul. It’s unusual, since ROI on software projects is usually nonexistent at first.”

 

“After implementing the Beamex CMX calibration management system, GSK will be able to eliminate 21,000 sheets of printed paper on a yearly basis, as the entire flow of data occurs electronically, from measurement to signing and archiving.”

  • GlaxoSmithKline Ltd, Ireland

 

Related posts

If you like this post, you could like these posts too:

 

Check out the Beamex solution

Please check out the below link for the Beamex integrated Calibration Solution, which is a combination of calibration software, calibration hardware, various services and expertise:

Beamex Integrated Calibration Solution (Global web site)

Beamex Integrated Calibration Solution (USA web site)

 

Download the free white paper by clicking the picture below: 

New Call-to-action

 

 

Topics: Calibration process, Calibration management

How to calibrate temperature instruments [Webinar]

Posted by Heikki Laurila on Oct 18, 2018

How to calibrate temperature instruments - Beamex blog post 

In this blog post, I will share with you a two-part webinar series titled “How to calibrate temperature instruments”. We did this webinar together with our partner ISA (International Society of Automation).

The webinars cover some theory, many practical things, demonstrations of temperature instrument calibration and Questions & Answers sections.

More information on ISA can be found at https://www.isa.org/

These webinars include the following experienced speakers:

  • Thomas Brans, Honeywell customer marketing manager.
  • Ned Espy has worked over 20 years with calibration management at Beamex, Inc. and also has calibration experience from his previous jobs.
  • Roy Tomalino has worked for 15 years at Beamex, Inc. teaching calibration management and also has prior calibration experience

Below you can find a short table of contents with the main topics, so it will be easier for you to see if there is something interesting for you. Of course, there is also a lot of other useful discussions in these webinars.

Please click the pictures below the table of contents to view the webinar recordings.

 

How to calibrate temperature instruments - Part 1 (1:39:05)

 

Time – Topic
0:00 – Introduction
4:30 – Presentation of speakers
7:00 – Presentation of agenda
8:00 – Process control framework
9:30 – Temperature
11:15 – Temperature units
15:20 – Thermocouple sensors
20:30 – Demonstration: calibration of a thermocouple transmitter
26:30 – Questions & Answers
28:50 – RTD basics
34:20 – Calibration basics
42:40 – Demonstration: calibration of an RTD transmitter
55:40 – Thermocouple and RTD basics
1:03:20 – Questions & Answers
1:39:05 – End of webinar

 

Watch the webinar (Part 1) by clicking the picture below:

How to calibrate temperature instruments, Part 1 - Beamex webinar

 

How to calibrate temperature instruments - Part 2 (1:21:33)

 

Time – Topic
0:00 – Introduction
0:10 – Presentation of speakers
2:20 – Presentation of agenda
3:10 – Temperature measurement in a heat exchanger
5:45 – Demonstration: calibration of a temperature sensor
9:00 – Loop basics
16:50 – Quick poll on loop testing
19:15 – Questions & Answers
32:00 – Temperature measurement in an autoclave
37:00 – Demonstration: calibration of a temperature sensor continues
41:30 – Measuring temperature in a refinery
45:30 – Other applications: infrared calibration
50:30 – Demonstration wrap-up
58:50 – Questions & Answers
1:21:33 – End of webinar

 

Watch the webinar (Part 2) by clicking the picture below:

How to calibrate temperature instruments, Part 2 - Beamex webinar

 

 

Other "temperature" blog posts

If you like temperature and temperature calibration related blog posts, you may find these blog posts interesting as well:

 

Other "webinar" blog posts

These are some other webinar type of blog posts:

 

Beamex temperature offering

Beamex offers various tools for temperature calibration. Please check our offering at our web site in the link below:

Beamex temperature calibration products

 

 

 

Topics: Temperature calibration

Uncertainty components of a temperature calibration using a dry block

Posted by Heikki Laurila on Aug 23, 2018

Temperature dry block uncertainty components - Beamex blog post

 

In some earlier blog posts, I have discussed temperature calibration and calibration uncertainty. This time I will be covering the different uncertainty components that you should take into account when you make a temperature calibration using a temperature dry block. 

Making a temperature calibration using a dry block seems like a pretty simple and straightforward thing to do; however, there are many possible sources of uncertainty and error that should be considered. Often the biggest uncertainties come from the procedure of how the calibration is done, not necessarily from the specifications of the components.

Let’s turn the heat on!

If you just want to download this article as a free pdf file, please click the below button:

Uncertainty components of a temperature dry block

 

Table of contents

  • What is a dry block?
  • So, it's not a bath?
  • EURAMET Guidelines
  • Uncertainty Components
    • Internal or External reference sensor
    • 1. Internal reference sensor
    • 2. External reference sensor
    • 3. Axial temperature homogeneity
    • 4. Temperature difference between the borings
    • 5. Influence of loading
    • 6. Stability over time
    • 7. Don't be in a hurry
  • Summary

 

What is a “dry block”?

Let’s start anyhow by discussing what I mean by a “temperature dry block” in this article.

A temperature dry block is sometimes also called a dry-well or a temperature calibrator.

It is a device that can be heated and/or cooled to different temperature values, and as the name hints, it is used dry, without any liquids.

A dry block typically has a removable insert (or sleeve) that has suitable holes/borings for inserting temperature sensors into.

The dry block typically has its own internal measurement for the temperature, or you may use an external reference temperature sensor that you will insert into one of the holes.

Commonly a dry block has interchangeable inserts, so you may have several inserts, each being drilled with different holes, to suit for calibration of different sized temperature sensors.

It is very important in a dry block that the hole for the temperature sensor is sufficiently tight to enable low thermal resistance between the sensor and the insert. In too loose of a boring, the sensor stabilizes slowly or may not reach the temperature of the insert at all due to stem conduction.

Commonly, you would insert a temperature sensor in the dry block to be calibrated or calibrate a temperature loop where the temperature sensor is the first component in the loop.

The main benefits of a dry block are that it is easy to transport and use in the field, and there is no hot fluid that could spill when you carry it around. Also, a dry block will not contaminate the temperature sensors being calibrated.

Dry blocks are almost always used dry. In some very rare cases you may use some heat transfer fluids or pastes. In most cases you may damage the dry block if you use liquids.

Using oils or pastes also causes a potential health and fire risk if the block is later used at temperatures higher than, for example, the flash point of the foreign substance. A 660 °C dry block that has silicone oil absorbed into its insulation may look neat on the outside, but it will give off noxious fumes when heated up. Calibration labs everywhere are probably more familiar with this than they would like to be…

The drawbacks of dry blocks include lower accuracy and stability than a liquid bath and difficulty in calibrating very short or odd-shaped sensors.

 

So, it’s not a “bath”?

Nope, I said “dry block”, didn’t I … ;-)

There are also temperature baths available, which have liquid inside. The liquid is heated or cooled and the temperature sensors to be calibrated are inserted into the liquid. The liquid is also stirred to get an even temperature distribution.

There are also some combinations of dry block and liquid bath, these are devices that typically have separate dry inserts and separate liquid inserts.

The main benefits of a liquid bath are the better temperature homogeneity and stability and suitability for short and odd shaped sensors.

The drawbacks of a liquid bath are its bigger size, heavier weight, the need to work with hot liquids and poorer portability; baths are also often slower than dry blocks.

Anyhow, in this article we focus on the temperature dry blocks, so let’s get back to them.

 

EURAMET Guidelines

Let’s take a quick look into Euramet guides before we proceed. And yes, it is very relevant for this topic.

EURAMET is the Regional Metrology Organisation (RMO) of Europe. They coordinate the cooperation of National Metrology Institutes (NMI) in Europe. More on Euramet at https://www.euramet.org/

Euramet has also published many informative guidelines for various calibrations.

The one that I would like to mention here is the one dedicated for temperature dry block calibration: EURAMET Calibration Guide No. 13, Version 4.0 (09/2017), titled “Guidelines on the Calibration of Temperature Block Calibrators”.

The previous version 3.0 was published in 2015 and the first version in 2007. The guideline was earlier called EA-10/13, so you may run into that name too.

The guideline defines a normative way to calibrate temperature dry blocks. Many manufacturers use the guideline when calibrating dry blocks and when giving specifications for their dry blocks.

To highlight some of the contents of the most recent version 4.0, it includes:

Scope

Calibration capability

Characterisation

  • Axial homogeneity
  • Temperature difference between borings
  • Effects of loading
  • Stability over time
  • Heat conduction

Calibration

  • Measurements
  • Uncertainties

Reporting results

Examples

You can download the Euramet guide pdf free here:

Guidelines on the Calibration of Temperature Block Calibrators

 

Uncertainty components

Let’s get into the actual uncertainty components. When you make a temperature calibration using a dry block, these are the things that cause uncertainty/error to the measurement results.

 

Internal or External reference sensor?

There are two principal ways to measure the true (correct) temperature of a dry block. One is to use the internal measurement with the internal reference sensor that is built into the dry block; the other is to use an external reference sensor that is inserted into one of the insert borings/holes.

There are some fundamental differences between these two ways and they have a very different effect on the uncertainty, so let’s discuss these two options next:

 

1. Internal reference sensor

An internal reference sensor is permanently installed in the metal block inside the dry block. It is typically close to the bottom of the block and is located in the metallic block surrounding the interchangeable insert.

So, this internal sensor does not directly measure the temperature of the insert, where you insert the sensors to be calibrated, but it measures the temperature of the surrounding block. Since there is always some thermal resistance between the block and the insert, this kind of measurement is not the most accurate one.

Especially when the temperature is changing, the block temperature normally changes faster than the insert temperature. If you make the calibration too quickly without waiting sufficient stabilization time, this will cause an error.

An internal reference sensor is anyhow pretty handy, as it is always readily inside the block, and you don’t need to reserve a dedicated hole in the insert for it.

The recalibration of the internal measurement is a bit difficult, as you need to send the whole dry block into recalibration.

An internal measurement sensor’s signal is naturally measured with an internal measurement circuit in the dry block and displayed in the block’s display. The measurement typically has an accuracy specification given. As discussed earlier, in practice this specification is only valid in stable conditions and does not include the uncertainties caused if the calibration is done too quickly or the sensors to be calibrated are not within the calibration zone at the bottom part of the insert, in a sufficiently tight boring.

 

Two internal reference sensors at different heights

 

The left-hand pictures above illustrate how the internal reference sensor is typically located in the temperature block, while the sensor to be calibrated is inserted into the insert. If the sensor to be calibrated is long enough to reach the bottom of the insert, the boring is tight enough and we wait long enough for stabilization, we can get a good calibration with little error.

In the right-hand picture we can see what happens if the sensor to be calibrated is too short to reach the bottom of the insert. In this case, the internal reference sensor and the sensor to be calibrated are located at different heights and measure different temperatures, causing a big error in the calibration result.

 

2. External reference sensor

The other way is to use an external reference sensor. The idea here is that you insert a reference sensor into a suitable hole in the insert, while you enter the sensors to be calibrated in the other holes in the same insert.

As the external reference sensor is inserted into the same metal insert with the sensors to be calibrated, it can more precisely measure the same temperature as the sensors to be calibrated are measuring.

Ideally, the reference sensor would have similar thermal characteristics as the sensors to be calibrated (same size and thermal conductance). In that case, as the insert temperature changes, the external reference sensor and the sensor to be calibrated will more accurately follow the same temperature changes.

The external reference sensor naturally needs to be measured somehow. Often a dry block has internal measurement circuitry and a connection for the external reference sensor or you can use an external measurement device. For uncertainty, you need to consider the uncertainty of the reference sensor and the uncertainty of the measurement circuitry.

Using an accurate external reference sensor results in a more accurate calibration with smaller uncertainty (compared to using an internal reference sensor). So, it is highly recommended if you want good accuracy (small uncertainty).

An external reference sensor also promotes reliability. If the internal and external sensor readings differ a lot, it’s a warning signal to the user that something is probably wrong and the measurements may not be trustworthy.

For recalibration, in the case of an external reference sensor, you can send just the reference sensor for recalibration instead of the whole dry block. In that case, the dry block’s own functionalities, such as the axial temperature homogeneity, will naturally not be checked (and adjusted if necessary).

If you don’t send the dry block for calibration, be sure to measure and record the axial gradient regularly yourself, as it is typically the biggest uncertainty component even when an external reference sensor is used. Otherwise a strict auditor may seriously question the traceability of your measurements.

 

 

Three sensor pairs at different heights

The above picture illustrates how the external reference sensor and the DUT (Device Under Test) sensor are both located in the insert. The first picture shows the case where both sensors reach the bottom of the insert, resulting in the best calibration result.

The second picture shows what happens if the reference sensor and the DUT sensor are at different depths. This causes a big temperature difference between the two sensors and results in an error in the calibration.

The third picture shows an example where the DUT sensor is short and the reference sensor has been correctly positioned at the same depth as the DUT sensor. This gives the best possible calibration result, even though the homogeneity of the insert is not very good in its upper part.

So, if the sensors are located at different heights, that will cause an additional error, but with an external reference sensor the error is still typically smaller than when calibrating a short sensor against the internal reference sensor.

 

3. Axial temperature homogeneity

Axial homogeneity (or axial uniformity) refers to the difference in temperature along the vertical length of the boring in the insert.

For example, the temperature may be slightly different in the very bottom of the boring in the insert, compared to the temperature a little higher in the boring.

Typically, the temperature will be different at the very top of the insert, as heat leaks to the environment if the block’s temperature differs greatly from the ambient temperature.

The actual sensing element is shorter in some temperature sensors and longer in others, and in some it is located closer to the tip. To ensure that different sensors are at the same temperature, the homogeneous zone at the bottom of the block’s insert should be long enough. Typically, the specified zone is 40 to 60 mm.

A dry block should have a sufficient zone at the bottom of the insert within which the temperature homogeneity is specified. During a calibration of the block, this may be verified by using two high-accuracy reference sensors at different heights, or by using a sensor with a short sensing element that is gradually lifted higher from the bottom. This kind of short-element sensor needs to be stable, but it does not necessarily even need to be calibrated, because it is used only for measuring the temperature difference between different heights. If needed, the axial temperature gradient can typically be adjusted in a dry block.

If you have a short (sanitary) temperature sensor that does not reach all the way to the bottom of the boring in the insert, things get a bit more complicated. In that case, the internal reference measurement of the dry block cannot really be used, as it is typically located at the bottom of the block. An external reference sensor should be used, and the center of its measurement zone should be inserted as deep as the center of the measurement zone of the short sensor to be calibrated.

Often this means that a dedicated short reference sensor should be used and inserted to the same depth as the short sensor to be calibrated. It gets even more difficult if the short sensor to be calibrated has a large flange, as the flange will conduct heat away from the sensor.

Summary - During the calibration, ensure that your reference sensor is inserted to the same depth as the sensor(s) to be calibrated. If you know the lengths and locations of the sensing elements, try to align their centers. If that is not possible, you need to estimate the error this causes. Use an external reference sensor if the accuracy requirements of the calibration are high, or if the sensor to be calibrated is not long enough to reach the bottom of the insert hole.
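If you need a rough number for that estimate, one simple sketch is to multiply the axial gradient of your dry block (measured yourself or taken from its data sheet) by the offset between the sensing-element centers. The numbers below are purely illustrative assumptions, not specifications of any particular block:

```python
# Rough sketch: error caused by a depth mismatch between the reference
# sensor and the sensor to be calibrated. All numbers are illustrative.
axial_gradient_c_per_mm = 0.01   # assumed axial gradient in the boring, °C/mm
depth_mismatch_mm = 15.0         # offset between the sensing-element centers, mm

estimated_error = axial_gradient_c_per_mm * depth_mismatch_mm
print(f"Estimated error from depth mismatch: {estimated_error:.2f} °C")
# If the error cannot be eliminated, include it as an uncertainty component.
```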

 

Axial temperature homogeneity illustrated with two pictures

 

The above picture illustrates what “axial temperature homogeneity” means. Typically, a dry block has a specified zone at the bottom with a homogeneous temperature, but as you lift the sensor to be calibrated higher, it will no longer be at the same temperature.

 

4. Temperature difference between the borings

As the title hints, the temperature difference between the borings, sometimes referred to as “radial uniformity”, is the temperature difference between the borings (holes) in the insert. Although the insert is made of metal and has good thermal conductivity, there can still be a small difference between the borings, especially between opposite ones.

In practice, when you have two sensors installed in different borings of the insert, there can be a small temperature difference between them.

The difference can be caused by the insert touching the block more on one side, or by the insert being loaded unequally (more sensors, or thicker sensors, on one side than on the other). The heaters and Peltier elements, located on different sides, naturally have their tolerances too.

In practice, the temperature difference between the borings is normally relatively small.

Summary – the specification of the temperature difference between borings should be taken into account.

 

Difference between borings - Uncertainty components of a temperature calibration using a dry block. Beamex blog article.

 

 

5. Influence of loading

There is always some heat conducted through the sensors to the environment (stem conductance) if the block’s temperature differs from the ambient temperature.

If several sensors are installed in the insert, more heat will “leak” to the environment. Also, the thicker the sensors, the more heat leakage there will be.

The bigger the temperature difference between the insert and the environment, the bigger the leakage will be.

For example, if the dry block is at a high temperature, this heat leakage will cause the insert to cool down because of the loading. The top of the insert loses more heat than the bottom and becomes cooler.

The deeper the insert is, the smaller the loading effect will be. Also, some dry blocks have two or more heating/cooling zones: one at the bottom, one in the center, and one at the top of the block. This helps to compensate for the loading effect (e.g. the top zone can heat more to compensate for the cooling at the top of the insert).

If you use the internal reference measurement of the dry block, there will typically be a larger error, since the internal reference is not in the insert but at the bottom of the surrounding block. Therefore, the internal reference sensor does not see the effect of loading very well.

An external reference sensor sees the effect of loading better, as it is in the insert and experiences the same temperature change. The error caused by the loading effect is much smaller when an external reference sensor is used (compared to an internal reference sensor), and the results are better.

Summary – check out the loading effect of your dry block in your application (how many sensors, which type of sensor) and use that as one uncertainty component.

 

Stem conductance illustrated with two pictures

 

The above pictures illustrate the stem conductance caused by the sensors conducting heat to the environment. In the second picture several sensors are inserted, so the stem conductance/leakage is larger.

 

 

6. Stability over time

Stability over time describes how well the temperature remains stable over a longer period. The temperature needs to be stable for a certain time, as different sensors may have different thermal characteristics and take different amounts of time to stabilize. If the temperature is constantly creeping up and down, different sensors may read different temperatures.

If there is some fluctuation in the temperature, an external reference sensor will still give more accurate results than an internal reference sensor.

Dry block manufacturers often give a stability specification, for example for a 30-minute period.

 

7. Don’t be in a hurry!

It’s good to remember that a temperature sensor always measures only its own temperature, not the temperature of the location where it is installed.

Also, temperature changes pretty slowly, and it takes some time before all parts of the system have stabilized to the same temperature, i.e. the system has reached equilibrium.

If you make a temperature calibration with a dry block too fast, that will be the biggest source of uncertainty! 

So, get to know your system and the sensors you calibrate, and experiment to see how long is enough for sufficient stabilization.

Especially if you use the internal reference sensor, remember that it reaches the set temperature much faster than the sensors to be calibrated in the insert. That is because the internal sensor is in the block that is heated/cooled, while the sensors to be calibrated are in the insert. Taking the results too soon can cause a big error.

In the case of an external reference sensor, the required stabilization time depends on how different your reference sensor is from the sensors to be calibrated. If they have different diameters, they will most likely have different stabilization times. Even so, an external reference sensor will be much more accurate than an internal one if you don’t wait long enough for stabilization.

Often a dry block has a stability indicator, but it may be measuring the stability of the internal reference sensor, so don’t rely on that alone.

Summary – in short, if you do the temperature calibration too fast, the results will be terrible.

 

Temperature sensor stability - Uncertainty components of a temperature calibration using a dry block. Beamex blog article.

 

The above picture illustrates an (exaggerated) example where the temperature set point is first 10°C and is changed to 150°C at the 5-minute mark (the blue line represents the set point).

There are two sensors in the dry block – a reference sensor and a sensor to be calibrated.

We can see that Sensor 1 (red line) changes much faster and reaches the final temperature at about the 11-minute mark. Sensor 2 (green line) changes much more slowly and reaches the final temperature at around the 18-minute mark.

Sensor 1 is our reference sensor and Sensor 2 is the sensor to be calibrated. If we read the temperatures too early, at the 10-minute mark, we get a huge error (about 85°C) in our results. Even if we take the readings at the 15-minute mark, there is still a difference of around 20°C.

So, we should always wait long enough to make sure that all sensors have stabilized at the new temperature before we take the readings.
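To illustrate the same principle with numbers, here is a minimal sketch (not the data behind the picture above) that models both sensors as simple first-order lags towards the set point; the time constants are assumed values chosen just for the example:

```python
import math

T_START, T_SET = 10.0, 150.0     # °C, as in the example above
TAU_REF, TAU_DUT = 1.5, 3.0      # assumed time constants, minutes

def sensor_temperature(minutes_after_step, tau):
    """First-order response of a sensor, some minutes after the set point step."""
    return T_SET + (T_START - T_SET) * math.exp(-minutes_after_step / tau)

for minutes in (2, 5, 10):
    ref = sensor_temperature(minutes, TAU_REF)
    dut = sensor_temperature(minutes, TAU_DUT)
    print(f"{minutes:>2} min after step: ref {ref:6.1f} °C, "
          f"DUT {dut:6.1f} °C, difference {ref - dut:5.1f} °C")
```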

 

Summary

Performing a temperature sensor calibration using a dry block seems like a pretty simple and straightforward thing to do. There are nevertheless many possible sources of uncertainty and error that should be taken into account.

Often the biggest uncertainties may come from the procedure on how the calibration is done, not necessarily from the specifications of the components.

For example, you may have an accurate dry block with a combined total uncertainty of 0.05°C and a high-quality reference sensor with an uncertainty of 0.02°C. But calibrating a temperature sensor with these devices can still result in an uncertainty of several degrees if the calibration is not done properly.
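To make the point concrete, below is a hedged sketch of how the individual components discussed in this article could be combined with the common root-sum-of-squares method. The values are purely illustrative, and in a real budget each component should first be converted to a standard uncertainty based on its distribution:

```python
import math

components_c = {                      # illustrative values only, in °C
    "dry block internal measurement": 0.05,
    "reference sensor":               0.02,
    "axial homogeneity":              0.10,
    "difference between borings":     0.02,
    "loading effect":                 0.05,
    "stability over time":            0.02,
    "insufficient stabilization":     0.30,   # procedure-related, often dominates
}

combined = math.sqrt(sum(u ** 2 for u in components_c.values()))
expanded = 2 * combined               # coverage factor k = 2 (~95 % confidence)
print(f"Combined: {combined:.3f} °C, expanded (k=2): {expanded:.2f} °C")
```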

That is one reason I don’t like discussions of TAR (Test Accuracy Ratio): it does not take into account the uncertainties caused by the calibration procedure, only the accuracy specifications.

I hope the considerations listed in this article help you recognize the possible sources of uncertainty and how to minimize them.

 

If you want to download this article as a free pdf file, please click the below button: 

Uncertainty components of a temperature dry block

 

Related blog posts

The main topics discussed in this article are temperature calibration and calibration uncertainty. Other blog posts on these topics that you might also be interested in include, for example, the following:

 

Beamex offering

The Beamex MC6-T is a revolutionary temperature calibration tool that provides excellent accuracy and uncertainty for temperature calibration. Please click the picture below to learn more:

Beamex MC6-T temperature calibrator

 

Beamex also offers various other temperature calibration products; please check our offering via the link below:

Beamex temperature calibration products

 

Please subscribe & suggest topics

If you like these blog articles, please subscribe to this blog by entering your email address in the "Subscribe" box on the upper right-hand side. You will be notified by email when new articles are available, normally about once a month.

Also, please feel free to suggest good and interesting topics for new articles!

 

  

Topics: Temperature calibration, calibration uncertainty

AMS2750E Heat Treatment Standard and Calibration

Posted by Heikki Laurila on Jun 20, 2018

AMS2750 Heat treatment furnace - Beamex blog post

Update July 2020: Please note that a new F version (AMS2750F) has been released in June 2020.  

 

In this blog post, I will take a short look at the AMS2750E standard, with a special focus on the requirements set for calibration, calibration accuracy and test/calibration equipment.

The AMS2750E is predominantly designed for heat treatment in the aerospace industries. Heat treatment is an essential process for many critical parts of an airplane, so it is understandable that there are tight regulations and audit processes set.

While the results and success of some other industrial processes can be relatively easily measured after the process, this is not the case in a heat treatment process. Therefore, very tight control and documentation of the heat treatment process is essential to assure the quality of the end products.

Download White Paper: AMS2750E Heat Treatment Standard and Calibration  

 

AMS2750 standard

As mentioned, the AMS2750E is a standard for heat treatment. The “AMS” in the standard’s name is an abbreviation of “Aerospace Materials Specifications”. The standard is published by SAE Aerospace, part of the SAE International Group. The first version of the AMS2750 standard was published in 1980, followed by revision A in 1987, B also in 1987, C in 1990, and D in 2005. The current revision, AMS2750E, was published in 2012.

The AMS2750 standard was initially developed to provide consistent specifications for heat treatment through the aerospace supply chain. The use of the standard is audited by PRI (Performance Review Institute) for the Nadcap (National Aerospace and Defense Contractors Accreditation Program). Prior to Nadcap, aerospace companies each audited their own suppliers, so there was a lot of redundancy and duplication of efforts. In 1990, the PRI was established to administer the Nadcap program.

 

AMS2750E scope

According to the standard itself, the scope of the AMS2750E standard is the following:

"This specification covers pyrometric (high temperature) requirements for thermal processing equipment used for heat treatment. It covers temperature sensors, instrumentation, thermal processing equipment, system accuracy tests, and temperature uniformity surveys. These are necessary to ensure that parts or raw materials are heat treated in accordance with the applicable specification(s)."

 

Why for heat treatment?

In some industrial processes, it is relatively easy to measure and check the quality of the final product and judge if the product fulfills the requirements after the process is complete. You may be able to simply measure the end product and see if it is good or not.

In other processes where it is not possible/easy/practical to measure the quality of the final product you need to have a very tight control and documentation of the process conditions, in order to be sure that the final product is made according to the requirements.

It is easy to understand that heat treatment is a process where you need to have a very good control of the process in order to assure that you get the required end product, especially since the products are mostly used by the aerospace industry.  

 

Who is it for?

The AMS2750E is predominantly designed for the aerospace industries. But the same standards and processing techniques can be used within any industry which requires control of the thermal processing of raw materials and manufactured components, such as automotive, rail and manufacturing.

 

But what is the CQI-9?

The CQI-9 is a similar set of requirements for heat treatment, mainly aimed at the automotive industry. The first edition of CQI-9 was published in 2006. The CQI-9 “Heat Treatment System Assessment” is a self-assessment of the heat treatment system, published by AIAG (Automotive Industry Action Group). More details about CQI-9 maybe in another post later on...

 

Test instruments and calibration

Let’s discuss Test Instruments (calibrators) and what AMS2750E says about them.

Traceable calibration of the different levels of measurement instruments is obviously required. The higher-level standards are typically calibrated in an external calibration laboratory. The process measurements are calibrated internally using “field test instruments”.

Metrological traceability is often described as a traceability pyramid or as a traceability chain; see below:

 

Traceability pyramid:

Metrological traceability pyramid - Beamex     

 

Traceability chain:

Metrological traceability chain - Beamex

 

To learn more about the metrological traceability in calibration read the following blog post:

Metrological Traceability in Calibration – Are you traceable?

 

The magical "Table 3" 

In Table 3 of the AMS2750E standard, there are different specifications for the test standards and test equipment/calibrators. The different levels of instruments are classified as follows:

  • Reference standard
  • Primary standard
  • Secondary standard instrument
  • Secondary standard cell
  • Field test instrument
  • Controlling, monitoring or recording instruments

For each instrument class, there are specifications for the calibration period and calibration accuracy. Calibrators/calibration equipment are typically used as “field test instruments” or sometimes as “secondary standard instruments”, and the following is said about those:

Secondary standard instrument

  • Limited to laboratory calibration of field test instruments, system accuracy test sensors, temperature uniformity survey sensors, load sensors and controlling, monitoring or recording sensors 

Field test instrument

  • Calibration of controlling, monitoring, or recording instrument, performance of system accuracy tests, and temperature uniformity surveys

 

AMS2750E accuracy requirements

AMS2750E also specifies the calibration period and accuracy requirements for the different levels of instruments, below is what is said about the secondary standard instrument and field test instrument:

 AMS2750E heat treatment Table 3 - Beamex

 

Sometimes it is easier to look at a visual, so let's look at the required calibration accuracy graphically for the “field test instrument” and the “secondary standard instrument”. And as Centigrade and Fahrenheit are different, below is a graph of each for your convenience:

 

AMS2750 calibration accuracy - Beamex

 

AMS2750 calibration accuracy - Beamex

 

Contradiction with different thermocouples types and accuracy

The AMS2750E standard specifies different thermocouple types for different usage. Types B, R and S are included for more demanding use, while types J, E, K, N, T are also included in the standard.

However, the standard has the same accuracy specification regardless of the thermocouple type. This is a slightly strange requirement, as different thermocouple types have very different sensitivities.

In practice, this means that a field test instrument (calibrator) normally has an accuracy specification for millivoltage, and when this mV accuracy is converted to temperature, the calibrator ends up with different specifications for different thermocouple types. Some thermocouple types have very low sensitivity (the voltage changes very little as the temperature changes), especially at the lower end of their range.

For example, a calibrator can have an electrical specification of 4 microvolts at 0 V. With a K type, this 4 µV equals a temperature of 0.1 °C (0.2 °F), but for an S type it equals 0.7 °C (1.3 °F), and for a B type it equals almost 2 °C (3.6 °F). Therefore, calibrators normally have very different accuracy specifications for different thermocouple types.

So the fact that the standard specifies the same accuracy regardless of the thermocouple type is a somewhat strange requirement.
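The conversion behind that example is simply the voltage specification divided by the thermocouple's sensitivity (Seebeck coefficient) at the temperature of interest. The sensitivities below are rough, assumed values near the low end of the range; for real work, use the reference functions of the relevant thermocouple standard:

```python
# Sketch: converting a calibrator's voltage accuracy into a temperature error
# for different thermocouple types. Sensitivities are approximate assumptions.
voltage_spec_uV = 4.0

approx_sensitivity_uV_per_C = {
    "K": 40.0,   # roughly 40 µV/°C near 0 °C
    "S": 6.0,    # roughly 6 µV/°C near 0 °C
    "B": 2.0,    # very low sensitivity at low temperatures
}

for tc_type, sensitivity in approx_sensitivity_uV_per_C.items():
    error_c = voltage_spec_uV / sensitivity
    print(f"Type {tc_type}: {voltage_spec_uV:.0f} µV is roughly {error_c:.2f} °C")
```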

To illustrate the different sensitivities of the thermocouple types, please see the graph below. It shows the thermovoltage (emf) generated at different temperatures by different thermocouple types:

 

Thermocouple emf voltage versus temperature - Beamex

 

To learn more about thermocouples, different thermocouple types and thermocouple cold junction compensation, please read this blog post:

Thermocouple Cold (Reference) Junction Compensation

 

AMS2750E contents in a nutshell

Let’s take a brief look at the contents of the AMS2750E standard and further discuss a few key points in the standard.

The AMS2750E standard starts with sections:

  • 1. Scope
  • 2. Applicable documents

Chapter 3 “Technical Requirements” of AMS2750E includes the following key sections (discussed in more detail in the next chapters):

  • 3.1  Temperature sensors
  • 3.2. Instrumentation
  • 3.3. Thermal processing equipment
  • 3.4. System Accuracy Tests (SAT)
  • 3.5. Furnace Temperature Uniformity Survey (TUS)
  • 3.6. Laboratory furnaces
  • 3.7. Records
  • 3.8. Rounding

The remaining sections are:

  • 4. Quality assurance provisions
  • 5. Preparation for delivery
  • 6. Acknowledgement
  • 7. Rejections
  • 8. Notes

 

3.1 Temperature sensors

Section 3.1 discusses temperature sensors. Some key bullets from that section:

  • The AMS2750E standard specifies the thermocouple sensors to be used, as well as the sensor wire types.
  • The voltage to temperature conversion standard to be used (ASTM E 230 or other national standards).
  • Correction factors may be used to compensate for the errors found in calibration.
  • The temperature range for the sensors used.
  • Allowance to use wireless transmitters.
  • Contents of a sensor calibration certificate.
  • The maximum length of sensor wire/cable.
  • The maximum number of uses of thermocouples at different temperatures.
  • Types of thermocouple sensors to be used, the use for thermocouples (primary calibration, secondary calibration, sensor calibration, TUS, SAT, installation, load sensing), calibration period for thermocouples, and maximum permitted error.

 

3.2 Instrumentation

Section 3.2 covers the instrumentation that the sensors are used with. This includes control, monitoring, recording, calibration, instrumentation, etc.

  • Instruments need to be traceably calibrated.
  • Minimum resolution/readability of test instruments (1 °F or 1 °C).
  • Specifications for electronic records.
  • Contents of calibration sticker:
    • Date, due date, performed by, any limitations
  • Contents of calibration record:
    • Instrument identification, make and model, standard(s) used, calibration method, required accuracy, as found and as left data of each calibration point, offset, found/left, sensitivity, statement of acceptance or rejection, any limitations or restrictions, calibration date, due date, performed by, calibration company, signature, quality organization approval.

 

3.3 Thermal processing equipment

Section 3.3 discusses the furnace classification and the temperature uniformity requirements of each class, ranging from class 1 with a uniformity requirement of ±5 °F / ±3 °C to class 6 with ±50 °F / ±28 °C.

 

3.4 System accuracy test (SAT)

Section 3.4 discusses the system accuracy tests (SAT). The SAT is an on-site test where the whole measurement loop (instrument / lead wire / sensor) is calibrated using appropriate calibration equipment. This is typically done by placing a reference thermocouple close to the thermocouple to be calibrated and comparing the read-out of the measurement loop to the reference.

SAT shall be performed with a “field test instrument,” specified in the standard’s Table 3. SAT should be performed periodically or after any maintenance. SAT interval is based on equipment class and instrumentation type.

SAT test records shall include:

  • identification of sensor calibrated
  • ID of reference sensor
  • ID of test instrument
  • date and time
  • set points
  • readings of furnace under test
  • test instrument readings
  • test sensor correction factors
  • corrected test instrument reading
  • calculated system accuracy difference
  • an indication of acceptance or failure
  • who performed the test
  • signature
  • quality organization approval

 

3.5 Temperature uniformity surveys (TUS)

Section 3.5 is about furnace temperature uniformity survey (TUS). The TUS is the testing of the temperature uniformity in all sections/zones of the furnace in the qualified operating range. An initial TUS needs to be performed for any new, modified (example modifications are listed in the standard) or repaired furnace, and thereafter it should be performed in accordance with the interval specified in the standard. For furnaces with multiple qualified operating ranges, TUS shall be performed within each operating range.

There are many detailed specifications for TUS testing in the AMS2750E standard.

The TUS report shall include:

  • furnace identification
  • survey temperatures
  • sensor location and identification including detailed diagrams
  • time and temperature data from all sensors
  • correction factors for sensors in each temperature
  • as found and as left offsets
  • corrected/uncorrected readings of all TUS sensors at each temperature
  • testing company identification and signature
  • identification of the person who performed the survey
  • survey start date and time
  • survey end date and time
  • test instrumentation identification
  • identification of pass or fail
  • documentation of sensor failures (when applicable)
  • summary of corrected plus and minus TUS readings at each temperature after stabilization
  • quality organization approval

 

Q&A session with Deutsche Edelstahlwerke 

We had a Questions & Answers session with Mr. Julian Disse (Team Coordinator Quality Assurance Special Standards) at Deutsche Edelstahlwerke and discussed the challenges of following the AMS2750 standard, audits, measurements, calibrations, sensors, traceability and other topics.

Please download the adjacent white paper to read the discussion. 

 

An example customer case story

Here's an example case story from the company Trescal, UK. They are a calibration service provider for aerospace customers and need to follow the AMS2750 standard. Trescal has found Beamex calibrators (MC2, MC5 and MC6) to be a good fit for the work they do. Click the link below to read the Trescal case story:

Case story: Trescal, UK - Extreme accuracy calibrations for aerospace giant

 

Summary

The AMS2750E specifications set a high standard for the aerospace industry. After reviewing sensor technology and the challenges for test equipment to make proper measurements, meeting accuracy requirements takes careful analysis and continuous supervision. It should be noted that the AMS2750E specifications are not easily met and accurate test equipment must be utilized. By addressing calibration requirements up front, maintenance personnel will be equipped with the proper tools and procedures to not only maintain compliance but ensure the best product quality. Good sensor measurements set the stage for good process control with repeatable results – a good formula for staying in business.

 

Download the free white paper

You can download the free pdf white paper by clicking the picture below:

AMS2750E Heat Treatment Standard and Calibration - Beamex White Paper

 

Beamex solutions for AMS2750E

Beamex offers various temperature calibration products that can be used (and are being used) in an AMS2750E environment. You can find detailed information about our offering on our website via the link below:

Beamex temperature calibration products

Please contact us for information on how our products can be used in an AMS2750 environment.

 

Related blog posts

If you found this article interesting, you might also like these blog posts:

 

Please subscribe

If you like these blog articles, please subscribe to this blog by entering your email address in the "Subscribe" box on the upper right-hand side. You will be notified by email when new articles are available, normally about once a month.

Also, feel free to suggest topics for new articles!

 

 

Topics: AMS2750, heat treatment

Using Metrology Fundamentals in Calibration to Drive Long-Term Value

Posted by Chuck Boyd on May 30, 2018

Power-plant_reflection_1024px-602996-edited

 

This article discusses some critical items to address for a calibration program based on sound metrology fundamentals without a complete overhaul of the calibration program. Having properly calibrated process control instrumentation provides a high quality of process control, a process that will operate to design specifications, and prevents the process from being stressed as it compensates for inaccurate measurement data feeding the DCS. Realization of these benefits may be challenging to quantify and attribute to implementing any of the suggested changes, but conversely, implementation of the changes should not be extraordinarily burdensome on resources.  

Introduction

The science of metrology is seemingly calm on the surface but has extensive depth of very technical concepts. Metrology is the science of measurement and incorporates aspects of many diverse fields such as mathematics, statistics, physics, quality, chemistry and computer science, all applied with a little common sense. Because metrology work is interspersed with other job duties, many rely on knowledge of metrology, but the science is intimately understood by only a small percentage. Most often a diverse educational background is found across the maintenance stakeholders in a power plant, and most, if not all, of the metrology knowledge is learned on the job.

Many times calibration programs are based on the minimal definition of calibration, which is comparing an instrument’s measurement to a known standard, followed by documentation of the results. With the lean staffing levels typical in for example power plant maintenance groups today, it’s natural for these programs to evolve out of expediency. This expediency, and the lack of defined metrologist roles on these staffs, inhibits the development of a program that includes additional metrology fundamentals above and beyond what is needed to get the job done.   

The typical Electrical & Instrumentation (E&I) Manager has responsibility over a wide range of electrical equipment and instrumentation to support operations of the plant. The Manager’s purview includes management of E&I department staff, safety and environmental leadership, preventive maintenance, predictive maintenance and critical repair activities, and working with internal and external groups on special projects among many other areas of accountability. While instrument calibration is a critical activity, completion of the task requires a relatively small amount of focus, all else being considered. There are instrument calibration requirements critical to maintaining compliance with environmental regulations defined by the Environmental Protection Agency such as Mercury and Air Toxics Standards (the MATS rule) and regulation of greenhouse gases (GHGs) under the Clean Air Act, employer responsibility to protect workers from hazardous conditions defined by the Occupational Safety and Health Administration, and reliability standards defined by the North American Electric Reliability Corporation. Of course, nuclear-fueled generators must comply with additional regulatory requirements, but outside of complying with these requirements, natural gas and coal fueled generators are self-governed with regard to the balance of their instrumentation.

At a minimum, a competent calibration program ensures instruments are calibrated on a schedule, the results are documented for audit purposes, and there is traceability. A competent calibration program helps maintain safety and production, but calibration is also a matter of profitability. Instruments measuring more accurately can improve safety, allow increased energy production, and reduce the stress on equipment. Unfortunately, the nature of the benefits presents a major challenge; unlike a metric such as labor savings, these benefits are extremely difficult to quantify for use in justifying the cost of change.

When instruments are calibrated with a traceable standard and the results are documented, many consider this to be adequate and that no change is necessary. This position is bolstered by the very nature of how maintenance is scheduled. Months of planning go into an outage, and when the time to execute arrives, challenged with tight resources and tight schedules, the work must be accomplished as expeditiously as possible. All unnecessary steps must be eliminated so as not to jeopardize the outage schedule. Adding steps to the process, which may be necessary to improve it, therefore runs counter to this mindset. E&I leadership must have the foresight to implement strategic change in order to realize the benefits of improvement.

Implementing metrology-based principles does not have to be a dramatic change. A substantial positive impact to quality can be realized by making some adjustments and tweaking the existing calibration program. These changes are easy to implement and simultaneously will reinforce a culture change focusing more on metrology aspects of calibration. Metrology as a science has an immense number of elements to consider, but initially focusing on the following areas will provide huge strides in building and maintaining a calibration program that provides confidence in measurement accuracy for process control instrumentation:

  • Measurement tolerance and pass/fail determination
  • Test strategy including hysteresis
  • Maintaining acceptable Test Uncertainty Ratios
  • Securing information assets

 

Measurement tolerance and pass/fail determination

The calibration tolerance assigned to each instrument is the defining value used to determine how much measurement error is acceptable. This decision should rely heavily on the requirements of the process, not on what the instrument is capable of performing. Ideally, the tolerance is a parameter that is set in process development, where the effect of variation is measured. Unfortunately, there is no hard-and-fast formula for developing a tolerance value; it should be based on some combination of the process requirement, the manufacturer's stated accuracy of the instrument, the criticality of the instrument and its intended use. Care should be taken not to set a range too tight, as it will put pressure on the measurement to be unnecessarily accurate.

Tolerance can be stated as a unit of measure, a percentage of span, or a percentage of reading. It is critical during calibration to mathematically calculate the error value in order to determine pass/fail status. This calculation is an additional step in the process, and particularly with tolerances defined as a percentage of span or reading, the math invites opportunities for errors. As calibration programs evolve, this aspect of the calibration process can get relegated to the calibration technician's discretion. There have been occasions where the pass/fail decision is left to technician experience, gut feel, or asking another technician for input. This practice provides questionable results: although the resulting calibration certificate may show the measurement as within tolerance, the instrument is recorded as within tolerance in perpetuity when in fact this result was never mathematically confirmed. More importantly, plant operators could be making decisions based on wrong data. This method of determining pass/fail should not be allowed; the determination should be enforced either procedurally, by requiring the error limits and the calculated error to be recorded, or programmatically, by entering the inputs into a computer-based system where pass/fail is indicated automatically.
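As a sketch of the programmatic approach, the calculation itself is simple: express the error in the same terms as the tolerance (here, percent of span) and compare. The instrument, readings and tolerance below are hypothetical, illustrative values:

```python
def pass_fail(applied, measured, span_low, span_high, tolerance_pct_of_span):
    """Return the error as % of span and a pass/fail flag."""
    span = span_high - span_low
    error_pct = (measured - applied) / span * 100.0
    return error_pct, abs(error_pct) <= tolerance_pct_of_span

# Hypothetical example: 0-200 kPa transmitter, 0.25 % of span tolerance
error, ok = pass_fail(applied=100.0, measured=100.6,
                      span_low=0.0, span_high=200.0,
                      tolerance_pct_of_span=0.25)
print(f"Error {error:+.2f} % of span -> {'PASS' if ok else 'FAIL'}")
```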

 

Pass/fail determination illustration

 

Test strategy including Hysteresis

Hysteresis errors occur when the instrument responds differently to an increasing input compared to a decreasing input; they are almost always caused by mechanical friction on some moving element (see Figure 1). These types of errors can rarely be rectified by simply making calibration adjustments and typically require replacement of the instrument or correction of the mechanical element that is causing friction against a moving part. This is a critical error, as it indicates that the instrument is probably failing.

 

Hysteresis illustration (Figure 1)

 

Most calibration test strategies include a test point at zero (0%), a test point at span (100%), and typically at least one more test point at mid-range (50%). This 3-point test can be considered a good balance between efficiency and practicality during an outage. The only way to detect hysteresis, however, is to use a testing strategy that includes test points going up the span and test points coming back down the span. Critical in this approach is that the technician does not overshoot a test point and reverse the source signal, approaching the test point from the wrong direction. Technicians should be instructed to return to the previous test point and approach the target point again from the proper direction.
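As an illustration of what the up/down strategy buys you, the sketch below compares readings taken while approaching each test point from below and from above; the readings are invented for the example:

```python
# Readings in % of span at the 0/50/100 % test points (illustrative values)
up_readings   = {0: 0.02, 50: 49.85, 100: 99.98}   # approached from below
down_readings = {0: 0.10, 50: 50.20, 100: 99.97}   # approached from above

for point in (0, 50, 100):
    hysteresis = down_readings[point] - up_readings[point]
    print(f"{point:>3} % point: hysteresis {hysteresis:+.2f} % of span")
# A consistent up/down difference points to mechanical friction rather than
# something that a simple zero/span adjustment can fix.
```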

 

Maintaining acceptable Test Uncertainty Ratios

Measurement uncertainty is an estimate of the error associated with a measurement. In general, the smaller the uncertainty, the higher the accuracy of the measurement. The uncertainty of the measurement standard (i.e. the calibrator) is the primary factor considered, along with potential errors introduced in the calibration process, to get an estimate of the calibration uncertainty, typically stated at a 95% confidence level (k=2). The comparison between the accuracy of the instrument under test and the estimated calibration uncertainty is known as the Test Uncertainty Ratio (TUR).

It can only be confirmed that the instrument measurement is within tolerance if the uncertainty of the standard used to calibrate the instrument is known. Once the tolerance is defined, a good rule of thumb is that the uncertainty of the measurement standard should not exceed 25% of the acceptable tolerance. This 25% equates to a TUR of 4:1; the standard used is four times more accurate than the instrument being checked. With today's technology, a TUR of 4:1 is becoming more difficult to achieve, so accepting the risk of a lower TUR of 3:1 or 2:1 may have to be considered.

Another challenge in many plants is legacy measurement standards. These are standards whose measurement uncertainty was acceptable compared to the process control instruments of their day, but which yield very low ratios today. These standards have not been replaced throughout multiple automation system upgrades over the years. Automation suppliers continue to evolve the technology, yielding more and more accurate measurement capability, to the point where some plants may struggle to achieve even a 1:1 TUR. It should be determined whether the standards used in the calibration program are fit for purpose by confirming each unit's uncertainty, defining tolerances, and using the two values to mathematically calculate the TUR. This exercise will provide confidence that the standard being used is sufficient for the measurement being made.
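That fit-for-purpose check can be reduced to a couple of lines; the tolerance and uncertainty values here are illustrative only:

```python
instrument_tolerance = 0.5   # assigned tolerance of the instrument under test
standard_uncertainty = 0.1   # calibration process uncertainty (k=2), same units

tur = instrument_tolerance / standard_uncertainty
verdict = "meets the 4:1 rule of thumb" if tur >= 4 else "lower ratio - assess the added risk"
print(f"TUR = {tur:.1f}:1 ({verdict})")
```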

 

Test Uncertainty Ratio illustration

 

Securing information assets

Information is one of the company’s most valuable assets, but less so if activity is not fully documented, organized efficiently, and easily accessed by those that need the information. Not meeting these characteristics carries a cost to the business in the amount of time and resources to gather the information, and the errors/mistakes made due to inaccurate, incomplete, or outdated data. These costs are magnified with regard to calibration data where defined metrology-based parameters directly impact the quality of process control. For a sustainable calibration program, there must be a point of reference to serve as a definition or benchmark for applying metrology principles.

Harnessing this type of information should be a top priority as the metrology data clearly provides competitive advantage in higher quality calibration work and higher efficiency in execution. An exacerbating circumstance for this issue is the loss of personnel, who are responsible for development and management of the calibration program and in possession of the knowledge base. These losses occur when they leave the company or make internal job changes. This includes the phenomenon of the aging workforce such as baby boomers leaving the workforce at an accelerated rate. With the exit of experienced, skilled workers, critical knowledge will slowly be drained from E&I groups. The industrialized world is transitioning into what is known as the knowledge economy; the concept that success is increasingly based on effective utilization of knowledge as the key resource for competitive advantage. With the attrition of highly skilled workers requiring the replacement with much less experienced workers, this knowledge will be critical in getting them productive as quickly as possible.  

 

Conclusion

Calibration and metrology have inherent complexities. Metrology is a specialized and highly technical subject, and metrology subject matter experts make up a fraction of the overall population of maintenance personnel in the power generation industry. Whereas the desire to maximize electrical output requires maximum availability, reliability and efficiency of the plant, the drive in the industry is to reduce costs, in part by running lean, with little chance of a dedicated metrologist role. The health and accuracy of measurement and control devices directly impacts the plant's reliability and up-time, so the resolve to make quality improvements to the calibration program is justified.

Transforming the calibration program doesn’t have to be a resource intensive, immense undertaking. In the absence of a dedicated project, formally managed and resourced, implementing a high-performing calibration program progressively by strategically focusing on specific weaknesses is possible. The effort will require dedicating some time on behalf of select E&I stakeholders to see the initiative through, but weak areas of the metrology program and corrective actions can be documented to show progress.

The specific subject areas highlighted in this paper were selected because they are often overlooked, based on Beamex experience working with various power plants. Corrective action taken on these areas will provide solid strategic improvement in measurement accuracy and enhance the plant’s ability to control its process within design limits. Failure to address these areas will continue the plant on a trajectory that will incur avoidable cost due to additional stress on the plant and lost revenue due to substandard heat rate.

 

Download this article as a pdf file by clicking the picture below:

Metrology fundamentals in calibration to drive long-term value - Beamex White Paper 

 

Related blog articles

Other blog articles related to this topic you might be interested in:

Please browse the Beamex blog for more articles.

 

Related Beamex offering

 Please check the Beamex web site for products and services we offer to help you.

 

 

Topics: Calibration process

Pt100 temperature sensor – useful things to know

Posted by Heikki Laurila on Apr 17, 2018

banner-Pt100-sensors_2000px_v1-635410-edited

Edit October 2023: The Tolerance (Accuracy) Classes edited per IEC 60751:2022.

Pt100 temperature sensors are very common sensors in the process industry. This blog post discusses many useful and practical things to know about them, including information on RTD and PRT sensors, different Pt100 mechanical structures, the temperature-resistance relationship, temperature coefficients, accuracy classes and much more.

A while back I wrote about thermocouples, so I was thinking it’s time to write about RTD temperature sensors, especially on the Pt100 sensor which is a very common temperature sensor. This blog ended up being pretty long as there is a lot of useful information to share. I hope you like it and that you learn something from it. Let’s get into it!

 

Table of contents

 

A note on terminology: both “sensor” and “probe” are widely used; I mainly use “sensor” in this article.

Also, "Pt100" and "Pt-100" are both being used, but I will mainly use the Pt100 format. (Yep, I know that IEC / DIN 60751 uses Pt-100, but I am so used to writing Pt100).

 

Just give me this article in pdf format! Click the link below to download pdf:

White paper: Pt100 temperature sensor - useful things to know

 

RTD temperature sensors

As the Pt100 is an RTD sensor, let’s look first at what an RTD sensor is.

The abbreviation RTD is short for “Resistance Temperature Detector.” An RTD is a temperature sensor in which the resistance depends on temperature: when the temperature changes, the sensor’s resistance changes. So, by measuring its resistance, an RTD sensor can be used to measure temperature.

RTD sensors are most commonly made from platinum, copper, nickel alloys, or various metal oxides and the Pt100 is one of the most common.

 

PRT temperature sensors

Platinum is the most common material for RTD sensors. Platinum has a reliable, repeatable, and linear temperature-resistance relationship. RTD sensors made of platinum are called PRTs, “Platinum Resistance Thermometers.” The most common PRT sensor used in the process industry is the Pt100 sensor. The number “100” in the name indicates that it has a resistance of 100 ohms at 0°C (32°F). More details on that later.

 

PRTs versus thermocouples

In an earlier blog post, we discussed thermocouples, which are also used as temperature sensors in many industrial applications. So, what’s the difference between a thermocouple and a PRT sensor? Here’s a short comparison:

Thermocouples

  • Can be used to measure much higher temperatures
  • Very robust
  • Inexpensive
  • Self-powered, does not need external excitation
  • Not very accurate
  • Requires cold junction compensation
  • Extension wires must be made of suitable material for the thermocouple type
  • Attention must be paid to temperature homogeneity across all junctions in the measurement circuit
  • Inhomogeneities in wires may cause unexpected errors

PRTs:

  • More accurate, linear and stable than thermocouples
  • Do not require cold junction compensation 
  • Extension wires can be made of copper
  • More expensive than thermocouples
  • Need a known excitation current suitable for the sensor type
  • More fragile

 

In short, thermocouples are more suitable for high-temperature applications and PRTs for applications that require better accuracy.

More information on thermocouples and cold junction compensation can be found in this earlier blog post:

Thermocouple Cold (Reference) Junction Compensation

 

Measuring a RTD/PRT sensor

Since an RTD sensor’s resistance changes when the temperature changes, it is pretty clear that when measuring an RTD sensor you need to measure resistance. You can measure the resistance in ohms and then convert it manually into a temperature according to the conversion table (or formula) of the RTD type being used.

More commonly nowadays, you use a temperature measurement device or calibrator that automatically converts the measured resistance into a temperature reading. This requires the correct RTD type to be selected in the device (assuming it supports the RTD type used). If the wrong RTD sensor type is selected, it will result in incorrect temperature measurement results.

There are different ways to measure resistance. You can use a 2, 3, or 4 wire connection. The 2-wire connection is only suitable for very low accuracy measurement (mainly troubleshooting) because any wire resistance or connection resistance will introduce error to the measurement.

Granted, for some high-impedance thermistors, Pt1000 sensors, or other high-resistance sensors, the additional error caused by a 2-wire measurement may not be too significant.

Any normal process measurement should be done using 3 or 4 wire measurement.

For example, the IEC 60751 standard specifies that any sensor with higher than class B accuracy must be measured using a 3 or 4 wire measurement. More on the accuracy classes later in this article.

Just remember to use a 3 or 4 wire measurement and you are good to go.

More information on 2, 3, and 4 wire resistance measurement can be found in the blog post below:

Resistance measurement; 2, 3 or 4 wire connection – How does it work and which to use?

 

Measurement current

As explained in the above-linked blog post in more detail, when a device is measuring resistance it sends a small accurate current through the resistor and then measures the voltage drop generated over it. The resistance can then be calculated by dividing the voltage drop by the current according to Ohm’s law (R=U/I).
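As a tiny sketch of what the measuring device does internally, assume a 1 mA excitation current and a measured voltage drop roughly corresponding to a Pt100 at 100 °C (illustrative numbers):

```python
excitation_current_A = 1.0e-3   # 1 mA excitation current
voltage_drop_V = 0.13851        # voltage measured over the sensor

resistance_ohm = voltage_drop_V / excitation_current_A   # Ohm's law: R = U / I
print(f"Measured resistance: {resistance_ohm:.2f} ohm")
```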

If you are interested in more detailed info on  Ohm’s law, check out this blog post:

Ohm’s law – what it is and what an instrument tech should know about it

 

Self-heating

When the measurement current goes through the RTD sensor, it also causes the RTD sensor to warm slightly. This phenomenon is called self-heating. The higher the measurement current and the longer it is on, the more the sensor will warm. The sensor’s structure and its thermal resistance to its surroundings will also have a big effect on the self-heating. It is pretty obvious that this kind of self-heating in a temperature sensor will cause a small measurement error.

The measurement current is typically a max of 1 mA when measuring a Pt100 sensor, but it can be as low as 100 µA or even lower. According to standards such as IEC 60751, self-heating must not exceed 25% of the sensor’s tolerance specification.
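To get a feel for the magnitudes, here is a rough sketch of checking self-heating against that 25% rule. The self-heating coefficient is sensor- and installation-specific (an assumed value is used below), and the class A tolerance formula used for the limit is the one given in IEC 60751:

```python
current_A = 1.0e-3                 # 1 mA measurement current
resistance_ohm = 138.51            # Pt100 at about 100 °C
power_mW = current_A ** 2 * resistance_ohm * 1000.0   # dissipated power, mW

self_heating_coeff = 0.05          # assumed, °C per mW - check your sensor data sheet
self_heating_error = power_mW * self_heating_coeff

t = 100.0
class_a_tolerance = 0.15 + 0.002 * abs(t)    # IEC 60751 class A tolerance at t, °C
limit = 0.25 * class_a_tolerance             # 25 % of the tolerance

print(f"Self-heating about {self_heating_error:.3f} °C, allowed about {limit:.3f} °C")
```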

 

Mechanical structures of PRT sensors

PRT sensors are generally very delicate instruments and unfortunately, accuracy is almost without exception inversely proportional to mechanical robustness. To be an accurate thermometer, the platinum wire inside the element should be able to contract and expand as freely as possible as the temperature changes to avoid strain and deforming. The drawback is that this sort of sensor is very sensitive to mechanical shocks and vibration.

 

Standard Platinum Resistance Thermometer (SPRT)

Standard Platinum Resistance Thermometer (SPRT) sensors are highly accurate instruments for realizing the ITS-90 temperature scale between fixed points. They’re made from very pure (α = 3.926 × 10⁻³ °C⁻¹) platinum and the wire support is designed to keep the wire as strain-free as possible. The “Guide to the Realization of the ITS-90” published by the Bureau International des Poids et Mesures (BIPM) defines the criteria that SPRT sensors must fulfill. Other sensors are not and should not be called SPRTs. There are glass, quartz, and metal sheathed sensors for different applications. SPRTs are extremely sensitive to any kind of acceleration such as tiny shocks and vibrations, which limits their use to laboratories requiring the very highest accuracy measurements.

 

Partially supported PRTs

Partially supported PRTs are a compromise between thermometer performance and mechanical robustness. The most accurate PRTs are often called Secondary Standard or Secondary Reference sensors. These sensors may adopt some structures from SPRTs and the wire grade may be the same or very close. Due to some wire support, they are less fragile than SPRTs and are even usable for field applications if handled with care, offering excellent stability and low hysteresis.

 

Industrial Platinum Resistance Thermometers, IPRTs

When the wire support is increased, the mechanical robustness increases, but so do the strain-related drift and hysteresis issues. Fully supported Industrial Platinum Resistance Thermometers (IPRTs) have even more wire support and are mechanically very robust. The wire is encapsulated completely in ceramic or glass, making it highly resistant to vibration and mechanical shocks. The drawback is much poorer long-term stability and large hysteresis, as the sensing platinum is bonded to a substrate that has different thermal expansion characteristics.

Film PRTs

Film PRTs have evolved a lot in recent years and better ones are now available. They come in many forms for different applications. The platinum foil is sputtered onto the selected substrate; the resistance of the element is often laser-trimmed to the desired resistance value and eventually encapsulated for protection. Unlike wire elements, thin film elements make it easier to automate the manufacturing process, which often makes them cheaper than wire elements. The advantages and disadvantages are typically the same as with fully supported wire elements, except that film elements often have a very low time constant, meaning that they react very fast to temperature changes. As mentioned earlier, some manufacturers have developed techniques that better combine performance and robustness.

 

Other RTD sensors

Other platinum sensors

Although the Pt100 is the most common platinum RTD/PRT sensor, there are several others, such as the Pt25, Pt50, Pt200, Pt500, and Pt1000. The main difference between these sensors is pretty easy to guess; it is the sensor's resistance at 0°C, which is mentioned in the name. For example, a Pt1000 sensor has a resistance of 1000 ohms at 0°C. The temperature coefficient is also important to know, as it affects the resistance at other temperatures. If it is a Pt1000 (385), this means it has a temperature coefficient of 0.00385/°C.

 

Other RTD sensors

Although platinum sensors are the most common, there are also RTD sensors made of other materials, including nickel, nickel-iron and copper. Common nickel sensors include the Ni100 and Ni120, an example of a nickel-iron sensor is the Ni-Fe 604-ohm, and a common copper sensor is the Cu10. These materials each have their own advantages in certain applications. Common disadvantages are rather narrow temperature ranges and susceptibility to corrosion compared to the noble metal platinum.

RTD sensors can also be made with other materials like gold, silver, tungsten, rhodium-iron or germanium. They excel in some applications but are very rare in normal industrial operations.

Since an RTD sensor’s resistance depends on temperature, we could also include all generic positive temperature coefficient (PTC) and negative temperature coefficient (NTC) sensors in this category. Examples of these are thermistors and semiconductors that are used for measuring temperature. NTC sensors in particular are commonly used for measuring temperature.

 

Need a break? Download this article as a pdf to read when you have more time - just click the picture below.

Download the white paper

 

Pt100 sensors

Temperature coefficient

The most common RTD sensor in the process industry is the Pt100 sensor, which has a resistance of 100 ohms at 0°C (32°F).

With the same logical naming convention, a Pt200 sensor has a resistance of 200 ohms and a Pt1000 has  a resistance of 1000 ohms at 0°C (32°F).

The resistance of the Pt100 sensor (and other Pt sensors) at higher temperatures depends on the sensor version, as there are a few different ones with slightly different temperature coefficients. Globally, the most common is the 385 version. If the coefficient is not mentioned, it is typically the 385.

The temperature coefficient (indicated by the Greek letter alpha, α) of the Pt100 sensor is the difference between the resistance at 100°C and at 0°C, divided by the resistance at 0°C multiplied by 100°C.

The formula is pretty simple, but it does sound a bit complicated when written, so let’s look at it as a formula:

 

α = (R100 - R0) / (R0 × 100 °C)

Where:
α = temperature coefficient
R100 = resistance at 100°C
R0 = resistance at 0°C

 

Let’s take a look at an example to make sure this is clear:

Pt100 has a resistance of 100.00 ohms at 0°C and a resistance of 138.51 ohms at 100°C. The temperature coefficient can be calculated with the following equation:

 

α = (138.51 Ω - 100.00 Ω) / (100.00 Ω × 100 °C) = 0.003851 /°C

 

We get a result of 0.003851 /°C.

Or as it is often written: 3.851 × 10⁻³ °C⁻¹

Often this figure is rounded and the sensor is referred to as a “385” Pt100 sensor.

This is also the temperature coefficient specified in the IEC 60751:2008 standard.
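If you want to check the arithmetic yourself, here is a minimal Python sketch of the same calculation, using the resistance values from the example above:

    # Temperature coefficient (alpha) of a Pt100 sensor
    R0 = 100.00    # resistance at 0 °C, in ohms
    R100 = 138.51  # resistance at 100 °C, in ohms

    alpha = (R100 - R0) / (R0 * 100)   # per °C
    print(f"alpha = {alpha:.6f} /°C")  # prints: alpha = 0.003851 /°C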

 

The temperature coefficient of the sensor element mostly depends on the purity of the platinum used to make the wire. The purer the platinum, the higher the alpha value. Nowadays it’s not a problem to get very pure platinum material. For manufacturing sensors to meet the IEC 60751 temperature/resistance curve, the pure platinum must be mixed with suitable impurities to bring the alpha value down to 3.851 × 10⁻³ °C⁻¹.

The alpha value originates from the time when the melting point (≈0 °C) and the boiling point (≈100 °C) of water were used as reference temperature points, but it is still used to define the grade of the platinum wire. Since the boiling point of water is, in practice, a better altimeter than a temperature reference (it varies with ambient pressure), another way to define the wire purity is the resistance ratio at the gallium point (29.7646 °C), which is a defined fixed point on the ITS-90 temperature scale. This resistance ratio is represented by the Greek letter ρ (rho).

 

ρ = R(29.7646 °C) / R(0.01 °C)

 

A typical ρ value for a 385 sensor is 1.115817 and for an SPRT it is 1.11814. In practice, the good old alpha is, in many cases, the most convenient, but rho may also be stated.

 

 

Pt100 (385) temperature resistance relationship

 

In the graph below, you can see how a Pt100 (385) sensor’s resistance depends on temperature:

 

resistance vs temperature graph

 

Looking at the graph, you can see that the resistance-temperature relationship of a Pt100 sensor is not perfectly linear but slightly curved.

 

The table below shows the numerical values of a Pt100 (385) temperature vs. resistance at a few points:

temperature resistance table dots
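If you want to generate resistance values like those in the graph and table above yourself, below is a minimal Python sketch using the Callendar-van Dusen equation with the nominal IEC 60751 coefficients for a Pt100 (385) sensor. The coefficient values are the commonly published nominal ones, so treat the output as illustrative rather than as calibration data:

    # Nominal resistance of a Pt100 (385) sensor using the IEC 60751
    # Callendar-van Dusen equation (nominal, commonly published coefficients)
    R0 = 100.0       # ohms at 0 °C
    A = 3.9083e-3    # 1/°C
    B = -5.775e-7    # 1/°C^2
    C = -4.183e-12   # 1/°C^4, used only below 0 °C

    def pt100_resistance(t):
        """Resistance (ohms) of a nominal Pt100 (385) at temperature t (°C)."""
        if t >= 0:
            return R0 * (1 + A * t + B * t**2)
        return R0 * (1 + A * t + B * t**2 + C * (t - 100) * t**3)

    for t in (-200, -100, 0, 100, 200, 300):
        print(f"{t:6} °C  ->  {pt100_resistance(t):8.2f} ohm")

For example, the sketch gives 138.51 ohms at 100 °C, which matches the value used in the alpha example earlier.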

 

 

Other Pt100 sensors with different temperature coefficients

Most sensors have been standardized, but there are different standards around the world. This is also the case with Pt100 sensors. Over time, there have been a few different standards specified. In most cases, there is only a relatively small difference in the temperature coefficient.

 

As a practical example, the standards that we have implemented in Beamex temperature calibrators include the following:

  • IEC 60751
  • DIN 43760
  • ASTM E 1137
  • JIS C1604-1989 alpha 3916, JIS C 1604-1997
  • SAMA RC21-4-1966
  • GOST 6651-84, GOST 6651-94
  • Minco Table 16-9
  • Edison curve #7

 

Make sure your measurement device supports your Pt100 sensor

The good thing about standard Pt100 probes is that each sensor should fulfill the specifications, so you can just plug it into your measurement device or calibrator and it will measure its own temperature as accurately as the specifications (sensor + measurement device) define. Sensors in the process should be interchangeable without calibration, at least for less critical measurements. Nevertheless, it is still good practice to check the sensor at some known temperature before use.

Anyhow, since the different standards have slightly different specifications for the Pt100 sensor, it is important that the device you use for measuring your Pt100 sensor supports the correct temperature coefficient. For example, if your measuring device supports only Alpha 385 and you are using a sensor with an Alpha 391, there will be some error in the measurement. Is that error significant? In this case (385 vs 391), the error would be roughly 1.5°C at 100°C. So I think it is significant. Of course, the smaller the difference between temperature coefficients, the smaller the error will be.

So make sure that your RTD measurement device supports the Pt100 probe you are using. Most often if the Pt100 has no indication of the temperature coefficient, it is a 385 sensor.

As a practical example, the Beamex MC6 calibrator & communicator supports the following Pt100 sensors (temperature coefficient in parenthesis) based on different standards:

  • Pt100 (375)
  • Pt100 (385)
  • Pt100 (389)
  • Pt100 (391)
  • Pt100 (3926)
  • Pt100 (3923)

 

Pt100 accuracy (tolerance) classes

Pt100 sensors are available in different accuracy classes, the most common being classes AA, A, B and C, as defined in the IEC 60751 standard. Standards define a sort of ideal Pt100 sensor for manufacturers to aim at. If it were possible to build the ideal sensor, tolerance classes would be irrelevant.

As Pt100 sensors cannot be adjusted to compensate for errors, you should buy a sensor with a suitable accuracy for your application. Sensor errors can be corrected in some measurement devices with certain coefficients, but more on that later.

The accuracy (tolerance) values of the different accuracy classes (IEC 60751:2022):

Accuracy class table 1

 

There are also so-called fraction tolerance classes, such as 1/3, 1/5 and 1/10. They were earlier standardized classes in, for example, DIN 43760:1980-10, which was withdrawn in 1987, but they were not defined as classes in the later IEC 60751 standard.

Anyhow, the IEC 60751:2022 standard defines these tolerance classes in section 5.2.3.3 (Marking of thermometers). The tolerance of these sensors is based on the accuracy class B sensor, but both the fixed part (0.3 °C) and the relative part of the error are divided by the given number (3, 5 or 10). So, the tolerance classes of these sensors are:

Pt100 tolerance fraction classes 2022

And of course, a sensor manufacturer can manufacture sensors with their own custom accuracy classes. 

The formulas can be difficult to compare, so in the below table the accuracy classes are calculated according to temperature (°C):

Pt100 all tolerance classes 2022

 

The graphic below shows the difference between these accuracy classes:

Pt100 tolerance classes graphics 2022
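If you want to play with these tolerances numerically, here is a minimal Python sketch. The class formulas used are the commonly published IEC 60751 ones (for example Class B: ±(0.30 + 0.005 × |t|) °C), and the fraction classes are derived from class B as described above; always check the standard or your sensor's datasheet for the authoritative values and their validity ranges:

    # Pt100 tolerance (°C) vs. temperature for the commonly published
    # IEC 60751 accuracy classes, plus the "fraction of class B" classes.
    def tolerance(t, klass="B"):
        """Return the tolerance in °C at temperature t (°C) for a given class."""
        formulas = {
            "AA": (0.10, 0.0017),
            "A":  (0.15, 0.0020),
            "B":  (0.30, 0.0050),
            "C":  (0.60, 0.0100),
        }
        if klass in formulas:
            fixed, relative = formulas[klass]
            return fixed + relative * abs(t)
        # Fraction classes: class B tolerance divided by 3, 5 or 10
        if klass in ("1/3 B", "1/5 B", "1/10 B"):
            divider = int(klass.split()[0].split("/")[1])
            return (0.30 + 0.0050 * abs(t)) / divider
        raise ValueError(f"Unknown class: {klass}")

    for klass in ("AA", "A", "B", "C", "1/10 B"):
        print(f"Class {klass:>6}: ±{tolerance(100, klass):.3f} °C at 100 °C")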

 

 

Coefficients

The accuracy classes are commonly used for industrial RTD sensors, but when it comes to the most accurate PRT reference sensors (SPRTs, Secondary Standards, etc.), those accuracy classes are not valid anymore. These sensors were made to be as good a thermometer as possible for the purpose, not to match any standardized curve. They are very accurate sensors with very good long-term stability and very low hysteresis, but each sensor is unique, so each sensor has a slightly different temperature/resistance relationship. These sensors should not be used without their own specific coefficients. You can find generic CvD coefficients for SPRTs, but using them will ruin the performance you have paid for. If you just plug a 100 ohm Secondary PRT sensor, like the Beamex RPRT, into a device measuring a standard Pt100 sensor, you may get a result that is several degrees wrong. In some cases that doesn’t necessarily matter, but in other cases it may be the difference between a medicine and a toxin.

To sum up: PRT reference sensors must always be used with proper coefficients.

 

As mentioned before, RTD sensors cannot be “adjusted” to measure correctly. Instead, the correction needs to be made in the device (for example the temperature calibrator) that is being used to measure the RTD sensor.

In order to find out the coefficients, the sensor first needs to be calibrated very accurately. The calibration provides the coefficients for the desired equation, which can be used to represent the sensor’s characteristic resistance/temperature relationship. Using the coefficients corrects the sensor measurement and ensures it measures accurately. There are several different equations and coefficient sets for converting the sensor’s resistance to temperature. These are probably the most widespread:

 

Callendar-van Dusen

  • In the late 19th century, Callendar introduced a simple quadratic equation that describes the resistance/temperature behavior of platinum. Later, van Dusen found that an additional coefficient was needed below zero. The result is known as the Callendar-van Dusen (CvD) equation. For alpha 385 sensors, it is often about as good as ITS-90, especially when the temperature range is not very wide. If your certificate states coefficients R0, A, B, C, they are coefficients for the IEC 60751 standard form of the CvD equation. Coefficient C is only used below 0 °C, so it may be absent if the sensor was not calibrated below 0 °C. The coefficients may also be R0, α, δ and β; these fit the historical form of the CvD equation, which is still in use. Although it is essentially the same equation, the written form and the coefficients are different, as the sketch below illustrates.
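If your certificate gives the historical coefficients (R0, α, δ, β) instead of the standard form (R0, A, B, C), the two forms can be converted into each other. Below is a minimal Python sketch of the commonly used conversion relations; the α, δ and β values are nominal 385-type figures for illustration only, not from any particular certificate:

    # Conversion from the historical Callendar-van Dusen coefficients
    # (alpha, delta, beta) to the IEC 60751 standard-form coefficients (A, B, C).
    # The values below are nominal 385-type figures, for illustration only.
    alpha = 3.8505e-3   # 1/°C
    delta = 1.4999      # dimensionless
    beta  = 0.10863     # dimensionless, used only below 0 °C

    A = alpha * (1 + delta / 100)   # 1/°C
    B = -alpha * delta * 1e-4       # 1/°C^2
    C = -alpha * beta * 1e-8        # 1/°C^4 (below 0 °C only)

    # For comparison, the commonly published nominal IEC 60751 values are
    # roughly A = 3.9083e-3, B = -5.775e-7, C = -4.183e-12.
    print(f"A = {A:.6e}, B = {B:.6e}, C = {C:.6e}")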

 

ITS-90

  • ITS-90 is a temperature scale, not a standard. The Callendar-van Dusen equation was the basis of the previous scales of 1927, 1948 and 1968, but ITS-90 brought significantly different mathematics. ITS-90 functions must be used when realizing the temperature scale using SPRTs, but many lower-alpha PRTs also benefit from it compared to CvD, especially when the temperature range is wide (covers hundreds of degrees). If your certificate states coefficients like RTPW or R(0.01), a4, b4, a7, b7, c7, they are coefficients for ITS-90 deviation functions. The ITS-90 document does not designate numerical notations for the coefficients or subranges. They are presented in NIST Technical Note 1265 "Guidelines for Realizing the International Temperature Scale of 1990" and widely adopted for use. The number of coefficients may vary and the subranges are numbered 1…11.
    • RTPW, R(0.01 °C) or R(273.16 K) is the sensor’s resistance at the triple point of water, 0.01 °C
    • a4 and b4 are coefficients below zero, may also be abz and bbz meaning “below zero”, or just a and b
    • a7, b7 and c7 are coefficients above zero, may also be aaz, baz and caz meaning “above zero”, or a, b and c

 

Steinhart-Hart

  • If your sensor is a thermistor, you may have Steinhart-Hart equation coefficients on the certificate. Thermistors are highly non-linear and the equation is logarithmic. The Steinhart-Hart equation has widely replaced the earlier Beta equation. Usually the coefficients are A, B and C, but there may also be a coefficient D or others, depending on the variant of the equation. The coefficients are usually published by manufacturers, but they can also be fitted. A small sketch of the equation in use follows below.
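To illustrate how such coefficients are used, below is a minimal Python sketch of the common three-coefficient Steinhart-Hart form. The A, B and C values are illustrative figures typical of a 10 kΩ NTC thermistor, not taken from any particular datasheet; always use the coefficients from your own sensor's certificate:

    import math

    # Steinhart-Hart equation: 1/T = A + B*ln(R) + C*ln(R)**3, with T in kelvin.
    # Coefficients below are illustrative values typical of a 10 kohm NTC.
    A = 1.129148e-3
    B = 2.341250e-4
    C = 8.767410e-8

    def thermistor_temperature(r_ohm):
        """Convert a thermistor resistance (ohms) to temperature (°C)."""
        ln_r = math.log(r_ohm)
        t_kelvin = 1.0 / (A + B * ln_r + C * ln_r**3)
        return t_kelvin - 273.15

    print(f"{thermistor_temperature(10000):.2f} °C")  # about 25 °C for these values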

 

Finding out the sensor coefficients 

When a Pt100 sensor is sent to a laboratory for calibration and fitting, the calibration points must be selected properly. A 0 °C or 0.01 °C point is always needed. The value itself is needed for the fitting, but typically an ice point (0 °C) or a triple point of water cell (0.01 °C) is also used to monitor the stability of the sensor, and it is measured several times during the calibration. The minimum number of calibration points is the same as the number of coefficients to be fitted. For example, to fit the ITS-90 coefficients a4 and b4 below zero, at least two known negative calibration points are needed to solve the two unknown coefficients. If the sensor’s behavior is well known to the laboratory, two points might be enough in this case. Nevertheless, it is good practice to measure more points than absolutely necessary, because otherwise the certificate cannot tell you how the sensor behaves between the calibration points. For example, a CvD fit over a wide temperature range may look good if you only have two or three calibration points above zero, but there may be a systematic residual error of several hundredths of a degree between the calibration points that you will not see at all. This also explains why you may find different calibration uncertainties for CvD and ITS-90 fittings of the same sensor with exactly the same calibration points: the uncertainties of the measured points are no different, but the residual errors of the different fittings are usually added to the total uncertainty. The sketch below illustrates the fitting idea.
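To make the fitting idea more concrete, below is a minimal Python sketch that fits the above-zero Callendar-van Dusen coefficients to a handful of calibration points with a least-squares fit. The measured values are made up for illustration, and a real laboratory fit involves much more care (weighting, uncertainty evaluation, residual analysis):

    import numpy as np

    # Least-squares fit of the above-zero Callendar-van Dusen coefficients
    # (R0, A, B) from calibration points. The measured values are made up.
    t_meas = np.array([0.0, 50.0, 100.0, 150.0, 200.0])          # °C
    r_meas = np.array([99.99, 119.39, 138.49, 157.31, 175.84])   # ohms (example data)

    # Model: R(t) = R0 * (1 + A*t + B*t^2)  ->  linear in R0, R0*A, R0*B
    X = np.column_stack([np.ones_like(t_meas), t_meas, t_meas**2])
    coef, *_ = np.linalg.lstsq(X, r_meas, rcond=None)
    R0, A, B = coef[0], coef[1] / coef[0], coef[2] / coef[0]

    print(f"R0 = {R0:.4f} ohm, A = {A:.6e} /°C, B = {B:.6e} /°C^2")

    # Residuals show how well the fit represents the sensor at these points only;
    # behaviour between the points is not visible unless more points are measured.
    print("residuals (ohm):", r_meas - X @ coef)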

 

Download the free white paper

Download our free white paper on Pt100 temperature sensors by clicking the picture below:

Download the white paper

   

Other temperature-related blog posts

If you are interested in temperature and temperature calibration, you may find these other blog posts interesting:

 

Beamex temperature calibration products

Please check out the new Beamex MC6-T temperature calibrator. A perfect tool for Pt100 sensor calibration and much, much more. Click the below picture to learn more:

Beamex MC6-T temperature calibrator

 

Please check what other temperature calibration products Beamex offers, by clicking the below button:

Beamex Temperature Calibration Products

 

Temperature Calibration eLearning

Free eLearning course on industrial temperature calibration.

Master temperature calibration with this free comprehensive eLearning course from Beamex. Deepen your knowledge, pass the quiz, and earn your certificate!

Read more and enroll >

 

Temperature Sensor Calculator

A free tool to easily convert between temperature and electrical signals for thermocouples and RTD sensors. 

https://www.beamex.com/resources/temperature-sensor-calculator/

 

And finally, thanks Toni!

And finally, a special thanks to Mr. Toni Alatalo, who is the head of our accredited temperature calibration laboratory at the Beamex factory. Toni provided a lot of help and detailed information for this blog post.

 

Finally finally, please subscribe!

If you like these articles, please subscribe to this blog by entering your email address to the "Subscribe" box on the upper right-hand side of the page. You will be notified by email whenever new articles are available.

 

 

Topics: Temperature calibration

How to calibrate pressure instruments [Webinar]

Posted by Heikki Laurila on Mar 20, 2018

How to calibrate pressure instruments - Beamex webinar

In this blog post, I want to share with you a two-part webinar series titled “How to calibrate pressure instruments” we did together with our partner ISA (International Society of Automation).

More information on ISA can be found at https://www.isa.org/

These webinars feature the following experienced speakers:

  • Hunter Vegas has over 30 years of experience in various automation projects. He currently works for the company Wunderlich-Malec.
  • Ned Espy has worked over 20 years with calibration management at Beamex, Inc. and also has calibration experience from his previous jobs.
  • Roy Tomalino has worked 15 years at Beamex, Inc. teaching calibration management and also has prior calibration experience

To make it easier for you to see if there is anything interesting for you in these webinars, I have put together a short table of contents showing the main topics and their start times, so you can quickly jump to the points that interest you. In addition to these main topics, there are many other useful topics discussed in the webinars.

Please take a look at the tables of contents and the links to the webinar recordings below:

How to calibrate pressure instruments - Part 1 (1:35:40)

 

Time       Subject
0:00       Introduction
1:45       Presentation of the webinar speakers
4:15       Webinar agenda
7:30       Why calibrate?
9:25       Are you under pressure? Investigating pressure.
14:40      Tank shape vs. pressure.
16:05      Liquid vs. vapor pressure
17:00      Pressure types and measurements
22:50      Altitude effects on ambient pressure
24:50      Pressure scales (types) – absolute, gauge, vacuum
31:00      Demonstration - barometric pressure
33:10      Questions & Answers
36:00      Differential pressure. Orifice, square rooting.
37:30      Demonstration – calibration and trimming of a pressure transmitter
56:00      Measuring pressure – Air, steam.
59:50      Demonstration – elevated zero calibration demo
1:15:30    Questions & Answers

 

Watch the webinar (Part 1) by clicking the picture below:   

How to calibrate pressure instruments, Part 1 - Beamex webinar

 

How to calibrate pressure instruments - Part 2 (1:37:37)

 

Time       Subject
0:00       Introduction
4:15       Presentation of the webinar speakers
7:30       Webinar agenda
8:50       DP calibration of storage tanks. With blanket, comp. leg, bubbler, with seals.
12:45      Calibration of a steam drum level
23:30      Demonstration – calibration of a pressure/level transmitter
30:40      Questions & answers
41:40      DP seal assemblies, assembly components, seal internal, fill fluid issues, temperature expansion, capillary issues
53:00      DP single seal capillary installations. Pad/Seal, Dual seal
55:30      DP seal calibration steps
1:04:20    Demonstration – Calibration and trim of a pressure transmitter with capillary
1:19:00    Questions & answers

 

Watch the webinar (Part 2) by clicking the picture below:   

How to calibrate pressure instruments, Part 2 - Beamex webinar

 

Other "Pressure" related blog posts

If you are interested in pressure and pressure calibration, you may also find these blog posts interesting:

 

Other "Webinar" blog posts

   

Topics: Pressure calibration, Webinar

Common Data Integrity Pitfalls in Calibration Processes

Posted by Heikki Laurila on Feb 27, 2018

 Common Data Integrity Pitfalls in Calibration Processes - Beamex blog post

 

In this blog post, I will discuss the most common data integrity pitfalls in the calibration process.

Whether the calibration process is fully paper-based, partially utilizing documenting calibrators, or even fully paperless, there are places in calibration processes where data integrity is most commonly jeopardized and needs special attention.

Data Integrity in brief

In one sentence: Data integrity is the maintenance of, and the assurance of, the accuracy and consistency of data over its entire life-cycle.

Although data integrity is a pretty old concept, it has recently become more topical due to new regulatory guidelines (from the FDA, MHRA, and EMA), and auditors have started to pay more attention to it.

ALCOA and ALCOA Plus are the key acronyms of Data Integrity. Learn more about these acronyms and other general information on data integrity in the earlier blog post titled Data Integrity in Calibration Processes.

 

What are the most common data integrity pitfalls?

Let’s look at a normal calibration process in a pharmaceutical process plant and list some of the most common places for data integrity issues. It does not matter if the calibration process is fully paper-based or paperless; similar issues need to be taken care of either way, by a manual process or an automated system. Data integrity issues are also similar in other industries, not only in pharmaceuticals.

Some important aspects to consider include, but are not limited to, the following:

  • User rights and passwords for any system are one obvious consideration. Every user needs to have the proper user rights and needs to be authenticated by the system for access. 
    Some FDA warning letters indicate that there have been shared passwords, which is, of course, not acceptable.
  • An electronic system should have an audit trail that keeps track of any modifications and records who did what, with an electronic signature required for any changes.
  • In any kind of system, it should not be possible to delete, undo, redo or tamper with a single calibration point or the full calibration cycle without a trace. It may be tempting to delete a calibration point, or the whole result, if it was not good enough and try again, but the original data of any calibration event needs to be recorded as it was originally witnessed. Of course, several calibration repeats can be done if that is specified in the SOP, but none of the points/repeats representing the original data should be deleted. In a paper-based system, it may be difficult to fully control that, but some electronic calibration systems also allow the user to delete a bad calibration result and repeat it until a satisfying result is achieved.
  • It must not be possible to edit, modify or tamper with any original calibration data after it has been recorded, no matter what kind of system it is.
  • It must not be possible to backdate any actions, either accidentally or by falsifying the date and time. It may be tempting to backdate a calibration in case you forgot to do it in time. In a paper-based system, it is difficult to control that the correct date is written, but some electronic systems also lack proper control of the date/time settings. Certainly, some users may have the rights to backdate a calibration date, for example when a standard has been sent out for calibration and the calibration date is updated when the unit is received.
  • A manual paper-based system always has issues, such as the risk of typing errors, misinterpretation of poor handwriting, loss of papers, etc. Often you look at a reading of a measuring device, write that reading on paper, and then later type that result from paper into an electronic system. There are many places for typing errors in that kind of system, and it is also time-consuming to do multiple manual data inputs. Modern electronic documenting calibration equipment saves the measurement results into its memory and transfers the results automatically into a computerized system for archiving, without any manual steps in between.
  • All calibration records need to be archived and sometimes you need to analyze the older results and compare them with the new results. In case of a paper archive, this can be a really big job. With an electronic system, it is typically very easy to search/find the older records.

 

Outsourced calibration

Often some part of the calibration work is outsourced to an external company. When calibration is outsourced, you typically don’t have the same full control of the calibration process as you have with internal calibrations, so it is a more uncontrolled process. In terms of data integrity, any part of the calibration work carried out by external resources should naturally follow the same principles as the internal calibration processes. If you send a process instrument out to an external calibration laboratory and get it back with a calibration certificate, you don’t have the same knowledge of and control over how the calibration was actually performed. Many calibration service companies don’t follow the same rules, regulations and good practices as, for example, pharmaceutical companies do. Certainly, when pharmaceutical companies outsource calibration work, they normally audit the service companies. In some cases, external service companies can access the calibration system/software of a pharmaceutical company and work in the same way as internal workers.

 

Beamex calibration solution

For years, we have worked with multiple pharmaceutical customers and together we have recently enhanced our calibration solution to fulfill the updated regulatory requirements and the requirements and wishes of the customers. Already in the past, the Beamex calibration solution fulfilled the requirements of 21 CFR Part 11 and relevant regulations. 

Now with our latest enhancements, the Beamex calibration solution further lowers the risk of ALCOA violations by identifying users on off-line mobile devices with their electronic signature and secures off-line data against potential data tampering. These mobile off-line devices include our MC6 and MC6-Ex portable documenting multifunction calibrators, and our bMobile 2 application for tablets and mobile phones. With the latest version of the Beamex calibration solution, including the Beamex CMX calibration management software, you can safely use these mobile devices to comply with the updated regulation in the future.

Please visit Beamex web site or contact us for more information on our offering.

 

To download the related white paper "Common Data Integrity Pitfalls in Calibration Processes", please click the picture below:

New Call-to-action

 

 

Topics: Data Integrity

How often should instruments be calibrated? [Update]

Posted by Heikki Laurila on Jan 30, 2018

How often should instruments be calibrated [update] - Beamex blog

How often should instruments be calibrated? That is a question we get asked often. It would be nice to give a simple answer, but unfortunately that is not possible. Instead, there are several considerations that affect the correct calibration interval. In this post, I will discuss these considerations.

In one of the very first posts in this Beamex blog, I wrote about “How often should instruments be calibrated?” Back then, the idea was to keep the posts very short, and I feel I had to leave out many important things from that post. Therefore, I decided to make an updated version of that post.

The question of how to determine the correct calibration interval remains one of our most frequently asked questions. In this post, I want to discuss both process instruments and reference standards (calibrators).

Of course, it would be nice to give one answer for the calibration interval that would be valid for all situations, but unfortunately there is no magic answer. Sure, on many occasions you hear that instruments should be calibrated once a year, and while that can be the correct answer in some situations, it may not fit all purposes. There is no straight answer to this question. Instead, there are several considerations that affect the correct calibration period.

Let’s take a look at these considerations:

 

Process tolerance need vs. instrument accuracy


To start with, there is one thing that often bothers me. Let’s say in a process plant you have purchased a number of similar process instruments/transmitters. Sure, it makes sense to standardize the models. Then, you install these transmitters in all the different locations that need this quantity measured. The transmitter has an accuracy specification and it may also have a long-term stability specification from the manufacturer. Then you use the manufacturer’s transmitter tolerance as the calibration tolerance no matter where the transmitter is installed. This is a bad practice. The tolerance requirements of the process should always be taken into account!

 

Measurement criticality


The criticality of measurement should be considered when determining the calibration period.
In some cases, the measurement instrument may be calibrated prior to an important measurement. It may also be calibrated after that measurement, to assure that it has remained accurate throughout the measurement.

Some locations are non-critical and do not require a measurement as accurate as the transmitter’s specifications, so these locations can be calibrated less often and the tolerance limit for them can be wider than the transmitter’s specification.

But it also works the other way around; some locations are very critical for the process and require very accurate measurements. If the same transmitters are installed in these critical locations, they should be calibrated more often and their calibration acceptance tolerance should be kept tight enough for the critical location. The calibration tolerance can even be tighter than the transmitter's specifications, but then you need to calibrate the transmitter often enough and verify that it remains within these tight tolerance limits.

Of course, you could also buy transmitters of different accuracy levels for different process locations, but that is not very convenient or practical. Anyhow, the people in the plant who best know the accuracy requirements of the different locations in the process should be consulted to make the right decision.

The tolerance of measurements should be based on process requirements, not on the specifications of the transmitter that is installed.

 

How accurate is accurate enough?


In the previous chapter, we discussed process instruments. The same consideration is valid also for the reference standards or calibrators.

This also works both ways; before you buy any calibrator or reference standard, you should make sure that it is accurate enough for all your most critical calibration needs. Not only for today but also in the years to come. There is no point in purchasing a calibrator that won’t be accurate enough next year or that does not suit the job; it’s just money wasted.

On the other hand, you don’t always need to buy the most accurate device in the universe. Depending on your accuracy needs, the reference standard needs to be accurate enough, but not overkill. Metrologically, of course, it is not harmful to buy a reference standard that is too accurate, but it may be on the expensive side.

The usability of the standard is one thing to consider. Also, some references may have multiple quantities, while others have only one.

 

Manufacturer’s recommendation


For many instruments, the manufacturers have a recommended calibration interval. This is especially the case for reference standards and calibrators. Often, manufacturers know best how their equipment behaves and drifts over time. Also, manufacturers have often specified a typical long-term stability for a given time, such as one year.

So, the manufacturer’s recommendation is an easy and good starting point when deciding the initial calibration period. Of course, over time, you should follow the stability of the device and adjust the calibration interval accordingly.

Also, depending on how good the accuracy specification of the reference standard is, you may alter the manufacturer’s recommendation. I mean that if the reference standard has very good accuracy compared to your needs, you may calibrate it less often. Even if it fails to stay within its specifications, it may not be that critical to you. Also, the other way around – if the reference standard is on the limit of being accurate enough for you, you may want to calibrate it more often than the manufacturer recommends, as you may want to keep it within a tighter tolerance than the manufacturer's specifications.

 

Stability history


The stability history of any measurement device is very precious information. You should always follow the stability of your measurement devices. In case the device needs to be adjusted during a recalibration, you should always save the calibration results before (As Found) and after (As Left) adjustment. If you only adjust the instrument and make a new calibration certificate, it will look like the instrument was very stable and there was no drift, although that is not the truth.

If you send your instrument out for recalibration, make sure you get the calibration results before and after adjustment, if an adjustment was made. Also, make sure you know if it was adjusted.

After you acquire a longer stability history of the measurement device, you may start making changes to the calibration period. If the instrument drifts too much and often fails to meet the tolerance in recalibration, then you naturally need to make the calibration period shorter. Also, if it clearly meets the tolerance limit in every recalibration, without any need for adjustment, you can consider making the calibration period longer.

You should have an accepted, written procedure in your quality system for changing calibration periods, and also defined responsibilities.

Typically, if the stability of an instrument looks good in the first recalibration, you should still wait for a few recalibrations before making the period longer. If you plan on making the period longer, the costs of a failed recalibration should also be considered. In some industries (like pharmaceuticals) or with some critical measurements, the costs of a failed recalibration are so high that it is much cheaper to calibrate “too often.”

On the other hand, if a recalibration fails, you should shorten the calibration period immediately. Naturally, that also depends on how much it fails and how critical it is.

If you use the Beamex CMX Calibration Manager software, it will generate history trend graphics for you automatically at the push of a button.

 

Previous experience


In the previous chapter, the stability history was discussed as an important consideration. Sometimes you already have previous experience with the stability of the instrument type that you need to set the calibration period for. Often the same types of instruments have similarities in their stability and long-term drift. So, the past experience of similar measuring instruments should be taken into account.

Similar types of instruments can have similar calibration periods, but this is not always true, as different measurement locations have different criticality, different needs for accuracy and may also have different environmental conditions.



Regulatory requirements, quality system


For certain industry measurements, there can be regulatory requirements, based on a standard or regulation, that stipulate the accepted length of the calibration period. It is difficult to argue with that one.

I've heard customers say that it is difficult to change the calibration period as it is written in their quality system. It should, of course, be possible for you to change your quality system if needed.

 

The cost of an out-of-tolerance (fail) calibration


A proper risk analysis should be performed when determining the calibration period of instruments. 
One thing to consider when deciding on the calibration period of any instrument is the cost and consequences if the calibration fails. The goal is to find a good balance between the cost of the calibration program and the cost of not calibrating enough. You should ask yourself, “What will happen if this instrument fails the recalibration?”

If it is the case of a non-critical application and a failed recalibration is not that important, then it is ok that the calibration fails from time to time. Sure, you should still adjust the instrument during the calibration so that it measures correctly and has more room for drift before the next recalibration.

If it is a critical measurement/instrument/application, then the consequences of a failure in recalibration can be really large. In the worst case, it may result in a warning letter from a regulatory body (like the FDA in the pharmaceutical industry), loss of license to produce a product, negative reputation, loss of customer confidence, physical injury to persons on the job or to those who receive a bad end product and so on. Also, one really alarming consequence is if you need to recall delivered products from the market because of an error found in calibration. For many industries, this kind of product recall is obviously a very big issue.

As an example, in the heat treatment industry you cannot easily see whether the final product has been properly heat treated, but a fault in heat treatment can have a dramatic effect on the properties of the metal parts, which typically go to the aerospace or automotive industries. An erroneous heat treatment can cause very severe consequences.

Certainly, pharmaceutical and food industries will also face huge consequences if poor quality products are delivered because of poor calibration or lack of calibration.

 

Other aspects that affect the calibration period


There are also many other aspects that will influence the calibration period, such as:

  • The workload of the instrument: if the instrument is used a lot, it should be calibrated more often than one that is being used very seldom.
  • Environmental conditions: an instrument used in extreme environmental conditions should be calibrated more often than one used in stable conditions.
  • Transportation: if an instrument is transported frequently, you should consider calibrating it more often.
  • Accidental drop/shock: if you drop or otherwise shock an instrument, it may be wise to have it calibrated afterward.
  • Intermediate checks: in some cases, the instrument can be checked by comparing it against another instrument, or against some internal reference. For example, for temperature sensors, an ice bath is a way to make a relatively accurate one-point check. These kinds of intermediate checks between the actual full recalibrations add certainty to the measurement and can be used to extend the calibration period.

 

Traceability and calibration uncertainty

Finally, a couple of vital things you should remember with any calibration are traceability and uncertainty.

In short, traceability means that all your calibrations (measurement instruments) must have valid traceability to the relevant national standards.

Whenever you make a measurement, you should be aware of the uncertainty related to that measurement.

If the traceability and uncertainty are not considered, the measurement does not have much value.

 

For more detailed information on traceability and uncertainty, please take a look at the blog posts below:

More information on metrological traceability in calibration:

Metrological Traceability in Calibration – Are you traceable?


More information on calibration uncertainty:

Calibration uncertainty for dummies - Part 1

Also, please check out the article What is calibration on our web site.

 

Subscribe 

PS: If you like these articles, please subscribe to this blog by entering your email address to the "Subscribe" box on the upper right-hand side. You will be notified when new articles are available, no spamming!

 

 

Topics: calibration period

Ohm’s law – what it is and what an instrument tech should know about it

Posted by Heikki Laurila on Dec 20, 2017

Ohms law - Beamex blog post  

In this post, I would like to talk to you about Ohm’s law. Why? Because it is helpful in many practical everyday situations, especially if you are an instrument technician. We often get questions that can be answered with Ohm’s law.

Update November 1, 2018: Pictures have been replaced to show engineering units instead of quantities.

Although it is called “Ohm’s law” – don’t worry, this is not going to be any boring legal stuff… ;-)

First, I would like to talk a little about the theoretical side of it, and then take some practical instrumentation examples where you find this useful.

So, let’s take a look at this law...

 

Background

Let’s start with the compulsory facts:

Back in 1827, the German physicist Georg Ohm published this law. He discovered that when an electric current goes through a resistor, the current is proportional to the voltage drop over the resistor and inversely proportional to the resistance of the resistor. The relationship between current, resistance and voltage is Ohm’s law.

Ohm’s law is often presented as a triangle (the triangle below is shown with engineering units):

Ohms law triangle - Beamex blog post

 From the triangle, you can calculate each component and you get these three formulas:

  • Resistance (Ohms) = Voltage (V) / Current (A)
  • Current = Voltage / Resistance
  • Voltage = Current * Resistance

 

Note: As the symbol for the quantity of voltage in the examples below, I will be using the symbol "U", according to the International SI System. I know that different symbols are sometimes used for voltage in different regions, such as E or V. The purpose of this post is not to try to standardize symbols, but to give practical education on the use of Ohm's law. So please don't get offended if you don't like the "U".

Please note that the mA current must be converted to Amps for the calculation.

Please also note, that for keeping the formulas simple and easy to read, I have not always used the mathematically correct number of significant figures/numbers. This post is anyhow more for technical people, not for mathematicians...

 

Simplified example

Let’s look at the most simplified possible circuit:

Ohm’s law – what it is and what an instrument tech should know about it. Beamex blog post.

 

In the above example, we have a 24 VDC supply voltage and we have connected a 1200 ohm resistance to it. There is a 20 mA (0.02 A) current going through the circuit.

If we connect a 1200 ohm resistor across the 24 V supply and want to know what current flows in the circuit, we can calculate it easily with Ohm’s law:

I = U / R = 24 V / 1200 ohms = 0.02 A (= 20 mA)

If we know the voltage is 24V and we want a current of 20 mA, we can calculate what resistor is needed:

R = U / I = 24 V / 0.02 A = 1200 ohms

Or, if we have a 1200 ohms resistor and we want to get 20 mA current, how much voltage do we need to apply:

U = I * R = 1200 ohm * 0.02 A = 24 V

Where:

U = Voltage [V]

I = Current [A]

R = Resistance [Ohm]

 

Consequently, if we have the 24 V loop supply and we want to get 4 mA current, we need to add a bigger resistance: 

R = U / I = 24 / 0.004 A = 6000 ohms.

So, we need to add a 6000 ohms (6 kohms) resistor to get a 4 mA current.
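If you want to double-check calculations like these programmatically, here is a minimal Python sketch of the three forms of Ohm's law, checked against the same numbers used above:

    # Ohm's law in its three forms, checked against the numbers above
    def current(u_volts, r_ohms):      # I = U / R
        return u_volts / r_ohms

    def resistance(u_volts, i_amps):   # R = U / I
        return u_volts / i_amps

    def voltage(i_amps, r_ohms):       # U = I * R
        return i_amps * r_ohms

    print(current(24, 1200) * 1000, "mA")   # 20.0 mA through 1200 ohm at 24 V
    print(resistance(24, 0.004), "ohm")     # 6000 ohm needed for 4 mA at 24 V
    print(voltage(0.02, 1200), "V")         # 24 V needed for 20 mA through 1200 ohm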

 

Practical examples

Example 1 - A 250 ohms HART resistance

We have a normal circuit where the transmitter is supplied with a 24 V supply, and we have a 250 ohms resistor in series with the transmitter in order to use the HART communication:

Ohm’s law – what it is and what an instrument tech should know about it. Beamex blog post.

 

As the current goes through the 250 ohms resistor, there is a voltage drop coming over that resistor, so some voltage is lost there. How much supply voltage comes to the transmitter when the current is 20 mA?

When the current is 20 mA we can calculate that over the 250 ohms resistor there will be a voltage drop of:

U =  I * R  =  0.02 A * 250 ohms = 5 V

This means there is a voltage drop of 5 volts over the 250 ohms resistor, so we have 19 volts left to the transmitter, which is of course enough for the transmitter to work. But if we had a much lower loop supply voltage, say 17 volts, to start with, there would be only 12 volts left to the transmitter, which is on the limit for it to work.

 

Ex 2 - Measuring transmitter’s current with a resistor in series

If you don’t want to break the loop or open the transmitter’s cover to measure the current, you can install a precision resistor in series with the transmitter and then measure the voltage drop over the resistor to calculate the current.

Ohm’s law – what it is and what an instrument tech should know about it. Beamex blog post.

 

The voltage drop over the resistor depends on the resistance value and on the current going through it. For example, if you install a 100 ohms resistor in series with the transmitter, the voltage drop over it will be:

At 4 mA => 0.004 A * 100 ohms = 0.4 V

At 20 mA => 0.02 A * 100 ohms = 2.0 V

Of course, the resistance needs to be very accurate and stable, because any error in the resistance value will give a similar error in the calculated current.

The bigger the resistance, the bigger the voltage drop you get. It is good to remember that if the resistance is very big, you will lose a lot of supply voltage over the resistor.

 

Ex 3 - mA meter’s resistance with transmitter’s test diode connection

This is a topic I discussed in an earlier blog. In that example, understanding of the Ohm’s law was also needed to understand the issue.  You can find that blog post in the below link:

Measuring current using a transmitter’s test connection – don’t make this mistake!

 

Ex 4 - Supply for high resistance circuit

You may have a circuit where the instrument has a high internal resistance. Let’s say an old I/P converter that has 800 ohms resistance. You need to generate a 4 to 20 mA signal to control the converter. How much supply voltage would you need to do that?

Well, in order to generate a current of 20 mA over that 800 ohms circuit, you will need:

U = I * R = 0.02 A * 800 ohms = 16 Volts.

So, you will need a loop supply that has a voltage of at least 16 volts.

 

Ex 5 - Too much resistance in the supply line

If there is too much resistance in the supply line to the transmitter, the loop supply at the transmitter can be on the edge of being too small. It may happen that the transmitter works perfectly with a lower mA signal, but when it needs to deliver a high current (for example over 18 mA), the voltage drops too low and the transmitter switches itself off. This is simply because the voltage drop in the connection resistances grows as the current grows. With a small current the voltage may be acceptable and the transmitter gets enough supply voltage, but with a higher current there is too much voltage drop in the connections, the transmitter does not get a high enough voltage, and it switches off.

When the transmitter goes off, the current drops, the supply voltage jumps back up again, and the transmitter starts to work normally again. These kinds of intermittent faults are very difficult to find.
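To see how this behaves numerically, below is a minimal Python sketch. The loop resistance and the minimum operating voltage of the transmitter are made-up example values; check your transmitter's datasheet for the real minimum lift-off voltage:

    # Transmitter terminal voltage vs. loop current with extra loop resistance.
    # The loop resistance and minimum operating voltage are made-up examples.
    supply_v = 24.0        # loop supply voltage
    loop_resistance = 650  # ohms: wiring + HART resistor + terminals (example)
    min_operating_v = 12.0 # example minimum voltage the transmitter needs

    for i_ma in (4, 12, 18, 20):
        v_transmitter = supply_v - (i_ma / 1000) * loop_resistance
        status = "OK" if v_transmitter >= min_operating_v else "too low -> may switch off"
        print(f"{i_ma:>2} mA: {v_transmitter:5.2f} V at the transmitter ({status})")

With these example values the transmitter still gets enough voltage at 4 to 18 mA but drops below the assumed minimum at 20 mA, which is exactly the kind of intermittent fault described above.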

 

Ex 6 - mA meter / Volt meter

It is also good to remember that in practice, a mA meter’s internal resistance is not zero ohms, but it has a certain internal resistance (a few ohms or tens of ohms). So, there will be some voltage drop over the mA meter in practice.

Also, a voltage meter does not have infinite resistance; it has a certain internal resistance (megaohms). These resistances may cause some unwanted effects when you make your measurements. The voltage meter will put some load on the measured circuit, although this is an issue only in certain sensitive circuits/applications. It is especially important when you are measuring a low voltage (tens or hundreds of millivolts) signal in a high resistance circuit and you have a high accuracy requirement (± a few microvolts). If the voltage meter has too small an internal resistance, the measured voltage will drop as soon as you connect the meter, so you don't get accurate results. In some cases, connecting a voltage meter with too low an internal resistance may even cause the circuit to trip as soon as you connect the meter.
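To put some illustrative numbers on this loading effect, below is a minimal Python sketch: the meter's input resistance forms a simple voltage divider with the source resistance of the measured circuit. All values are made up for illustration:

    # Loading effect of a voltage meter: the meter's input resistance forms a
    # voltage divider with the source resistance of the measured circuit.
    # All values below are illustrative.
    true_voltage_mv = 100.0      # true open-circuit voltage of the signal (mV)
    source_resistance = 10_000   # ohms, output resistance of the measured circuit

    for meter_resistance in (1e6, 10e6, 1e9):   # 1 Mohm, 10 Mohm, 1 Gohm input
        measured = true_voltage_mv * meter_resistance / (meter_resistance + source_resistance)
        error_uv = (measured - true_voltage_mv) * 1000   # error in microvolts
        print(f"Meter input {meter_resistance:>10.0f} ohm: "
              f"reads {measured:8.4f} mV (error {error_uv:7.2f} uV)")

With these example numbers, a 1 Mohm meter reads almost a millivolt low, while a 1 Gohm meter is off by only about a microvolt, which is why high-impedance inputs matter for sensitive low-voltage measurements.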

 

Conclusion

Ohm’s law is pretty simple and easy to understand. It has many applications if you work with electrical circuits. It is also very useful in the instrumentation world, where you work with loop supplies, current signals and resistances.

I hope this post was easy and practical enough to give you some useful tips for your work.

 

Download the related white paper by clicking the below picture:

New Call-to-action

 

Thanks! 

 

Topics: Ohm's law

How to avoid common mistakes in field calibration [Webinar]

Posted by Heikki Laurila on Nov 23, 2017

How to avoid the most common mistakes in field calibration - Beamex webinar

This post includes the recordings of the two-part webinar series titled “How to avoid the most common mistakes in field calibration” we recently did together with our partner ISA (International Society of Automation).

ISA develops standards, certifies industry professionals, provides education and training, publishes books and technical articles, and hosts conferences and exhibitions for automation professionals. More information on ISA can be found at https://www.isa.org/

The webinar speakers have a lot of experience in automation and calibration:

  • Hunter Vegas has over 30 years of experience in various automation projects. He currently works for the company Wunderlich-Malec.
  • Ned Espy has worked over 20 years with calibration management at Beamex, Inc. and also has calibration experience from his previous jobs.
  • Roy Tomalino has worked 15 years at Beamex, Inc. teaching calibration management and also has prior  calibration experience.

To make it easier for you to find out what was discussed in the webinars, I have put together a short table of contents with the starting time of each topic. This makes it easier for you to see if there are interesting sections and to jump straight to them. Please note that this table of contents does not list everything discussed, so I recommend watching the videos all the way through. The videos include a lot of practical issues and considerations.

Please see the contents and the links to the webinar recordings below:  

How to avoid the most common mistakes in field calibration – Part 1 (1:30:35) 

 

Time       Subject
0:00       Introduction to the webinar
1:55       Harvard calibration event video
5:00       Introduction of the webinar presenters
7:10       Webinar agenda
8:08       Introduction/background. Measurement in practice. Traceability & Test Ratios (TUR)
12:00      Poll Question - Do you need to connect in series for a field mA measurement?
13:20      mA measurement - No trips allowed!
15:20      Using transmitter’s test diode connection and related concerns
18:30      Poll Question - When hooking up a mA meter in series, do you hook it upstream or downstream of the transmitter?
19:10      Explanation of the poll question. Downstream avoids fuse blows. Ground leaks in transmitter.
22:15      Where does the 250 ohms resistor go and what does it do? How to connect a HART communicator.
25:30      Live demo. Calibration and trim of a HART temperature transmitter.
31:00      Questions & Answers
49:45      Absolute pressure transmitter issues.
54:00      DP calibration of storage tanks – Bubbler.
56:05      Calibration of a steam drum level
59:40      Resistance measurement; 2, 3 or 4 wire connection.
1:04:00    Poll Question – How to test a thermocouple & transmitter loop?
1:06:40    Loop calibration basics
1:13:30    Live demo. Pressure transmitter calibration – calibrate mA output and control room display.
1:24:00    Questions & Answers

 

Watch the webinar (Part 1) by clicking the picture below:           

How to avoid the most common mistakes in field calibration, Part 1 - Beamex webinar

 

How to avoid the most common mistakes in field calibration – Part 2 (1:24:04) 

 

Time       Subject
0:00       Introduction to the webinar
1:40       Introduction of the webinar presenters
3:55       Webinar agenda
4:55       Setting field parameters, common errors.
6:30       Poll Question – How difficult are capillary seal systems for you?
7:30       DP seal assemblies. Seal fluid, temperature and capillary issues. Single/dual pad seal installation.
13:30      Live demo – Capillary seal transmitter issues.
15:00      Poll Question – What is the most common error tolerance in your plant?
16:30      Drivers for calibration. Testing parameters. Optimum testing interval.
27:20      Live Demo – Calibrating a square rooting DP transmitter, tolerance consideration.
32:50      Questions & Answers
44:45      Poll Question – What is the most common method for testing and calibrating valves in your plant?
45:40      Control valves. Valve types. Positioner. Linearity. Valve testing.
53:00      Testing and “Calibrating” valves. Best practice.
59:00      Review - Optimum testing interval. Calibration trend analysis.
1:02:40    ISA recommended practice RP105 – Management of a calibration program for industrial automation and control systems.
1:05:20    Live demo – Control valve test.
1:10:00    Questions & Answers

 

Watch the webinar (Part 2) by clicking the picture below:           

How to avoid the most common mistakes in field calibration, Part 2 - Beamex webinar

 

 

Topics: Webinar, Field calibration

Calibration in a hazardous area

Posted by Heikki Laurila on Oct 31, 2017

Calibration in hazardous area - Beamex blog post

This post discusses calibration in hazardous areas in the process industry. 

There are many things you need to be aware of before you can go to a hazardous (Ex) area to perform calibrations. There are different levels of hazardous areas and also different levels of calibration equipment.

First we look at some fundamental considerations, theory and history. Then we look at different techniques to make calibration equipment suitable for an Ex area.

The downloadable white paper discusses these topics in more detail:

Calibration in hazardous area - Beamex blog post

 

What is a hazardous area?

A hazardous area is an area (indoors or outdoors) that contains, or may contain, flammable substances. The flammable substance may be a liquid, gas, vapor or dust. Depending on the hazardous area classification, the area may contain a flammable substance all the time, for a certain proportion of the time, or only in specific situations, such as during shutdowns or accidents.


Explosion Triangle

In such a hazardous area, an explosion or fire is possible if all three conditions of the “Explosion Triangle” (see picture below) are met. These three conditions are fuel (a flammable substance), source of ignition (or heat) and oxygen (air).

The situation is often presented as a triangle; hence the name Explosion Triangle.

 Explosion triangle

 

Eliminating one element from the explosion triangle

Keeping in mind the Explosion Triangle, we can conclude that one or more of the three elements must be eliminated to make the area safe. Many times, in an industrial application, eliminating the flammable substance is not possible (you can’t eliminate fuel in a refinery), and therefore the oxygen (air) or the source of ignition has to be eliminated.

However, it is also often very impractical or impossible to eliminate air. Therefore, the most practical solution is to eliminate the source of ignition, being spark or heat. 

In the case of electrical calibration equipment, the device can be specially designed, so that it can be used safely in hazardous areas. There are many ways to design electrical equipment suitable for hazardous areas and this topic will be discussed later.

Calibration equipment is often designed in such a way that it cannot provide enough energy to cause the source of ignition, spark or heat.

 

Brief history of hazardous areas

Some of the first hazardous areas were discovered in early coal mines. Being flammable substances, both the coal dust and the methane released from the coal created a hazardous area. The lighting in early mines was produced by candles and torches, generating a source of ignition.

This led to many accidents. Later, when miners began to use electrical equipment (lighting, tools), many accidents occurred due to sparking or heating. Eventually, design standards were developed to guide the design process to prevent the sparking and heating of electrical equipment.

This was the first “intrinsically safe” electrical equipment and it led the way to the standards compiled for equipment used in hazardous areas today.

Typical industries with hazardous areas

There are many industries that have hazardous areas. Some plants have large hazardous areas, while others have only small sections classified as hazardous areas. Typical industries with hazardous areas include chemical and petrochemical industries, offshore and on-shore oil and gas, oil refining, the pharmaceutical industry, food and beverage, energy production, paint shops and mining.

Since a flammable substance may be a liquid, gas, vapor or dust, there are surprisingly many different industries that may have some areas in the plant where these substances may be present during the normal operation or during shut-down. Even some seemingly safe industries may have hazardous areas.

In plants, all areas classified as hazardous should be clearly marked with the Ex logo:

 Hazardous area

 

Flammable and combustible liquids


What are flammable and combustible liquids? Generally speaking, they are liquids that can burn. They may be gasoline, diesel fuel, solvents, cleaners, paints, chemicals, etc. Some of these liquids are present in many workplaces. 

Flashpoint and autoignition temperatures are also often discussed.

Flashpoint is the lowest temperature of a liquid at which it produces sufficient vapor to form an ignitable mixture with air. With a spark or enough heat, it will ignite. 

Autoignition temperature is the lowest temperature at which a liquid will ignite even without an external source of ignition. Most commonly, flammable and combustible liquids have autoignition temperatures in the range of 572 °F to 1022 °F (300 °C to 550 °C). However, there are liquids that have an auto-ignition temperature as low as 392 °F (200 °C) or less.

Based on their flashpoint, liquids are classified as flammable or combustible. Flammable liquids may ignite at normal working temperatures, while combustible liquids burn only at higher temperatures. Often 100 °F (37.8 °C) is considered the temperature limit: flammable liquids have a flashpoint below 100 °F and combustible liquids above it. To be precise, flammable and combustible liquids themselves do not burn; it is the mixture of their vapors and air that burns.

There are also limits of the concentration within which the mixture can burn. If the concentration of the mixture is too low (too thin) it will not burn; the same is true if the concentration is too high (too rich). The limits are known as lower and upper explosive limits (LEL and UEL).

It is good to remember that some liquids may have a rather low flashpoint. For example, gasoline has a flashpoint as low as approximately -40 °F (-40 °C). It produces enough vapor in normal environmental conditions to make a burnable mixture with air. Combustible liquids have a flashpoint well above normal environmental conditions, and therefore they have to be heated before they will ignite. 

Some examples of flashpoints and autoignition temperatures:

Flashpoints and autoignition temperatures

 

Various protection techniques

As mentioned earlier, to prevent an explosion, one of the three elements of the Explosion Triangle should be eliminated. In practice, eliminating the source of ignition would be the most sensible. There are various techniques in electrical equipment that make them safer for use in hazardous areas.

These different techniques fall into two main categories: eliminate the source of ignition (Exe, Exi) or isolate the source of ignition (Exd, Exp, Exq, Exo, Exm).

Ex protection techniques

 

Intrinsically safe technique

The Exi “Intrinsically Safe” technique is the most commonly used and most suitable protective technique for electrical calibration equipment. Intrinsically safe equipment is designed so that it cannot provide enough energy to generate sparks or excessively high surface temperatures, even in the case of a fault in the device.

Inside an Exi device, the Exm (“Encapsulated”) technique may also be used for certain parts of the equipment (as in a battery pack).

“Hot work permit”

Using non-Ex calibration equipment in a hazardous area may be possible, but it requires special approval from the safety personnel in the factory. Oftentimes, this also involves the use of safety devices, such as personal portable gas detectors, to be carried in the field while working.

Using equipment rated for use in an Ex area is easier, as it does not require any special approvals. Naturally, the rating of the Ex-rated calibration equipment must be suitable for the rating of the hazardous area to which it is taken. 

 

Read more in the downloadable white paper

Additional topics discussed in the white paper are:

  • International / North American legislation and differences
  • International IEC standards, IECEx scheme and ATEX Directive
  • Hazardous zones classification
  • Product category and Equipment Protection Levels (EPL)
  • Equipment grouping
  • Temperature Class
  • North American legislation differences: Divisions
  • Explosion group
  • Environmental conditions
  • Example of equipment marking

Click the image below to download the free white paper:

Calibration in hazardous area - Beamex blog post

 

New Beamex MC6-Ex calibrator for hazardous areas

Last, but not least: We have recently introduced the MC6-Ex, a new calibrator & communicator that can be used in any hazardous zone/division. Although it is an Ex calibrator, it does not have the compromises typically found in Ex products.

Check out the MC6-Ex product page on Beamex web site:   

Beamex MC6-Ex

This is what the beauty looks like:

 Intrinsically safe calibrator and communicator - Beamex MC6-Ex

 

Topics: Calibration, hazardous area, intrinsically safe

Thermocouple Cold (Reference) Junction Compensation

Posted by Heikki Laurila on Sep 19, 2017

Thermocouple Cold (Reference) Junction Compensation - Beamex blog post

In this blog post, I will take a short look at thermocouples and especially at the cold junction and the different cold junction compensation methods.

During my many years of working with process instrument calibration, it has often surprised me that even people who work a lot with thermocouples don’t always realize how a thermocouple, and especially its cold (reference) junction, works, and therefore they can make errors in measurement and calibration.

To be able to discuss the cold junction, we first need to take a short look at thermocouple theory and how a thermocouple works. 

I won’t go very deep in the theoretical science but will stick more with practical considerations, the kind of things you should know when you work with thermocouple measurements and calibrations in a typical process plant.

Download this article for free as a pdf file: 

Thermocouple cold junction compensation - Beamex white paper

 

Terminology: Cold junction or Reference junction

Thermocouple “cold junction” is often referred to as a “reference junction”, but it seems to me that people use “cold junction” more often, so I will use that one in this text.

 

Thermocouples

Thermocouples are very common temperature sensors in process plants. Thermocouples have a few benefits that make them widely used. They can be used to measure very high temperatures, much higher than with RTDs (Resistance Temperature Detectors). A thermocouple is also a very robust sensor, so it does not break easily. Although thermocouples are not as accurate as RTD sensors, they are accurate enough in many applications. Thermocouples are also relatively cheap sensors, and the thermocouple measurement circuit does not require an excitation current like an RTD circuit does, so the circuit is, in that sense, simpler to make. There are many different thermocouple types optimized for different applications.

A thermocouple sensor seems very simple to use – just two wires – what could possibly go wrong?

But considering the cold junction, and all the junctions in the measurement circuit, it is not always as simple as it seems.

Let’s start working our way towards the cold junction discussion, but before that, a few more words on the thermocouple theory to help better understand the cold junction discussion. 

 

How does a thermocouple work?

Let’s look at how a thermocouple works. A thermocouple consists of two wires made of different electrical conductors that are connected together at one end (the “hot” end), which is the end you use to measure the temperature.

As discovered back in 1821 by Thomas Johann Seebeck, when the connection point of these wires is taken to a different temperature than the open end, a thermoelectric current is generated, causing a small voltage between the wires at the open end. The voltage depends on the temperature and on the materials of the conductor wires being used. This effect was named the Seebeck effect.

 

Simplified principle picture of a thermocouple:

 Thermocouple Cold (Reference) Junction Compensation - Beamex blog post

 

In the above picture: the “Thermocouple material 1 and 2” represent the two different materials the thermocouple is made of. “T1” is the hot end of the thermocouple, i.e. the point that is used to measure temperature. The two “Tcj” are the temperatures of the cold junctions.

The above explanation is somewhat simplified, as the thermovoltage is actually generated by the temperature gradients in the thermocouple wire, all the way between the “hot” and “cold” junctions. So, it is not the junction points that actually generate the voltage, but the temperature gradient along the wire. It is easier to understand this by thinking that the thermovoltage is generated in the junctions, hot and cold ones. Maybe more scientific thermocouple theory can be provided in some other post later on, but in this one, let’s stick with the practical considerations.

 

Thermocouple types and materials

There are many types of thermocouples manufactured from different materials and alloys. Different materials result in different sensitivity, i.e. a different amount of thermovoltage generated at the same temperature, and also affect other characteristics such as the maximum temperature.

Several thermocouple types have been standardized, with names given for the specified materials used. The names are typically very short, often just one letter, such as type K, R, S, J, etc.

Some of the most common thermocouples and their materials are listed in the below table:

Thermocouple type table v2

 

Wire colors

The good news is that the thermocouple wires are color-coded for easier recognition.

The bad news is that there are many different standards for the color codes and they differ from each other.

The main standards are IEC 60584-3 (international) and ANSI (United States), but there are also many others, such as the Japanese, French, British, Dutch and German standards. So unfortunately, it is a bit complicated to recognize the type by the color. 

 

Thermocouple’s thermovoltage

As different thermocouples are made of different materials, the thermovoltage is also different; this is illustrated in the picture below. There is a big difference between the different types in the voltage generated at the same temperature.

Thermocouple Cold (Reference) Junction Compensation - Beamex blog post 

If you want to measure a lower temperature, it is obviously better to use the more sensitive types, as they give a higher voltage which is easier to measure. But if you need to go to high temperatures, you need to choose one of the less sensitive types that can be used at such high temperatures.

The Seebeck coefficient tells how much the thermocouple’s voltage changes compared to a change in temperature. More on that later.

The above picture illustrating the different sensitivities between different thermocouples also explains why a thermocouple calibrator typically has different accuracy specifications for different thermocouple types. A measurement device, or calibrator, normally has the voltage measurement accuracy specified in voltage. For example, it can have an accuracy of 4 microvolts. This 4 microvolt accuracy equals a different temperature accuracy depending on the thermocouple type, due to the different thermocouple sensitivities.

 

Measurement device (calibrator) example

Let’s look at the two extremes: the E and B types at a temperature of 200 °C. The sensitivity (Seebeck coefficient) of type E at 200 °C is about 74 µV/°C, while the coefficient for type B at 200 °C is about 2 µV/°C. So, there is a difference of 37 times between these two.

For example, if your measurement device can measure with an electrical accuracy of 4 µV, that means it offers an accuracy of about 0.05 °C (4 µV divided by 74 µV/°C) for type E at 200 °C, and an accuracy of 2 °C (4 µV divided by 2 µV/°C) for type B at 200 °C.
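
To make the arithmetic easy to repeat for other types, here is a minimal Python sketch of the same calculation; the Seebeck coefficients are just the approximate values quoted above, not reference data:

```python
# Approximate Seebeck coefficients at 200 °C (µV/°C), as quoted above.
SEEBECK_AT_200C_UV_PER_C = {"E": 74.0, "B": 2.0}

def temperature_accuracy_c(voltage_accuracy_uv, seebeck_uv_per_c):
    """Convert an electrical accuracy (µV) into a temperature accuracy (°C)."""
    return voltage_accuracy_uv / seebeck_uv_per_c

for tc_type, seebeck in SEEBECK_AT_200C_UV_PER_C.items():
    accuracy = temperature_accuracy_c(4.0, seebeck)
    print(f"Type {tc_type}: 4 µV equals about {accuracy:.2f} °C at 200 °C")

# Prints roughly:
# Type E: 4 µV equals about 0.05 °C at 200 °C
# Type B: 4 µV equals about 2.00 °C at 200 °C
```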

So, we can see why there are often very different accuracy specifications for a thermocouple measurement device/calibrator for different thermocouple types.

Calibrator accuracy

If you see a data sheet of a temperature calibrator and it has the same accuracy specification for all thermocouple types, be careful! Normally this means that the specifications / data sheet has been done in the marketing department and not in the technical department… ;-) 

This is just not very realistic.

Standards

There are also some standards (for example AMS2750E) that require the same accuracy for all thermocouple types, and this does not make very much sense in practice, due to this huge difference in sensitivity with different types.

 

Seebeck coefficients

I already mentioned the Seebeck coefficient earlier. This is the sensitivity of the thermocouple, i.e. it explains how much voltage is generated per temperature change.

The below picture shows Seebeck coefficients for some different thermocouples:

Thermocouple Cold (Reference) Junction Compensation - Beamex blog post

 

Cold junction

Now, let’s start diving into the "cold junction"...

Earlier, I showed the picture of the simplified thermocouple principle showing that the thermovoltage is generated in the “hot” end connection, where the two different conductors are connected together. The big question you should be asking here is: But what about the other end of the wires?

What a good question! I’m glad you asked… ;-)

When you measure the voltage of the thermocouple, you could connect the thermocouple wires into a multimeter, simple right? Not really! The multimeter connection material is typically copper or gold plated, so it is a different material than the thermocouple material, meaning you create two new thermocouples in the multimeter connections!

Let’s illustrate that with a picture:

 

Thermocouple Cold (Reference) Junction Compensation - Beamex blog post

 

In the above picture, material 1 and material 2 are the two thermocouple materials that form the thermocouple. The “hot end” is the point where they are welded together and which measures the process temperature; this is where the voltage U1 is generated. This U1 is what we want to measure. At the “cold junction” points, the thermocouple is connected to the voltage meter, whose connections are made of a different material, material 3. In these connections, thermovoltages U2 and U3 are generated. These U2 and U3 voltages are what we do not want to measure, so we want to get rid of them or compensate for them.

As we can see in the above picture, you are actually measuring the voltage of three (3) thermocouples connected in series. You would obviously like to measure the voltage/temperature of the “hot” junction only, not the other two junctions.

So, what can you do?

You need to somehow eliminate or compensate for the thermocouples created in the cold junctions. There are different ways to do that. Let’s look at those next.

 

Cold junction options and compensation methods

 

1. Cold junction in ice-bath

By its nature, a thermocouple junction does not generate any thermovoltage when it is at a temperature of 0 °C (32 °F). So, you could make the cold junction at that temperature, for example in an ice-bath or an accurate temperature block. You can connect the thermocouple wires to copper wires in the ice-bath, and no thermovoltage is generated in that connection. Then you would not need to worry about the cold junction at all.

The connections need to be electrically isolated from the water in the ice-bath to avoid any leak currents causing errors, or possible corrosion being generated.

This is a very accurate way and it’s something calibration laboratories typically do. However, it is not very practical on a process plant floor, so it is not normally used in process plants.

 

Thermocouple Cold (Reference) Junction Compensation - Beamex blog post

Example:

A type N thermocouple is connected as presented in the picture. The voltage meter shows 20808 µV. What is the measured temperature?

E = EN(tU1) – EN(tr)

Where:

  • E = measured voltage = 20808 µV
  • EN(tU1) = voltage generated in hot junction
  • EN(tr) = voltage generated in the cold (reference) junction = 0 µV (IEC 60584 type N, 0 °C)
  • EN(tU1) = E + EN(tr) = 20808 µV + 0 µV = 20808 µV = 605 °C (IEC 60584 type N, 20808 µV)

So, the temperature is 605 °C.

 

2. Cold junction in a known, fixed temperature

Since the ice-bath was found to be impractical, you can also make the cold junction connection at some other known, fixed temperature. You can have a small connection box with temperature control that keeps the box at a certain temperature. Typically, the temperature is higher than the ambient temperature, so the box only needs heating, not cooling.

When you know the temperature that your cold junction is in, and you also know the type of your thermocouple, you can calculate and compensate the cold junction thermovoltage.

Many measurement devices or temperature calibrators have a functionality where you can enter the temperature of the cold junction and the device will do all the calculations for you and make the compensation.

 

Thermocouple Cold (Reference) Junction Compensation - Beamex blog post

Example:

A type N thermocouple is connected as presented in the picture. The voltage meter shows 19880 µV. The temperature of the cold (reference) junction is 35 °C. What is the measured temperature?

E = EN(tU1) – EN(tr)

Where:

  • E = measured voltage = 19880 µV
  • EN(tU1) = voltage generated by the hot end
  • EN(tr) = voltage generated in reference (or cold) junction = 928 µV (IEC 60584 type N, 35 °C)
  • EN(tU1) = E + EN(tr) = 19880 µV + 928 µV = 20808 µV = 605 °C (IEC 60584 type N, 20808 µV)

So, the measured temperature is 605 °C.

Please note that thermocouple calculations must always be made in voltage. A common error is to look up the table value for the measured voltage and add the cold junction temperature to it. In this case, the temperature corresponding to the measured 19880 µV according to the IEC 60584 standard is 581.2 °C. Calculating with temperature values would give 581.2 °C + 35 °C = 616.2 °C, i.e. an error of +11.2 °C.
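
Here is the same logic as a small Python sketch. The voltage-to-temperature conversion is only a placeholder lookup (a real implementation would interpolate the IEC 60584 reference tables or use the inverse polynomials); the point is that the compensation is added in microvolts, never in degrees:

```python
def compensate_cold_junction_uv(measured_uv, cold_junction_uv):
    """Add the thermovoltage the reference junction would generate at its
    known temperature; the result corresponds to a 0 °C reference junction."""
    return measured_uv + cold_junction_uv

def type_n_uv_to_temperature_c(total_uv):
    """Placeholder conversion: a real implementation would interpolate the
    IEC 60584 type N table. Here we only know the value used in the example."""
    known_points = {20808: 605.0}
    return known_points[round(total_uv)]

total_uv = compensate_cold_junction_uv(measured_uv=19880, cold_junction_uv=928)
print(type_n_uv_to_temperature_c(total_uv))   # 605.0 °C

# Adding temperatures instead of voltages (581.2 °C + 35 °C = 616.2 °C)
# would give an error of +11.2 °C, as explained above.
```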

 

3. Measure the temperature of the cold junction

If you don’t control the cold junction temperature as in the previous example, you can still measure the temperature of the cold junction with a temperature probe. You can then compensate for the cold junction effect, but the compensation is a bit more difficult, as you need to measure the cold junction temperature all the time and, knowing your thermocouple type, calculate its effect.

Luckily, many temperature calibrators provide a functionality to use a temperature probe to measure the cold junction temperature and the device makes all the compensations and calculations automatically.

 

Thermocouple Cold (Reference) Junction Compensation - Beamex blog post

 

4. Automatic on-line compensation in the measuring device

I mentioned that the previous method was more difficult, as you need to calculate the compensation yourself, but you can also leave that to the measuring device to do automatically. The measuring device (be it a transmitter, DCS input card or temperature calibrator) can measure the temperature of the cold junction all the time and automatically perform on-line compensation of the cold junction error. Since the measuring device also knows the thermocouple type (you select that in the menu), it can make the compensation automatically and continuously.

This is naturally the easiest and most practical way to compensate for the cold junction in normal measurements and calibrations, as you don’t need to worry about the cold junction; you can leave it for the equipment to take care of. You just plug the thermocouple wires into the device.

The Beamex temperature calibrators also support this kind of automatic compensation.

 

Thermocouple Cold (Reference) Junction Compensation - Beamex blog post

 

Download free white paper

Download this article for free as a pdf file:

Thermocouple cold junction compensation - Beamex white paper

 

Related Beamex products

Please take a look at the Beamex MC6-T temperature calibrator. It can also be used for calibrating thermocouples and it has automatic cold junction compensation. It also offers a versatile connector where you can connect different thermocouple connectors or bare thermocouple wires.

Click the below picture to learn more about Beamex MC6-T:

Beamex MC6-T temperature calibrator

Also, take a look at the Beamex MC6 calibrator for reference. 

 

Temperature Calibration eLearning

Free eLearning course on industrial temperature calibration.

Master temperature calibration with this free comprehensive eLearning course from Beamex. Deepen your knowledge, pass the quiz, and earn your certificate!

Read more and enroll >

 

Temperature Sensor Calculator

A free tool to easily convert between temperature and electrical signals for thermocouples and RTD sensors. 

https://www.beamex.com/resources/temperature-sensor-calculator/

 

 

Topics: Temperature calibration, Thermocouple

Resistance measurement; 2, 3 or 4 wire connection – How does it work and which to use?

Posted by Heikki Laurila on Aug 28, 2017

Resistance measurement; 2, 3 and 4 wire connection – How does it work and which to use?

In this blog post, I explain how a resistance or RTD meter works and the difference between the 2, 3, and 4 wire connections.

Maybe you know that in resistance and RTD (Resistance Temperature Detector) measurement you can use 2, 3 or 4 wires, but maybe you don’t really remember what the difference is between them, or how these connections really work. It’s embarrassing to admit that, I know. But don’t worry - I will explain how these things work. Read this and then you will know. We don’t have to tell anybody that you learned it from me, let’s keep that between us... ;-)

I will explain this shortly and simply, and I hope this info helps you in practice in your job.

Download this article as pdf file >>

 

Let’s start digging – how does a resistance/RTD meter work?

Let’s start building this from the foundation up. Before talking about the number of wires, let’s first look at how a resistance meter works.

To begin: a resistance meter does not actually directly measure resistance. What?

 

The way a resistance meter normally works is by sending a small, accurate current through the resistance to be measured and then it measures the voltage drop formed over the resistance. After it knows the current and voltage, our good old friend, Ohm’s law, solves the rest. Ohm’s law says that resistance is voltage divided by current, or R = U/I.

For example, if there is a 1 mA (0.001 A) current going through a resistor and there is a voltage drop of 0.100 V over the resistor, then the resistor is R=U/I = 0.100 V / 0.001 A = 100 ohm.

So, the resistance meter actually measures the resistance via the current and voltage measurement.

Typically, the measurement current is around 1 mA, so if you are measuring a resistance of 100 ohms, there will be a 0.1 V voltage-drop over the resistance. The higher resistance ranges will use smaller measurement currents. Often, temperature transmitters use a current of about 0.2 mA. I’ve seen transmitters with current from 0.1 mA up to several mA. And the current is not always a DC current, but it can be pulsed.

The measurement current causes self-heating in an RTD probe due to power dissipation, especially in small RTD elements that have a poor thermal connection with their surroundings. Therefore, the measurement current should be kept low. More on RTD probes in another post…
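
As a minimal sketch of the arithmetic, both the R = U/I calculation and the self-heating power (P = I²R) can be written out like this:

```python
def measured_resistance_ohm(voltage_v, current_a):
    """What the meter actually computes: R = U / I."""
    return voltage_v / current_a

def self_heating_power_w(current_a, resistance_ohm):
    """Power dissipated in the RTD element: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

print(measured_resistance_ohm(0.100, 0.001))    # 100.0 ohm, the example above
print(self_heating_power_w(0.001, 100.0))       # 0.0001 W = 0.1 mW at 1 mA
print(self_heating_power_w(0.0002, 100.0))      # 0.000004 W = 4 µW at 0.2 mA
```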

The resistance measurement device itself must of course know exactly what current it is using to make the calculation correctly.

 

Maybe it is time for a diagram to explain this:

Resistance measurement; 2, 3 and 4 wire connection – How does it work and which to use?

 

In the above picture, the box “Resistance meter” represents a resistance (or RTD) meter, the two black dots are its connection terminals, and “R” is the resistance you want to measure.

The above picture shows a 2-wire connection, as only two wires (test leads) are used to connect the resistance. In the above picture, the wires are ideal, with no resistance in them. But in practice, all wires and test leads have some resistance, and the contacts will always have resistance too.

 

So, if we illustrate a practical two-wire connection considering the resistance of the wires and connections (Rw), we get the following practical schematic below:

Resistance measurement; 2, 3 and 4 wire connection – How does it work and which to use?

 

In practice, the big problem here is that the resistance meter will now measure the total resistance being a combination (series connection) of the “resistance to be measured” and all the resistance in the wires and connections.

What the meter sees is the sum Uw + Ur + Uw, although it would like to see only Ur. As a result, the resistance meter indicates a resistance that is a combination of R and all the connection resistances. 

Therefore, there is an error in the result.

 

The measurement result is always too high. Depending on the wires and connections, this can cause a huge error in the measurement. In the case of long wires and poor connections, the error can be several ohms (or even infinite). But even when using high-quality test leads and clips, there will always be some error.
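
As a rough illustration of the size of the error, here is a sketch assuming a Pt100 sensor (about 100 Ω at 0 °C and roughly 0.39 Ω/°C of sensitivity, approximate values) and 0.5 Ω of resistance in each test lead:

```python
PT100_SENSITIVITY_OHM_PER_C = 0.39   # approximate Pt100 sensitivity near 0 °C

def two_wire_reading_ohm(r_true_ohm, r_wire_ohm):
    """A 2-wire measurement sees the sensor plus both lead wires in series."""
    return r_true_ohm + 2 * r_wire_ohm

indicated = two_wire_reading_ohm(r_true_ohm=100.0, r_wire_ohm=0.5)   # 101.0 ohm
error_ohm = indicated - 100.0                                        # 1.0 ohm
error_c = error_ohm / PT100_SENSITIVITY_OHM_PER_C                    # about 2.6 °C
print(indicated, error_ohm, round(error_c, 1))
```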

If you want to make reliable and accurate resistance (or RTD) measurements, never use a 2-wire connection.

 

How do you get rid of these errors from a 2-wire measurement?

The best answer is to use a 4-wire connection. Let's look at that next.

 

4-wire resistance measurement

With the 4-wire connection, the idea is to have separate wires to deliver the measurement current and to measure the voltage drop over the resistance.

For this kind of connection, 4 wires are needed, leading to the name. Pretty logical…

 

Let’s look at a picture to illustrate a 4-wire connection:

Resistance measurement; 2, 3 and 4 wire connection – How does it work and which to use?

 

You may wonder, “what difference does this make compared to a 2-wire connection?” Well, it would not make any difference with ideal wires and connections, but it’s pretty difficult (impossible) to get ideal wires in the real world. So, in practice, with all the unknown, varying resistances in the wires and connections, this makes all the difference. 

Why is that? Well, that’s what I’m here for:

There are now separate dedicated wires that will deliver the accurate current through the resistance. If there is some resistance in these wires and connections, it does not matter, because the fixed current generator will still generate the same accurate current and the current does not change as it passes through these connection resistances.

Also, there are separate wires for the voltage measurement, which are connected directly to the legs of the resistance to be measured. Any resistance in these voltage measurement wires does not have any effect on the voltage measurement, because it is a very high impedance measurement. There is practically no current in these wires, and even if there is some resistance, it does not cause any voltage drop, so there is no error.

As a result of the above, the 4-wire connection can measure the connected resistance (R) accurately, even if there is resistance in all the connection wires and contacts.

Therefore, the 4-wire connection is the best and most accurate way to measure a resistance or an RTD sensor. 

 

A practical schematic of the 4-wire measurement diagram from earlier would look something like the below picture, with wire and connection resistances (Rw) added:

Resistance measurement; 2, 3 and 4 wire connection – How does it work and which to use?

 

3-wire resistance measurement

In practice, having to use/install 4 wires can be a bit time consuming/expensive. There is a simplified modification of the 4-wire connection, which is a 3-wire connection. Yep, you guessed right, it uses 3 wires.

Although the 3-wire connection is not quite as accurate as the 4-wire one, it is very close if all 3 wires are similar and it is far better than the poor 2-wire connection. Therefore, the 3-wire connection has become the standard in many industrial applications.

In a 3-wire connection, the idea is that we remove one of the wires and assume that all the wires are similar in resistance.

 

Now, let’s look at the schematics of a 3-wire connection, with wire resistances included:

 

Resistance measurement; 2, 3 and 4 wire connection – How does it work and which to use?

 

In the above schematic, the lower part has only one wire, so the lower connection reminds us of the 2-wire connection, while the upper connection is like the 4-wire connection. In the upper part, the meter can compensate for the wire resistance, as in a 4-wire connection. But in the lower part, it has no means to compensate for the wire (Rw3) resistance.

So, how does the connection work?

The resistance meter has internal switching, so it can first measure just the resistance of the upper loop (the sum Rw1 + Rw2), then divide that result by 2 to get the average resistance of these two wires. The meter then assumes that the third wire (Rw3) has the same resistance as the average of Rw1 and Rw2. Next, it switches to the normal connection (as shown in the picture) to measure the connected resistance R, and it uses the earlier measured wire resistance to correct the measurement result.

It’s good to remember that the 3-wire connection is accurate only if all three wires and connections have the same resistance. If there are differences in the wire and connection resistances, the 3-wire connection will give an erroneous measurement result. The error in a 3-wire measurement can go either way (too high or too low) depending on the resistance differences between the cables and connections.
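
Here is a simplified sketch of the compensation logic described above, and of the residual error when the third wire does not match the other two (the model ignores second-order effects and is illustrative only):

```python
def three_wire_reading_ohm(r_true, rw1, rw2, rw3):
    """Simplified 3-wire model: the upper pair behaves like a 4-wire
    connection, so only the single lower wire (rw3) adds to the raw result.
    The meter estimates rw3 as the average of rw1 and rw2 and subtracts it."""
    raw = r_true + rw3                  # what the uncompensated path sees
    estimated_rw3 = (rw1 + rw2) / 2     # measured separately from the upper loop
    return raw - estimated_rw3

print(three_wire_reading_ohm(100.0, 0.5, 0.5, 0.5))   # 100.0 -> exact when wires match
print(three_wire_reading_ohm(100.0, 0.5, 0.5, 0.8))   # 100.3 -> too high
print(three_wire_reading_ohm(100.0, 0.8, 0.8, 0.5))   # 99.7  -> too low
```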

In industrial applications, the 3-wire connection is often a good compromise; it is accurate enough and you need to use one less wire than with the perfect 4-wire measurement.

 

Conclusion

A few things to remember:

  • When calibrating the resistance of an RTD, use a 4-wire connection, if possible.
  • Of course, when you calibrate an RTD temperature transmitter that is configured for 3-wire measurement, you need to use 3 wires. Make sure you use 3 similar wires and that you make good contacts.
  • When using a 3-wire RTD probe in process, connected to an RTD transmitter, make sure you make good contacts to the transmitter screws for all 3 wires.
  • When using a RTD reference probe in calibration, make sure you always use a 4-wire connection.
  • Never use 2-wire resistance measurement for anything that you need to be accurate. Sure, it can be used for troubleshooting and for rough measurements.

 

I hope you found some of this post useful.

As always, please comment and send us ideas for good calibration related topics that you would like to read about in this blog in the future.

 

Download this article

Download this article as a pdf file by clicking the picture below:

Resistance measurement - Beamex blog post

 

And finally…

Ooops, I forgot to mention Beamex. Please go and buy some Beamex stuff right now! … ;-)

Check out Beamex web site

 

Thanks, 

Heikki

 

Topics: Measurement, resistance measurement

What is barometric pressure?

Posted by Heikki Laurila on Jul 18, 2017

What is barometric pressure - Beamex blog post

We regularly get asked, “what is barometric pressure?” So I decided to make a short blog post to answer this. 

Please note that barometric pressure is sometimes also referred to as atmospheric pressure.

Very shortly, we can say that: Barometric pressure is the pressure on earth that is caused by the weight of the air above us.

That definition sounds pretty simple, but let’s anyhow take a deeper look into this subject…

What is pressure?

To start with, what is pressure? Pressure is defined as force per area (p = F/A), which means that pressure is a certain amount of force affecting an area. The SI system defines the unit of pressure as the pascal, where 1 pascal equals 1 newton per square meter (N/m2).

Whether we realize it or not, some commonly used pressure units indicate the force and the area right in their name. For example, psi is pound-force per square inch, and kgf/cm2 is kilogram-force per square centimeter. However, most pressure units don’t include this principle in their name.

To learn more about pressure and the different pressure units being used, please take a look at the blog post Pressure units and pressure unit conversion.

 

Absolute pressure

Barometric pressure is a so-called absolute pressure type. When measuring absolute pressure, the measured pressure is being compared to a perfect (absolute) vacuum, where there are no air molecules left and therefore no pressure.

In comparison, the more common gauge pressure is measured against the current barometric/atmospheric pressure.

For more detailed information on the different pressure types, please have a look at the blog post Pressure calibration basics – Pressure types.

 

Barometric pressure

As mentioned, barometric pressure is the pressure caused by the weight of the air above us. The earth’s atmosphere above us contains air, and although air is relatively light, there is so much of it that it has significant weight as gravity pulls on the air molecules.


When I say “air,” I mean the air around us, composed of about 78% nitrogen, 21% oxygen, under 1% argon and small amounts of other gases. The air gets thinner as we go higher because there are fewer molecules.

Approximately 75% of the atmosphere’s mass is within the roughly 11 km (6.8 miles, 36,000 feet) thick layer closest to the earth’s surface. The border where the atmosphere turns into outer space is commonly considered to be about 100 km (62 miles) above the earth’s surface.

We can illustrate the column of air above us, pulled down by gravity and causing barometric pressure, with the picture below:

What is barometric pressure - Beamex blog post

 

The nominal barometric pressure on earth is agreed to be 101.325 kPa absolute (1013.25 mbar absolute or 14.696 psi absolute), which means that there is typically about 1.03 kilogram-force per square centimeter (14.7 pound-force per square inch) on the earth’s surface caused by the weight of the air.
In practice, the barometric pressure is very rarely exactly that nominal value, as it changes all the time and varies from location to location.

Barometric pressure depends on several things like weather conditions and altitude.

As a weather example: on a rainy day, the barometric pressure is typically lower than it would be on a sunny day.

The barometric pressure also varies with altitude. The higher you go, the lower the barometric pressure, which makes sense because when you move to a higher altitude, there is less air on top of you. The air at higher altitudes also contains fewer molecules, making it lighter than at a lower altitude, and gravity decreases slightly with altitude. For these reasons, the barometric pressure is lower at higher altitudes.

 

Actually, you can use a barometric pressure meter to measure your altitude, which is how airplanes can measure their height. The pressure drops as you go higher, although it does not drop exactly linearly.

When you go all the way up to space, there is practically no pressure left; it is close to a perfect vacuum with hardly any air molecules remaining.
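
To illustrate that non-linear drop, here is a small Python sketch using the International Standard Atmosphere approximation for the lowest atmospheric layer (the constants are the commonly used ISA values; real weather makes the actual pressure deviate from these numbers):

```python
def isa_pressure_kpa(altitude_m):
    """Approximate barometric pressure vs. altitude (International Standard
    Atmosphere model, valid up to about 11 km). Illustrative only."""
    p0 = 101.325            # sea-level pressure, kPa
    t0 = 288.15             # sea-level temperature, K
    lapse = 0.0065          # temperature lapse rate, K/m
    g = 9.80665             # standard gravity, m/s^2
    molar_mass = 0.0289644  # molar mass of air, kg/mol
    gas_constant = 8.31447  # universal gas constant, J/(mol*K)
    exponent = g * molar_mass / (gas_constant * lapse)
    return p0 * (1 - lapse * altitude_m / t0) ** exponent

for altitude in (0, 1000, 3000, 11000):
    print(f"{altitude:>6} m : {isa_pressure_kpa(altitude):6.1f} kPa")
# Roughly: 101.3 kPa at sea level, 89.9 kPa at 1 km, 70.1 kPa at 3 km, 22.6 kPa at 11 km.
```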

The pictures below illustrate how barometric pressure changes with altitude, first in kPa versus meters and then in psi versus feet:

 

Barometric (atmospheric) pressure units

There are a few pressure units that have been created specifically to measure barometric pressure.

One of these units is standard atmosphere (atm) which equals 101325 Pascal. There is also a unit called technical atmosphere (at) which is not exactly the same as atm (1 at = 0.968 atm).

The torr is also used to measure barometric pressure; it was originally defined as equal to one millimeter of mercury, although it is now defined slightly differently. The SI-based units hPa (hectopascal) and kPa (kilopascal) are also used, as is the mbar (millibar).

It is important to remember that we always talk about absolute pressure when we talk about barometric pressure.

More on pressure units in the blog post Pressure units and pressure unit conversion.

 

Some practical considerations

We can easily feel the barometric pressure changing when we travel in an airplane. Even though the cabin is pressurized, the pressure still drops as the plane goes higher. You can especially feel the growing pressure in your ears as the plane starts to descend to a lower altitude. The change is so rapid that your ears don’t always settle fast enough.

You may have also noticed how a yogurt cup is somewhat swollen when you are up in the air. The cup swells because it was sealed on the ground at normal barometric pressure. As the plane ascends, the pressure inside the cabin decreases, and the higher pressure inside the cup makes it swell.

Some people can feel the change in the barometric pressure in their body; experiencing headaches or aching in their joints.

 

Related blog posts

Pressure units and pressure unit conversion

Pressure calibration basics – Pressure types

 

For pressure unit conversions, please visit the online converter on the Beamex website: 

Pressure Unit Converter 

 

 

Topics: Pressure calibration, Barometric pressure

Measuring current using a transmitter’s test connection – don’t make this mistake!

Posted by Heikki Laurila on Jun 13, 2017

 Measuring current using a transmitter’s test connection - Beamex blog

 

If I had to summarize the content of this post into one sentence, it would be:

Using a mA meter with an internal impedance that is too high to measure current over the transmitter’s test connection will result in erroneous measurement results!

Lately, I have seen several people who have made this mistake, so I decided to write a short blog post on it. Hopefully, it will save some of you from making the same mistake.

The main point is that it is pretty easy to have erroneous mA measurement results when you are measuring the transmitter current through the test connection. The dangerous part is that you won’t necessarily even realize it.

Let’s take a look at what the mistake is and how to avoid it.

I also want you to understand how this system works, so some background information and a little educational theory are in order.

Ready? Let’s go….

 

Transmitter’s test connection

Many process transmitters, specifically pressure transmitters, have a “test connection” in the connection panel. It is typically marked with “TEST” text and it is located next to the normal mA loop connections.

I’m sure you have seen it; in one transmitter, it looks like this:

Measuring current using a transmitter’s test connection - Beamex blog

 

Purpose of the test connection

The purpose of the test connection is to be able to easily measure the loop current going through the transmitter, without the need to disconnect wires or break the current loop. You just connect your mA meter to the TEST connection and you can see the current that is going through the transmitter, as all the current now goes  through your current meter. 

When you disconnect your current meter, all the current again goes through the internal diode (I will explain diodes soon) in the test connection. At no point is the current loop broken.

 

Schematics

As engineers, we just love schematic diagrams, so I need to add some here also.

In the transmitter, there is a diode connected between the test connections. One end of the diode is connected to one of the “loop” connections, and the other end is connected to the test connection. It sounds complicated when you read it, but it is very simple. I’m sure a picture will help you understand this…

With a schematic diagram, it typically looks like this:

Measuring current using a transmitter’s test connection - Beamex blog

 

What is a diode and how does it work?

To better understand this phenomenon, we need to look at what a diode is and how it works.

A diode is a small electronic semiconductor component made with P and N materials. Most electronic devices have many diodes inside, even calibrators… ;-)

An ideal diode conducts DC current in one direction only, and it always conducts when the voltage over it is the right way around. In practice, it is a bit more complicated, as real diodes are not ideal.

Here are the characteristics of an ideal diode (left) and a realistic diode (right):

Measuring current using a transmitter’s test connection - Beamex blog Measuring current using a transmitter’s test connection - Beamex blog

 

As we can see in the characteristics of a diode (the real one, not the ideal one), the forward current starts to flow when the voltage over the diode is large enough to exceed the threshold voltage. Typically, with a silicon diode, the threshold voltage is about 0.6 V. When the voltage is above this threshold, the diode is “open” and current goes through it. When the voltage is below the threshold, the diode is “closed” and no current goes through it.

 

Current through a transmitter

In normal use of a transmitter, the loop supply keeps the diode fully open, so all the loop current goes through the diode. So actually, the diode doesn’t really do anything in normal operation; it is not even needed and could be replaced with a short circuit.

But when you connect an mA meter over the diode, all the current starts to go through the mA meter and none of it goes through the diode anymore. Magic!? Well, no magic, just electronics.

The pictures below show how the current goes through the test diode (first picture) or through the mA meter (second picture):

 

Measuring current using a transmitter’s test connection - Beamex blog

 

Measuring current using a transmitter’s test connection - Beamex blog

 

Well, this is how it should work, but it does not always work like that in practice. Read on…

 

How does an mA meter work?

Why am I talking about impedance of a mA meter? What is that impedance?

The way mA meters are normally built, there is an accurate shunt resistor, a few ohms, that the current goes through (R in the picture below). This current causes a voltage drop over the shunt resistor and by measuring this voltage with an A/D converter (V in the picture) we can calculate the current.

The rest is simple mathematics, per Ohm’s law: I = U/R (Current = Voltage / Resistance).

Measuring current using a transmitter’s test connection - Beamex blog

 

Unfortunately, some mA meters/calibrators have an impedance that is a bit too high, which makes the voltage drop over the shunt resistor larger. In most applications, a larger impedance is not critical, but with the transmitter’s test connection it is. When the voltage drop becomes larger, it causes the test diode to start opening, either slightly, causing a small leakage current, or all the way.

Why would you put a higher impedance in a mA meter? It may be easier to design a mA meter with a somewhat higher impedance, as the voltage drop then becomes higher and is easier to measure internally with the A/D converter.

For example, if the internal impedance of an mA meter were as high as 50 ohms, then with a 20 mA current the voltage drop over the mA meter (and the test connection’s diode) would be 1 V, causing the test diode to be fully open (threshold 0.6 V). This would mean that your mA meter would show far less current than is actually flowing, although there is a 20 mA current going through the transmitter, as a large part of the current goes through the test diode.

That kind of huge error would be easy to notice in practice. But there are also some mA meters with, say, about 30 ohms of internal impedance. This means that with a smaller current the measurement works okay, but when getting closer to 20 mA, the voltage drop gets closer to 0.6 V, the test diode starts to leak, and part of the current goes through the diode. This may be difficult to notice, so you may end up trusting the faulty measurement result of your mA meter.
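
A quick sketch of that check, using the rule of thumb from the practical test described below, i.e. staying under roughly 375 mV at 20 mA keeps the leak error small (the limit comes from the one transmitter I tested, so treat it as indicative only):

```python
def burden_voltage_v(current_ma, meter_impedance_ohm):
    """Voltage drop over the mA meter (and over the parallel test diode)."""
    return current_ma / 1000.0 * meter_impedance_ohm

# Rule of thumb from the practical test below: stay under ~375 mV at 20 mA.
LEAK_LIMIT_V = 0.375

for impedance in (7.5, 15, 25, 50):
    drop = burden_voltage_v(20, impedance)
    verdict = "ok" if drop < LEAK_LIMIT_V else "test diode starts to leak"
    print(f"{impedance:5.1f} ohm -> {drop * 1000:5.0f} mV at 20 mA ({verdict})")
```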

The drawing below shows how the current goes partly through mA meter and partly through test diode, if the mA meter’s impedance is too high:

 

Measuring current using a transmitter’s test connection - Beamex blog

 

As the current splits between mA meter and diode, the mA meter is showing only part of the current, so it is displaying a wrong result.

 

A practical test with a pressure transmitter

I tested the characteristics of the test connection/diode with one popular brand of a pressure transmitter.

The purpose of the test was to see how the current of the test diode changes when the voltage changes.

You can see the result of that test in the graphic below, and in the text and table after that.

Measuring current using a transmitter’s test connection - Beamex blog

 

A table of the data:

Measuring current using a transmitter’s test connection - Beamex blog

 

We can see in the results for example that:

  • If you want an error smaller than 0.01 %, you should stay below about 275 mV - or have an impedance of less than 13.75 ohms.
  • If you want an error smaller than 0.05 %, you need to stay under 375 mV (or 18.75 ohms).
  • At 400 mV the leak current starts to grow quickly (equals a 20 ohm mA meter impedance).
  • At 500 mV the leak current is 0.2 mA, which equals over 1 % error when measuring a 20 mA current (equals a 25 ohm mA meter impedance).

As a result/summary of this test, I can say that:

  • As long as your mA meter has an impedance of less than 15 ohms, you are good to go.
  • If the impedance is 25 ohms, you will introduce over 1 % error into the measurement.

This test was done at room temperature. At a higher temperature, the leakage current of a diode is typically higher, but I did not test that here.

 

Accuracy effect of using the TEST connection

We can say that the accuracy when using the test connection is good enough as long as your mA meter has a small enough impedance. If you don’t know the impedance of your meter, it may be risky to use the test connection.

Different transmitter models may have different characteristics than the one I tested.

 

How do you check the impedance of your mA meter?

How can you check the mA meter, or calibrator, you are using to see what impedance it has? Try checking the specification data sheet first, as it is often mentioned there. If the impedance is not specified, sometimes the voltage drop (or “burden voltage”) is specified as a certain voltage at a certain current. From this, you can calculate the impedance (R = U/I). For example, one device has a specification of 400 mV at 20 mA, so you know the impedance is 20 ohms, which means it will add more than 0.1 % of error at 20 mA.
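
The same arithmetic as a tiny sketch, also covering the measured-voltage-drop case mentioned in the list below:

```python
def impedance_ohm(burden_voltage_v, current_ma):
    """Impedance from a burden-voltage figure: R = U / I."""
    return burden_voltage_v / (current_ma / 1000.0)

print(impedance_ohm(0.4, 20))   # 20.0 ohm, the data sheet example above
print(impedance_ohm(0.2, 20))   # 10.0 ohm, e.g. from a measured voltage drop
```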

Sometimes, the impedance is anyhow not specified.

If it is not mentioned in the specs, then you can find it out in different ways:

  • First, simply use a resistance meter and measure the impedance of the mA meter.
  • Second, you can set a known current to go through the mA meter and then measure the voltage drop over it and calculate impedance/resistance (R= U/I). For example, if 20mA goes through the meter and the voltage drop is 0.2 V, then it has 10 ohm impedance (0.2 V / 20 mA = 10 ohm).

 

Transmitter manuals

With a quick search of pressure transmitter user manuals, I found only one popular pressure transmitter’s manual with a comment that the mA meter used in the test connection should have an impedance of 10 ohms or less.

Yes, I do sometimes read manuals… if I really have to… ;-)

But for some reason I feel that generally the transmitter manufacturers do not mention this enough, or I have just missed that info (wouldn’t be the first time I miss something…).

 

Other ways to measure mA

Of course, there are also other ways to measure transmitter current, other than using the test connection.

For example:

  • Break the current loop and add a mA meter in series with the transmitter. This is the most accurate way, and any test diode leakage will not have any effect.
  • I’ve seen people installing a precision resistor in series with the transmitter and then measuring the voltage drop over the resistor. You can then calculate the current without breaking the loop. Of course, the accuracy of the resistor will have an effect on the result.
  • You can also use a clamp meter to measure the current in the loop. Most often the clamp meters are not very accurate though.
  • You can also connect an external diode in series with the transmitter and use it the same way as the test connection. You can add several diodes in series if you need it to work with a higher-impedance mA meter.

 

So how about Beamex calibrators?

In Beamex calibrators, the impedance of the mA measurement has always been less than 10 ohms, typically around 7.5 ohms, so you can use them safely in the transmitter’s test connection.

However, please pay attention, as there are also some calibrators of well-known brands on the market that have a mA impedance that is too high for this application and will cause these issues.


Conclusion

I wrote this post as I have come across this issue a few times with people. I assume there are more people out there who would appreciate this information.

Well, at least it is now easier for me to answer this question when it gets asked next time. I will just ask them to read the answer in the blog … ;-)

Please let me know in the comments if you found this information useful.

 

And finally, before you leave…

If you would like to get a short email when new posts are published in this blog, please subscribe by entering your email to the right-hand side “subscribe” field. Don’t worry, you won’t get emails more than roughly once in a month.

To check out the calibrators Beamex offers, please have a look at the below link:

 Beamex Calibrators  



 

Topics: Transmitter

Weighing scale calibration - How to calibrate weighing instruments

Posted by Heikki Laurila on May 16, 2017

Weighing scale calibration - how to calibrate weighing instruments. Beamex blog post.

In this article, I look at the practical considerations and the different tests you should perform when calibrating your weighing scales/instruments/balances.

Update February 2019: We made an educational video on weighing scale calibration, you can find that in this link:

Weighing Scale Calibration Video

 

Weighing scales, weighing instruments, weighing balances…

Different resources are using different terminology. I will be mainly using the term “weighing instrument” in this article.

Weighing instruments/scales/balances are widely used in industry for various measurements. Some weighing instruments are small, very accurate laboratory instruments measuring a few grams, while some industrial weighing instruments are very large, measuring, for example, the mass of trucks. We all see weighing instruments in our everyday life around us, for instance, when we visit a grocery store and weigh vegetables. 

As with any measurement instruments, weighing instruments should also be calibrated regularly to assure that they are measuring correctly and accurately. A proper metrologically traceable calibration is the only way to know how accurately weighing instruments are measuring. 

Many weighing instruments are used for legal measurements or measurements that form the basis of monetary transactions, and these are part of a legal or statutory verification program based on legislation. Often the calibration of weighing instruments is driven by a quality system (such as ISO 9000), health care, traffic (air, marine) safety or forensic investigation requirements. 

There are dedicated regulations for weighing instruments and their calibration (EURAMET Calibration Guide, NIST Handbook 44, OIML); more on those later in the article.

In this article, the main focus is to look at the practical considerations and the different tests you should perform when calibrating your weighing instruments.

To download this article as a pdf file, please click the picture below:
Weighing scale calibration - How to calibrate weighing instruments. Beamex white paper.

 

Table of contents

Calibrating weighing scales/instruments

1. Preparations before calibration

2. Eccentricity test

3. Repeatability test

4. Weighing test

5. Minimum weight test

6. Other tests

Related Beamex products

Related references

 

Here is a video on how to calibrate weighing scales:

 

Calibrating weighing scales/instruments

 

Weighing scale calibration - how to calibrate weighing instruments. Beamex blog post. 

Let’s start by looking at some of the preparations you should make before the calibration and then look at the different tests you should be doing.

Back to top ⇑

1. Preparations before calibration

Before you can start the calibration of the weighing instrument, you should clarify a few things and get prepared.

You should find out the technical characteristics of the weighing instrument (max weight, d value), the accuracy requirement (max error allowed and uncertainty) and what to do if the calibration fails (adjustment).

Typically, the whole measurement range is calibrated and the calibration is performed in the location where the instrument is being used. Make sure you have enough weights for the calibration procedure available.

The weighing instrument should be switched on at least 30 minutes before the calibration. The temperature of the weights should be stabilized to the same temperature where the calibration is to be done.

The weighing instrument should be horizontally level, which is especially important for small and accurate weighing instruments. Perform a few pre-tests by placing weights close to the maximum of the range on the instrument to ensure it works normally.

In case the weighing instrument fails in calibration and it is adjusted, you should make an “as found” calibration before adjustment and an “as left” calibration after adjustment.

Next, let’s take a look at the different tests that should be done during the calibration.

Back to top ⇑

2. Eccentricity test

In normal use of a weighing instrument, the load is not always placed perfectly on the center of the load receptor. The result of a weighing instrument can vary slightly depending on where the load is placed on the load receptor. To test how much effect the location of the load has, the eccentricity test is performed.

In the eccentricity test, the reference load is placed in a few different specified locations on the load receptor. First, the load is placed in the center of the load receptor (the load’s center of gravity) and the result is observed. Next, the load is placed in four different sectors of the load receptor, as illustrated in the picture below.

Eccentricity test. Weighing scale calibration - how to calibrate weighing instruments. Beamex blog post.

 The above picture is for rectangular and round load receptors, but naturally, in practice, there are many different shapes of load receptors and the location of the load will vary. Standards OIML R76 and EN 45501 will give guidance for different load receptor shapes.

The calibration procedure should specify where to place the load during the test and calibration results (in certificate format) should also document the locations.

The test load used in an eccentricity test should be at least one-third (1/3) of the max load of the weighing instrument. The test should preferably be done using just one test load, if possible. That way it is easier to be sure that the load’s center of gravity is in the specified location. For a weighing instrument with multiple ranges, the eccentricity test should be done with the highest range.

As the aim of the eccentricity test is to find out the difference caused by the location of the load, it is not necessary to use an accurately calibrated load. It is naturally important to use the same load throughout the test.

If the eccentricity test is used also to determine the errors of the indication, then a calibrated load should be used.

 

Procedure for the eccentricity test

The indication is zeroed before the test. The test load is placed in location 1 and the indication is recorded. The test load is then moved to locations 2 to 5 and the indication is recorded in each location. Finally, the test load is placed again in location 1 to check that the indication has not drifted from the earlier indication in location 1.

The zero may be checked between each location to see that it has not changed. If necessary, the instrument can be zeroed in between each test.

Alternatively, you may also tare the instrument when the load is in location number 1, as this makes it easier to see any difference between locations.
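
If you record the indication in each location, the eccentricity error is simply the largest deviation of any off-center reading from the center reading. A minimal sketch (the readings below are hypothetical):

```python
def eccentricity_error(readings_by_location):
    """Largest deviation of locations 2..5 from the center reading (location 1)."""
    center = readings_by_location[1]
    return max(abs(value - center)
               for location, value in readings_by_location.items() if location != 1)

# Hypothetical indications (kg) with the same test load placed in locations 1-5.
readings = {1: 10.000, 2: 10.002, 3: 9.999, 4: 10.001, 5: 9.998}
print(eccentricity_error(readings))   # 0.002 kg
```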

Back to top ⇑

3. Repeatability test

Like any instrument, weighing instruments may suffer from repeatability issues. This means that when the same load is measured several times, the result is not always exactly the same. To find out the repeatability of the instrument, a repeatability test is done.

The repeatability test is performed by placing the same load in the same place on the load receptor (to avoid any eccentricity error) multiple times. The test should be done in identical and constant conditions and with identical handling.

The load used should be close to the maximum load of the instrument. Often a repeatability test is done with one load only, but it can be done also with several different load values separately.

The load does not necessarily need to be a calibrated load, as the aim is to find out the repeatability. If possible, the load used should be a single load (not several small loads).

A repeatability test is normally done by repeating the measurement at least 5 times in a row. For instruments with a high range (over 100 kg / 220 lbs), it should be done at least 3 times.

In the repeatability test, the instrument is first zeroed, then the load is placed on the load receptor and the indication is recorded once it has stabilized. Then the load is removed, the zero indication is checked and the instrument is zeroed if necessary. Then the load is placed again, and so on.

For a multi-range instrument, a load close but below the first range max is often sufficient.
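
As a rough illustration of how the repeatability figure can be evaluated, the sketch below (with hypothetical readings) calculates the standard deviation of the repeated indications; always follow your own procedure or standard for the exact formula and acceptance limit:

```python
import statistics

# Hypothetical indications (grams) from placing the same ~2000 g load ten times.
indications = [2000.03, 2000.01, 2000.04, 2000.02, 2000.03,
               2000.00, 2000.02, 2000.04, 2000.01, 2000.03]

# Repeatability is commonly expressed as the sample standard deviation
# of the repeated indications.
mean_indication = statistics.mean(indications)
repeatability = statistics.stdev(indications)

print(f"Mean indication: {mean_indication:.3f} g")
print(f"Repeatability (standard deviation): {repeatability:.4f} g")
```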

Back to top ⇑

4. Weighing test

The purpose of the weighing test is to test the accuracy (calibrate) of the weighing instrument throughout its whole range in several steps, with increasing and decreasing weight.

The most common practice is the following: start by zeroing the instrument without any load. Place the loads for the first test point, wait for stabilization, and record the indication. Continue increasing the loads through all the increasing test points. Once the maximum load is recorded, start decreasing the loads through the decreasing test points.

In some cases, the weighing instrument may be calibrated with increasing loads only or decreasing loads only. Typically, 5 to 10 different loads (test points) are used. The highest load should be close to the maximum of the instrument. The smallest test load can be 10% of the maximum load, or the smallest weight normally used.

Generally, the test points are selected so that they are equally distributed throughout the range. More test points can be used for the typical range of usage of the instrument.

With multi-range instruments, each range is to be calibrated separately.
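
To illustrate how weighing test results can be evaluated, here is a small Python sketch that calculates the error at each test point for both the increasing and decreasing series, as well as the difference between them (the hysteresis discussed below). The reference loads and indications are made-up example values:

```python
# Hypothetical weighing test of a 0...6000 g instrument.
# Each entry: (reference load in g, indication with increasing load, indication with decreasing load)
test_points = [
    (1000.0, 1000.2, 1000.4),
    (2000.0, 2000.3, 2000.6),
    (3000.0, 3000.5, 3000.9),
    (4000.0, 4000.4, 4000.7),
    (5000.0, 5000.2, 5000.2),
]

for load, up, down in test_points:
    error_up = up - load        # error of indication with increasing load
    error_down = down - load    # error of indication with decreasing load
    hysteresis = down - up      # difference between decreasing and increasing indications
    print(f"{load:6.0f} g: error up {error_up:+.2f} g, "
          f"error down {error_down:+.2f} g, hysteresis {hysteresis:+.2f} g")
```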


Linearity

In a weighing test, using multiple points through the measurement range of the instrument helps to reveal any issues with linearity. Linearity issues mean that the instrument does not measure equally accurately throughout the range. Even if the zero and full span are correct, there may be errors in the middle of the range; these are referred to as linearity errors, or nonlinearity.

The picture below is a general illustration of nonlinearity. Even though the instrument’s zero and full range are adjusted correctly, there is an error in the midrange due to the nonlinearity of the instrument:

Nonlinearity: Weighing scale calibration - how to calibrate weighing instruments. Beamex blog post.

 

Hysteresis

Hysteresis is the difference in the indication when a test point is approached with increasing or decreasing weight. To find out any hysteresis issues in the instrument, you need to calibrate with increasing and decreasing points.

In a weighing test, when increasing or decreasing the load, it is important not to overshoot or undershoot. This means that when you increase the load, you must approach each test point with increasing weight. You should not add too much weight and then remove it, because then you lose the hysteresis information.

Likewise, with decreasing points, make sure that you approach each point with decreasing weight. Obviously, in order to be able to do this, the usage of the test loads should be well planned in advance. 

The picture below is a general illustration of hysteresis. When the instrument is calibrated, the results are different with increasing and decreasing calibration points:

Hysteresis: Weighing scale calibration - how to calibrate weighing instruments. Beamex blog post.

 

Back to top ⇑

5. Minimum weight test

A minimum weight test is not always required. It is, however, required in some industries, such as the pharmaceutical industry.

The purpose of the minimum weight test is to find the smallest load that can be measured while still achieving reliable measurement results and fulfilling the accuracy requirements. When the measured value gets smaller, typically the relative error of the reading becomes higher. The weighing instrument should not be used to measure any loads smaller than the minimum load.

For the minimum weight test, the two main standards take different approaches. Let’s take a quick look at those:


The US Pharmacopeia (Chapter 41)

  • After the recent changes in the standard, it no longer refers to a minimum weight test; this has been replaced by the requirement to determine the instrument’s minimum operating range by finding the point where the instrument’s repeatability (2 times the standard deviation) is 0.10% of the reading.
    In practice, the standard deviation can in some cases be very small, but the minimum weight to be measured should nevertheless not be smaller than 820 times the actual scale interval (d). A calculation sketch follows below.
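
As a rough numeric illustration of that rule (a sketch only, not a substitute for USP Chapter 41 itself), the following Python snippet estimates the minimum weight from hypothetical repeatability measurements and applies the 820 × d lower limit:

```python
import statistics

# Hypothetical repeatability measurements (grams) with a small test load
# on a balance whose actual scale interval is d = 0.1 mg.
d = 0.0001   # actual scale interval in grams
replicates = [10.00012, 10.00008, 10.00011, 10.00007, 10.00010,
              10.00013, 10.00009, 10.00008, 10.00012, 10.00010]

s = statistics.stdev(replicates)

# Criterion in the text: 2 x standard deviation must not exceed 0.10 % of the reading,
# so the smallest load fulfilling it is 2*s / 0.0010 ...
min_weight_from_repeatability = 2 * s / 0.0010

# ... but the minimum weight should not be smaller than 820 times the scale interval d.
minimum_weight = max(min_weight_from_repeatability, 820 * d)

print(f"Standard deviation: {s * 1000:.3f} mg")
print(f"Estimated minimum weight: {minimum_weight:.3f} g")
```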


EURAMET Calibration Guide 18 (Appendix G)

  • Takes the approach of calculating the measurement uncertainty for each calibration point; the smallest usable load is the point where the uncertainty is still small enough to meet the requirements for the instrument.

 

Back to top ⇑

6. Other tests


There are also some other tests specified in the standards. These are typically not done during a normal calibration, but they can be performed as part of type approval or initial verification.

Examples of these tests are:

  • Tare test
  • Discrimination test
  • Variation of indication over time
  • Test of magnetic interaction

 

 

Additional topics in the related white paper

To keep this blog post from becoming too long, please download the related white paper to read more on this subject. The white paper discusses the following additional subjects:

Weights

  • Handling of weights
  • Nominal mass / Conventional mass
  • Calibration of weights
  • Local gravity
  • Air buoyancy
  • Effect of convection
  • Substitution load

Calibration certificate

  • What information should the calibration certificate include?

Uncertainty

  • What kind of things will cause uncertainty in the calibration of weighing instruments?

Instrument classes, Tolerance classes, Max permissible error

 
To download the related free white paper, please click the picture below:

Weighing scale calibration - How to calibrate weighing instruments. Beamex white paper.

Back to top ⇑

Related Beamex products

Among many other features, the Beamex CMX Calibration Management Software includes dedicated functionality for the calibration of weighing instruments, and it has been available for more than 10 years. CMX supports various tests such as: Eccentricity Test, Repeatability Test, Weighing Tests and Minimum Weighing Test. Both OIML and NIST Handbook (including the latest USP 41 updates) accuracy classes are supported. The functionality can be used either with a computer or with a mobile device.

For more information on Beamex CMX and its weighing instrument calibration module, please visit the CMX product page and read the brochure, or contact Beamex.

 Visit Beamex CMX Calibration Software product page  

 Visit Beamex Worldwide Contacts page  

Back to top ⇑

Related references

The most relevant references for this subject include, but are not limited to, the following:

  • EURAMET Calibration Guide No. 18, Version 4.0 (11/2015)
  • EN 45501:2015 - Metrological aspects of non-automatic weighing instruments
  • NIST Handbook 44 (2017 Edition) - Specifications, Tolerances, and Other Technical Requirements for Weighing and Measuring Devices
  • U.S. Pharmacopeia Convention "Chapter 41 Balances" (2014)  (abbreviation "USP 41" used in blog text)
  • EA-4/02 (2013) - Evaluation of the Uncertainty of Measurement in Calibration
  • JCGM 100:2008 - Evaluation of measurement data — Guide to the expression of uncertainty in measurement
  • JCGM 200:2008 - International vocabulary of metrology — Basic and general concepts and associated terms
  • OIML R76-1 - Non-automatic weighing instruments Part 1: Metrological and technical requirements - Tests
  • OIML R111 - Weights of classes E1, E2, F1, F2, M1, M1-2, M2, M2-3 and M3
  • DIRECTIVE 2009/23/EC (2009) - Non-automatic weighing instruments

 

Topics: weighing instrument, weighing scale

How to calibrate pressure gauges - 20 things you should consider

Posted by Heikki Laurila on Apr 05, 2017

How to calibrate pressure gauges - Beamex white paper

Pressure Gauge Calibration

20 things you should consider when calibrating pressure gauges 

Pressure gauges are very common instruments in the process industry. As with any measurement device, pressure gauges need to be calibrated at regular intervals to assure they are accurate. There are many things to consider when calibrating pressure gauges. This article lists 20 things you should consider when calibrating pressure gauges.

Please download related free white paper by clicking the below picture:

How to calibrate pressure gauges - Download White Paper

 

Content - The 20 things you should consider

The 20 things discussed in the article are the following:

1. Accuracy classes
2. Pressure media
3. Contamination
4. Height difference
5. Leak test of piping
6. Adiabatic effect
7. Torque
8. Calibration / mounting position
9. Generating pressure
10. Pressurizing / exercising the gauge 
11. Reading the pressure value (resolution)

Remaining topics in the free white paper:

12. Number of calibration points
13. Hysteresis (direction of calibration points)
14. “Tapping” the gauge
15. Number of calibration cycles (repeatability)
16. Adjustment / correction
17. Documentation – calibration certificate
18. Environmental conditions
19. Metrological traceability
20. Uncertainty of calibration (TUR/TAR) 

 

What is pressure?

Before we discuss each one of the things to consider when calibrating pressure gauges, let’s take a quick look into a few more basic concepts.

What is pressure? Pressure is the force that is perpendicular to a surface divided by the area it is acting on. So pressure equals force per area, or p = F / A.

There are a large number of different pressure units used around the world and this can sometimes be very confusing. The SI unit for pressure is the pascal (Pa), which is a force of one newton per square meter, 1 Pa = 1 N/m². Since the pascal is a very small unit, it is most often used with prefixes such as hecto, kilo and mega. For more information on pressure, the different pressure units and their background, please see the blog post Pressure units and pressure unit conversion.

For an on-line pressure unit conversion tool, please visit the page Pressure unit converter.

Pressure types

Several different pressure types exist, including gauge, absolute, vacuum, differential and barometric. The main difference between these pressure types is the reference point against which the measured pressure is compared. Pressure gauges are available for all of these pressure types. Compound gauges, with a combined scale for both positive gauge pressure and vacuum (negative gauge) pressure, are also available.

For more detailed information on different pressure types, please see post Pressure calibration basics – pressure types

Pressure gauges

When talking about pressure gauges, it is normal to refer to analog pressure indicators which are provided with a pointer needle and a pressure scale. These are normally manufactured according to EN 837 or ASME B40.100 standard.

Often these kinds of analog pressure gauges are built around a Bourdon tube, diaphragm or capsule. A mechanical structure moves the pointer across the scale as the pressure increases.

Pressure gauges are divided into different accuracy classes that specify the accuracy of the gauge as well as other attributes. Available pressure ranges are typically divided in steps with coefficients 1, 1.6, 2.5, 4, 6 continuing into the next decade (10, 16, 25, 40, 60) and so on. The different gauge diameters (of scales) are typically 40, 50, 63, 80, 100, 115, 160 and 250 mm (1 ½, 2, 2 ½, 4, 4 ½, and 6 inches). More accurate gauges typically have a bigger diameter.

Pressure connectors are normally parallel pipe threads (G) according to ISO 228-1, or taper pipe threads (NPT) according to ANSI/ASME B1.20.1.

There are also digital pressure gauges that have a numeric pressure indication instead of an analog pointer. This article focuses on analog gauges, but most of the principles are valid for both.

Pressure gauges are commonly used in all industries and are a very common instrument to be calibrated. As with any process measurement device, a gauge should be calibrated at regular intervals to assure that it is measuring correctly. Since gauges are mechanical instruments, there is an added risk that they drift due to mechanical stress.

For more information on why you should calibrate instruments, please see the blog post Why calibrate?

For more information on how often instruments should be calibrated, please see post How often should instruments be calibrated?

The basic principle of calibration

If we simplify the principle of a pressure gauge calibration to its minimum, we can say that when we calibrate a pressure gauge, we provide a known accurate pressure input and read the indication on the gauge, and then document and compare these. The difference in the values is the error and the error should be smaller than the required accuracy for the gauge.

 

20 things you should consider

This section lists the 20 most common things you should consider when you are calibrating pressure gauges.

Back to top ⇑


1 - Accuracy classes

Pressure gauges are available in many different accuracy classes. Accuracy classes are specified in the ASME B40.100 (accuracy classes from 0.1 to 5 % of range) and EN 837 (accuracy classes from 0.1 to 4 % of range) standards. The accuracy class is most often specified as “% of range”, which means that if the accuracy class is 1 % and the scale range is 0 to 100 psi, the permitted error is ± 1 psi.

Make sure you know the accuracy class of the gauge you are going to calibrate, as this will naturally specify the acceptable accuracy level, but it will also have other effects on the calibration procedure.
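
As a simple illustration of what a “% of range” accuracy class means in practice, here is a short Python sketch (example values only) that converts the class into a permitted error in the gauge’s own pressure unit:

```python
def permitted_error(accuracy_class_percent: float, range_min: float, range_max: float) -> float:
    """Permitted error for a '% of range' accuracy class, in the same unit as the range."""
    span = range_max - range_min
    return accuracy_class_percent / 100.0 * span

# A class 1 gauge with a 0...100 psi scale may be off by +/- 1 psi.
print(permitted_error(1.0, 0.0, 100.0))   # -> 1.0
# A class 0.6 gauge with a 0...16 bar scale may be off by +/- 0.096 bar.
print(permitted_error(0.6, 0.0, 16.0))    # -> 0.096
```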

Back to top ⇑

2 - Pressure media

When calibrating pressure gauges, the most common pressure media are gas or liquid. Gas is most often regular air, but in some applications, it can also be different gases, such as nitrogen. Most commonly, the liquid is water or oil. The pressure media during the calibration depends on the media that is used in the process that the gauge is connected to. Media also depends on the pressure range. Low pressure gauges are practical to calibrate with air/gas, but as the pressure range gets higher it is more practical and also safer to use liquid as the media.

Back to top ⇑

3 - Contamination

While installed in a process, the pressure gauge is used with a certain type of pressure media, and this should be taken into account when selecting the media for the calibration. You should not use a calibration media that could cause problems when the gauge is installed back into the process. The other way around, sometimes the process media could be harmful to your calibration equipment.

There can be dirt inside the gauge that can get into the calibration equipment and cause harm. With gas-operated gauges, you can use a dirt/moisture trap, but a liquid-operated gauge should be cleaned prior to calibration.

One of the most extreme process situations is when the gauge is used to measure the pressure of oxygen. If any grease gets into a high-pressure oxygen system during the calibration of the gauge, it can be very dangerous and could cause an explosion.

Back to top ⇑

4 - Height difference

If the calibration equipment and the gauge to be calibrated are at different heights, the hydrostatic pressure of the pressure media in the piping can cause errors. This is normally not an issue when gas is used as the media, as gas is light compared to liquid. But when liquid is used as the media, the weight of the liquid in the piping creates a hydrostatic pressure that can cause errors. The magnitude of the error depends on the density of the liquid and the difference in height, as gravity pulls the liquid inside the tubing. If it is not possible to have the calibrator and the gauge at the same height, the effect of the height difference should be calculated and taken into account during the calibration.

An example of effect of hydrostatic pressure:

Hydrostatic pressure is calculated as follows:

Ph = ρ g h

Where:

Ph = hydrostatic pressure
ρ = density of liquid (kg/m3)
g = local gravity (m/s2)
h = height difference (m)

As an example: if water is the media (density 997.56 kg/m3), local gravity is 9.8 m/s2 and there is a 1 meter (3.3 feet) height difference between the DUT and the reference equipment, this will cause an error of 9.8 kPa (98 mbar or 1.42 psi).

Note that depending on the pressure to be measured, the error caused by the height difference can be significant.
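
The correction itself is easy to calculate. Below is a minimal Python sketch of the example above; the density and gravity figures are the ones used in the text and should be replaced with your actual local values:

```python
def hydrostatic_pressure_pa(density_kg_m3: float, gravity_m_s2: float, height_m: float) -> float:
    """Hydrostatic pressure Ph = rho * g * h, returned in pascals."""
    return density_kg_m3 * gravity_m_s2 * height_m

# Water as the pressure media, 1 m height difference between the DUT and the reference.
error_pa = hydrostatic_pressure_pa(997.56, 9.8, 1.0)
print(f"{error_pa:.0f} Pa  =  {error_pa / 1000:.2f} kPa  =  {error_pa / 100:.0f} mbar")
# -> about 9776 Pa, i.e. roughly 9.8 kPa (98 mbar), as in the example above.
```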

Back to top ⇑

5 - Leak test of piping

If there are any leaks in the piping during the calibration, unpredictable errors can occur. Therefore, a leak test should be done prior to calibration. The simplest leak test is to pressurize the system, let the pressure stabilize for some time, and monitor that the pressure does not drop too much. Some calibration systems (pressure controllers) may be able to maintain the pressure even if there is a leak, if a continuous controller is adjusting the pressure. In that case, it is difficult to see a leak, so the controller should be closed off to create a closed system for the leak test. The adiabatic effect should also always be taken into account in a closed system, especially with gas as the media, as explained in the next chapter.

Back to top ⇑


6 - Adiabatic effect

In a closed system with gas as the pressure media, the temperature of the gas affects the volume of the gas, which in turn affects the pressure.

When pressure is increased quickly, the temperature of the gas will rise, and this higher temperature makes the gas expand, resulting in a bigger volume and higher pressure. When the temperature starts to cool down, the volume of the gas becomes smaller and this will cause the pressure to drop. This pressure drop may look like a leak in the system, but it is actually caused by the adiabatic effect due to the change in gas temperature. The faster the pressure is changed, the bigger the effect is. The pressure change caused by this effect will gradually get smaller as the temperature stabilizes.

So, if you change the pressure quickly, make sure you let it stabilize for a while before judging that there is a leak in the system.

Back to top ⇑

7 – Torque force

Especially for torque sensitive gauges, don’t use excessive force when connecting pressure connectors to the gauge, as it may damage the gauge. Follow manufacturer’s instructions for the allowed torque force. Take the time to use proper tools, appropriate adapters and seals.

Back to top ⇑

8 - Calibration / mounting position

Because pressure gauges are mechanical instruments, their position will affect the reading. Therefore, it is recommended to calibrate the gauge in the same position as it is used in the process. The manufacturer’s specifications for the operating/mounting position should also be taken into account.

A typical specification for a mounting position is that a change of 5 degrees in position should not change the gauge indication more than half (0.5 times) of the accuracy class.

Back to top ⇑

9 - Generating pressure

To calibrate a pressure gauge, you need to source the pressure applied to the gauge.

There are different ways to do that: you can use a pressure hand pump, a pressure regulator with a bottle, or even a dead weight tester. A dead weight tester provides a very accurate pressure and you don’t need a separate calibrator to measure the pressure, but a dead weight tester is expensive, not very mobile, requires a lot of attention to use, and is sensitive to dirt.

It is more common to use a pressure calibration hand pump to generate pressure and an accurate pressure measurement device (calibrator) to measure the pressure. A pressure controller can also be used to supply the pressure.

Back to top ⇑


10 - Pressurizing / exercising the gauge

Due to its mechanical structure, a pressure gauge will always have some friction in its movement and may change its behavior over time; therefore, you should exercise it before calibration. This is especially the case if the gauge has not been pressurized for a while. To exercise the gauge, supply the nominal maximum pressure and let it stand for a minute, then vent the pressure and wait a minute. Repeat this process two to three times before starting the actual calibration cycle.

Back to top ⇑

11 - Reading the pressure value (resolution)

The scale of an analog pressure gauge has limited readability. It has major and minor scale marks, but it is difficult to read the pressure value accurately when the indicator is between the scale marks. It is much easier to see when the needle is exactly at a scale mark. Therefore, it is recommended to adjust the input pressure so that the needle is exactly at a scale mark, and then record the corresponding input pressure. If you instead supply a certain accurate input pressure and then try to read the indicator, it will cause errors due to the limited reading accuracy.

Also, it is important to look at the indication perpendicular to the gauge scale. Many accurate gauges have a reflecting mirror along the scale, behind the needle pointer. This mirror helps you read it, and you should read it so that the mirror reflection of the needle is exactly behind the actual needle. Then you know that you are looking perpendicular/straight at the gauge.

Picture: The left gauge in the picture below is difficult to read accurately as the indicator is between scale marks, while the right one is easy to read since the applied pressure is adjusted so that the pointer is exactly on a scale mark:


Pressure gauge calibration - Beamex blog post

 


Picture: Many high-accuracy pressure gauges are provided with a mirror along the scale, which helps you view the gauge perpendicularly: read the gauge so that the mirror reflection of the pointer is hidden exactly behind the pointer itself:

 Pressure gauge calibration - Beamex blog post

 

Back to top ⇑

Remaining topics

To keep this blog post from becoming too long, please download the white paper and read all 20 topics there.

The remaining topics not covered here include:

12 - Number of calibration points
13 - Hysteresis (direction of calibration points)
14 - “Tapping” the gauge
15 - Number of calibration cycles (repeatability)
16 - Adjustment / correction
17 - Documentation – calibration certificate
18 - Environmental conditions
19 - Metrological traceability
20 - Uncertainty of calibration (TUR/TAR)

 

Please download related free white paper by clicking the below picture:

How to calibrate pressure gauges - Download White Paper

 

Related resources

Beamex products suitable for pressure calibration, including pressure gauge calibration: 
https://www.beamex.com/calibrators/pressure-calibrators/

An online tool for pressure unit conversion on Beamex web site: Pressure unit converter

 

Please check out our new electric pressure pump Beamex ePG:

Electric pressure calibration pump Beamex ePG

 



Back to top ⇑

Topics: Pressure calibration, pressure gauge

Data Integrity in Calibration Processes

Posted by Heikki Laurila on Mar 15, 2017

Calibration in pharmaceutical industry

Data Integrity in Calibration Processes - Beamex blog post 

What is Data Integrity? Why is it important and acute? What could a breach cause? What is ALCOA Plus? 

As a concept, data integrity is by no means a new one; it has been around for several decades. However, in this article we look at data integrity more from the calibration process point of view, and focus mainly on the pharmaceutical and regulated industries. First we take a look at data integrity generally: what it is, why it is important and what a breach could cause. The ALCOA Plus concept is also briefly discussed.

I remember in the early 90’s when we had pharmaceutical customers auditing us prior to a calibration software purchase, and data integrity was already then one of the normal topics discussed during such a supplier audit. So it is not a new topic, although it has become more acute recently.

Download the related free white paper by clicking the picture below: 

 Data Integrity in Calibration Processes

 

It’s all about trust

Often, when we buy an everyday product, we can quickly see if the product is operating properly or if it is faulty. For example, if you buy a new TV and turn it on, you can quickly see whether it is working or not. But with other products it is not so easy to see if you have a proper product. This is especially the case with medicines. When you pick up a medicine, how do you know that it is a product working properly according to its design specifications? In most cases you can’t tell, so it is all about trust – you must be able to trust that the medicine you take is a proper one.


What is Data Integrity?

Data integrity is fundamental in a pharmaceutical quality system ensuring that products are of the required quality.

In every process, there is a lot of data produced. Data integrity is the maintenance of, and the assurance of the accuracy and consistency of the data over its entire life-cycle. It is a critical aspect to the design, implementation and usage of any system which stores, processes, or retrieves data. The term Data Integrity is pretty widely used and has different meanings in different contexts. The term itself is pretty old and was initially used in computing. The integrity of the data collected and recorded by pharmaceutical manufacturers is critical to ensuring that high quality and safe products are produced. To ensure the integrity of data, it should be protected from accidental or intentional modifications, falsification and deletion.

With many processes in the process industry, you cannot just simply test the final product to see if it is a proper one. Instead you must assure that the conditions during the process are correct in order for it to produce the correct product. These critical conditions must naturally be recorded and maintained to assure they were correct. This is certainly the case in many processes in a pharmaceutical plant.


Why is Data Integrity important at the moment?

Data integrity has recently risen to an even more important topic than before.

Data integrity related violations have led to several regulatory actions such as warning letters and import alerts. Actually, a large number of the recent warning letters issued by the FDA are in some way related to data integrity.
As international regulatory agencies put more focus on data integrity, FDA, WHO and MHRA auditors have been trained to better recognize data integrity issues.

MHRA (Medicines & Healthcare products Regulatory Agency in the UK) has recently released a new guide, “GMP Data Integrity Definitions and Guidance for Industry” (March 2015), with a deadline set for pharmaceutical companies to comply by the end of 2017. Also, the FDA has released “Data Integrity and Compliance With CGMP - Guidance for Industry” (April 2016). This is still a draft but has been through comment rounds. Both of these will naturally have an effect on the pharmaceutical industry. Of course, there has already been guidance on current good manufacturing practice (CGMP), such as 21 CFR Parts 210, 211 and 212, that discusses data integrity related issues, but these new regulation updates will raise the focus.

One additional reason why more focus has been put to data integrity is the increase of the use of mobile devices in calibration processes. This includes applications used in tablets and mobile phones. It also includes the increase of the use of documenting calibrators, which automatically store the calibration results in their memory during a calibration and transfer this data to calibration software. Since the use of automated documenting calibrators will improve the business case of a calibration system, they are being more widely used. To learn more on what a documenting calibrator is and how it benefits the calibration process, please check the earlier blog post: What is a documenting calibrator and how do you benefit from using one?

As a result of all this, data integrity is becoming more and more acute.


Impacts of breach of data integrity 

The impact of a breach of data integrity can be viewed as the impact on the customer and the impact on the pharmaceutical company.

For the customer, the impact can be that the medicine does not have the required effect, patient safety can be compromised, and in the worst case it can even cause loss of life.

For the pharmaceutical company, the impact can include a warning letter from the FDA, a ban on the license to produce, a damaged reputation, loss of customer confidence, reduction of market share, and a drop in share price.
So the impacts of a breach of data integrity can be huge.


Accidental / intentional

A breach of data integrity may be accidental or intentional. Often there are computerized systems involved in handling the data, and the users may not be aware of any issues in such systems. Certainly the majority of data integrity issues are accidental and non-intentional. However, looking at some of the FDA warning letters, we can see that in the very worst cases there has even been intentional falsification of records.

 

Main steps towards better data integrity

Many pharmaceutical companies seem to agree that the main steps towards better data integrity are:

  • Better education and communication
  • Detection and mitigation of risks
  • Focus on technology and IT systems
  • Governance of data integrity

Validation is also a must for any computerized system in the pharmaceutical industry. It is good to remember that ANSI defines systems as: people, machines and the methods organized to perform specific functions. So it is not only the computer system that needs to be validated.

 

ALCOA and ALCOA plus

ALCOA Plus in Data Integrity - Beamex blog post 


The acronym ALCOA has been around since the 1990’s, being used by regulated industries as a framework for ensuring data integrity, and is key to good documentation practice (GDP). ALCOA relates to data, whether paper or electronic, and is defined by FDA guidance as:

  • Attributable
  • Legible
  • Contemporaneous
  • Original
  • Accurate

The newer ALCOA Plus adds a few attributes to the list: 

  • Complete
  • Consistent
  • Enduring
  • Available

For more detailed description of the ALCOA Plus, please take a look at the related free white paper.
 

What could cause data integrity issues?

Some practical and very general things that could cause data integrity issues in any systems are, for example: lack of training, user privileges, poor or shared passwords, control of a computerized system, incomplete data entry, and lack of audit data records for changes and modifications.

The first trap to avoid for consumers – fraud drugs

Although not really a data integrity issue for the industry, this is an important point for consumers. People are buying more online nowadays and you can also buy medicines from the internet, but unfortunately you don’t always get what you ordered. A huge number of medicines bought online are fraudulent. Sometimes the packaging is obviously inappropriate, so it becomes apparent that the medication is a fraud. Unfortunately, that is not always the case and people do, at times, consume fraudulent medicine. It is clear that fraudulent medication does not provide the expected cure, but it is also a big risk for our safety and, at its worst, it may even be lethal.


New regulation for product packaging to avoid fraud drugs

To better control fraudulent drugs, the European Medicines Agency (EMA) has recently introduced a new regulation that will require all prescription drug makers in all (but three) EU (European Union) countries to incorporate new safety features on their product packaging by February 2019. The regulation, which is part of a broader effort to combat falsified medicines in the EU, will require drug makers to add a unique identifier and an anti-tampering device to the packaging of most centrally authorized products. This naturally adds another burden and cost for the drug manufacturers, who must build the systems to support it, but it will certainly be beneficial for the customers. Although this specific regulation applies to the European Union area, it will have an effect globally.


Conclusion

Although the data integrity concept has existed for a long time, it has recently risen to be more acute due to the use of mobile tools and added focus of regulatory agencies. Although in the end, data integrity is pretty common sense - to assure the integrity of data throughout its life cycle - in practice with various systems and tools being used, it gets more complicated. Since the impacts of the breach of data integrity can be enormous, it is something that has a high priority.

In another blog post later on, I will take a look at more practical things, such as what can cause data integrity issues in a normal process calibration environment, whether it is a paper-based or paperless system, insourced or outsourced.

 

Data Integrity in Calibration Processes 

 

Related Beamex products

Using the Beamex MC6 documenting calibrator with the “Security Plus” optional feature ensures the integrity of the calibration data throughout the calibration process, when used in conjunction with the Beamex CMX calibration management software.

  

Related useful references

21 CFR Part 11, Electronic Records; Electronic Signatures:
http://www.fda.gov/RegulatoryInformation/Guidances/ucm125067.htm

MHRA GMP Data Integrity Definitions and Guidance for Industry, March 2015:
https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/412735/Data_integrity_definitions_and_guidance_v2.pdf

Data Integrity and Compliance with CGMP Guidance for Industry DRAFT GUIDANCE, April 2016:
http://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm495891.pdf

FDA warning letters are public and can be found here:
http://www.fda.gov/ICECI/EnforcementActions/WarningLetters/default.htm

European Medicines Agency (EMA), recent regulation for product packaging:
http://www.raps.org/Regulatory-Focus/News/2016/02/09/24281/EU-Regulation-Requires-New-Safety-Features-on-Drug-Packaging-by-2019/

 

Topics: Data Integrity

Temperature units and temperature unit conversion

Posted by Heikki Laurila on Feb 21, 2017

Temperature units and temperature unit conversion - Celsius, Fahrenheit, Kelvin - Beamex blog post 

Edit: The definition of kelvin has been edited after the 2019 redefinition of the SI system.

 

In this blog article, I will be discussing temperature, temperature scales, temperature units, and temperature unit conversions.

 

Explanation of the above picture

The above infographic illustrates a quick comparison between temperature units at a few common temperatures. Of course, room temperature varies and the human body temperature is not always 37 °C (98.6 °F). Also, the boiling point of water depends on air pressure and is not always exactly 100 °C. But a real Finnish sauna is always at least 100 °C (212 °F) when properly heated… :-)

A while ago I wrote a blog post about pressure units and I was thinking it would be a good idea to write a similar one on temperature units and their conversions. Let’s first take a short look at what temperature really is, then take a look at some of the most common temperature units, and finally the conversions between them.

 

Table of contents

 

You can find a Temperature Unit Converter on our website, click the below link to visit it:

Temperature Unit Converter

 

Download this article for free as a pdf file by clicking the below picture:

Temperature unit conversion - Beamex white paper

 

What is temperature?

Temperature is an intensive quantity and describes the energy state of matter. All materials have atoms and molecules that are in constant movement, vibrating or rotating. To simplify a difficult subject: the more they move, the higher the temperature of the material. The temperature of an object can be defined by the average kinetic energy of its atoms and molecules, a definition for temperature that we can understand relatively easily. The kelvin is the unit of a fundamental physical quantity called thermodynamic temperature (T), and since 2019 the kelvin has been defined using the Boltzmann constant.

 

So, what is hot and what is cold? 

It is all pretty relative, so these terms hot and cold are not very accurate or scientific. So we need a more specific way to indicate temperature. Several different temperature scales and units have been developed during the recent centuries. And since different scales have been used in different parts of the world, there are still several different scales in use. The actual specifications of some of the old temperature scales were not initially very accurate (such as a human’s body temperature), but later on, specific and accurate reference points and specifications were created.

For high temperatures, there is not really any limit, and it is possible to go to a very high temperature. For example, the temperature at the sun’s surface is 5800 kelvin, while the temperature inside the sun is up to 13.6 million kelvins.

But for the low end of temperature, there is a very specific limit, being the absolute zero temperature, which is the lowest possible temperature. Absolute zero is a theoretical state that possibly cannot ever be achieved. Theoretically, all the movement of atoms would cease almost completely, retaining only quantum mechanical zero-point energy. Absolute zero temperature equals 0 kelvin, -273.15 °Celsius or -459.67 °Fahrenheit. In outer space, the temperature is pretty cold and the average temperature of the universe is less than 3 kelvin.

But let’s next take a look at some of the most common temperature scales and units.

 

International temperature scales

Thermodynamic temperature is very difficult to measure and several international temperature scales for practical measurements have been published:

  • ITS-27; International Temperature Scale of 1927
  • IPTS-48; International Practical Temperature Scale of 1948
  • IPTS-68; International Practical Temperature Scale of 1968
  • ITS-90; International Temperature Scale of 1990

Some additional scales have also been used, for example, PLTS-2000 for improved measurements of very low temperatures in the range 0.9 mK...1 K (Provisional Low-Temperature Scale of 2000).

By international agreement, the current ITS-90 scale is based on the aforementioned thermodynamic temperature (T). The scale defines the methods for calibrating specified kinds of thermometers in a way that the results are precise and repeatable all over the world. Also, the numerical values are believed to be as close to the actual thermodynamic temperature (T) as possible at the time. The methods for realizing the ITS-90 temperature scale include fixed points and functions for interpolating the temperatures between the fixed values.

The fixed points in the ITS-90 scale are the following:

ITS90 fixed points Beamex

 

Temperature Units

 

The SI unit of temperature - Kelvin (K)

Kelvin is the base unit of temperature in the SI system (International System of Units). Kelvin unit’s abbreviation is K (no degree or degree sign). Kelvin unit was first presented by William Thomson (Lord Kelvin) in 1848.

The kelvin scale was initially developed by shifting the starting point of the Celsius scale to absolute zero. With the redefinition of the SI system in 2019, the kelvin scale is defined by fixing the numerical value of the Boltzmann constant to 1.380649×10−23 J⋅K−1.

The kelvin is often used in science and technology. It is, however, not used much in everyday life. The symbol for kelvin temperature in terms of ITS-90 is the upper-case letter T90.

 

Celsius (°C)

Celsius is currently a derived unit for temperature in the SI system, with the kelvin being the base unit. The abbreviation of Celsius is °C (degree Celsius) and the size of one degree Celsius is the same as one kelvin. The unit and the actual Celsius scale were first presented by the Swedish scientist Anders Celsius in 1742. The two main reference points of the Celsius scale were the freezing point of water (or melting point of ice), defined as 0 °C, and the boiling point of water, defined as 100 °C.

The melting point of ice is a relatively accurate specification (assuming you have purified ice and it is properly stirred), but the boiling temperature of the water is not such an accurate temperature in practice as the boiling temperature depends a lot on the atmospheric pressure. As Celsius is an SI unit derived from Kelvin, it’s also linked to ITS-90 and its symbol is the lower case letter t90. In terms of ITS-90, the melting point of the ice is slightly below 0 °C and the boiling point of the water at the normal atmospheric pressure is approximately 99.974 °C.

The size of a degree Celsius equals the size of degree kelvin and 0 K equals -273.15 °C.

The Celsius unit is better suited for everyday use than the kelvin and is very popular globally, although not so much used in the USA. A degree Celsius is sometimes also called a degree centigrade.

 

Fahrenheit (°F)

Fahrenheit unit’s abbreviation is °F. The Fahrenheit scale was first introduced by Daniel Gabriel Fahrenheit in 1724. The two main reference points of the scale are the freezing point of water, specified as 32 °F, and the temperature of the human body, specified as 96 °F.

In practice, it is easy to see that the temperature of a human body is not a very precise definition.

Nowadays the Fahrenheit scale is redefined in a way that the melting point of ice is exactly 32 °F and the boiling point of water exactly 212 °F. The temperature of the human body is about 98 °F on the revised scale.

In many areas, Fahrenheit has been replaced with Celsius as a temperature unit, but Fahrenheit is still in use in the USA, in the Caribbean, and also in parallel use with Celsius in Australia and in the UK.

 

Rankine (°R, °Ra)

The Rankine scale is abbreviated as °R or °Ra. The Rankine scale was presented by the Scottish physicist William Rankine in 1859, a few years after the Kelvin scale. The reference point of the Rankine scale is absolute zero, which is 0 °R, as in the Kelvin scale.

The size of one Rankine degree is the same as the size of one Fahrenheit degree, but as mentioned, the zero point is very different. 

The freezing point of water equals 491.67 °Rankine.

Rankine is not a widely used scale. It was used in some fields of technology in the USA, but NIST does not recommend the use of Rankine anymore.

 

Réaumur (°Ré, °Re)

The Réaumur scale was introduced by René de Réaumur in 1730. Its reference points are the freezing point of water at 0 °Ré and the boiling point of water at 80 °Ré.

The Réaumur scale was used in some parts of Europe and Russia, but it has mainly disappeared during the last century.

  

Conversions between temperature units

The table below provides calculation formulas for converting temperature readings from one unit to another unit.

Temperature unit conversion table Beamex 
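
If you prefer to calculate rather than read the table, here is a small Python sketch implementing the most common conversions; it is only an illustration of the formulas, not the Beamex converter itself:

```python
def celsius_to_kelvin(t_c: float) -> float:
    return t_c + 273.15

def celsius_to_fahrenheit(t_c: float) -> float:
    return t_c * 9.0 / 5.0 + 32.0

def fahrenheit_to_celsius(t_f: float) -> float:
    return (t_f - 32.0) * 5.0 / 9.0

def kelvin_to_rankine(t_k: float) -> float:
    return t_k * 9.0 / 5.0

def celsius_to_reaumur(t_c: float) -> float:
    return t_c * 0.8

# Example: 100 degrees Celsius expressed in the other units.
t_c = 100.0
print(celsius_to_kelvin(t_c))                      # 373.15 K
print(celsius_to_fahrenheit(t_c))                  # 212.0 degF
print(kelvin_to_rankine(celsius_to_kelvin(t_c)))   # 671.67 degR
print(celsius_to_reaumur(t_c))                     # 80.0 degRe
```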

Cgs unit of temperature

The abbreviation “cgs” comes from the words “centimetre-gram-second”. As these words hint, the cgs system is a variation of the metric system, but instead of using the meter it uses centimeter as the unit for length, and instead of kilogram it uses gram as the unit for mass. Different cgs mechanical units are derived from using these cgs base units.

The cgs is a pretty old system and has been mostly replaced first by the MKS (meter-kilogram-second) system, which then has been replaced by the SI system. Yet, you can still sometimes run into some cgs units.

The cgs system does not have its own temperature units, so it uses the same temperature units: Kelvin, Centigrade, and Fahrenheit.

 

Temperature unit converter

I know that the above-presented conversion table may not be the easiest one to use…

We developed a free and easy-to-use temperature unit converter on our website that converts between the above listed 5 different temperature units. Hopefully, you will find this converter helpful.

Temperature Unit Converter

 

Below is an example screenshot from the temperature unit converter, when converting 100 °C to other units:

temperature-unit-conversion-table

 

Download this article for free as a pdf file by clicking the below picture:

Temperature unit conversion - Beamex white paper

 

Beamex temperature calibration products

Please take a look at the new Beamex MC6-T temperature calibrator. Click the below picture for more information:

Beamex MC6-T temperature calibrator

 

Please take a look at other temperature calibrators and temperature calibration products Beamex offers on our website: 

Beamex Temperature Calibration Products

 

Temperature Calibration eLearning

Free eLearning course on industrial temperature calibration.

Master temperature calibration with this free comprehensive eLearning course from Beamex. Deepen your knowledge, pass the quiz, and earn your certificate!

Read more and enroll >

 

Temperature Sensor Calculator

A free tool to easily convert between temperature and electrical signals for thermocouples and RTD sensors. 

https://www.beamex.com/resources/temperature-sensor-calculator/

 

I hope you found this post useful.

 

 

Topics: Temperature calibration

Pressure units and pressure unit conversion

Posted by Heikki Laurila on Feb 08, 2017

Pressure units and pressure unit conversion - Beamex blog post

It’s a jungle out there!

There are a lot of different pressure units in use around the world and sometimes this can be very confusing and may cause dangerous misunderstandings.

In this blog post, I will discuss the basics of different pressure units and different pressure unit families.

If you just want to convert between different pressure units, please check out our handy free pressure unit converter

 

Table of contents

 

Now that we are talking about pressure, let's start with a video on how to calibrate a pressure transmitter:

 

What is Pressure?

When I talk about pressure in this post, it does not refer to the stress you may be suffering in your work, but to the physical quantity. It is good to first take a quick look at the definition of pressure, this will also help to better understand some of the pressure units.

If you remember the studies of physics in school … as most of us don’t remember… a short reminder is in order: pressure is defined as force per area perpendicular to the surface. That is often presented as formula p = F/A. The pressure is indicated with the letter “p”, although the capital letter “P” can also be seen being used on some occasions.

So what does this force per area mean in practice? It means that there is a certain force affecting a specified area. When we look at force, it is specified as being Mass x Gravity. As there are so many different engineering units used for both mass and area, the number of combinations of these is huge. Plus there are also a lot of pressure units that do not directly have the mass and area in their names, although it often is in their definition.

It is good to notice that in practice the “force” is not always included in the pressure unit names. For example, the pressure unit kilogram-force per square centimeter should be indicated as kgf/cm², but often it is indicated just as kg/cm² without the “f”. Similarly, pound-force per square inch (lbf/in²) is normally indicated as pounds per square inch (psi).

 

Download this article as a free pdf by clicking the picture below:

CTA-Pressure-units

 

International System of Units (SI system) / Metric

Let’s start to look at the pressure units by looking at the SI system, which is the International System of Units, derived from the metric system. Now that I mentioned the metric system, I can already see some of you taking a step back… but please stay with me!

The SI system is the world’s most widely used system of measurement. It was published in 1960 but has a very long history even before that.

 

SI unit of pressure

Pascal (Pa) is the SI unit of pressure and is the basic pressure unit in the SI (the International System of Units) system.

 

 

What is Pascal? 

Pascal (Pa) is the standard unit of pressure used to measure the amount of force exerted per unit area. It is named after Blaise Pascal, a French mathematician, and physicist. One pascal is a relatively small amount of pressure, and it is equivalent to the force of one newton per square meter. However, for practical purposes, gas pressures are often measured in kilopascals (kPa), which is equal to 1000 pascals.

 

The pascal is not that old a unit in the SI system, as it was adopted in 1971.

Pascal is a so-called “SI derived unit” of pressure, as it is derived from the base units specified in the SI system. 

The definition of one Pascal is the pressure of one Newton per square meter. 
A newton is the force needed to accelerate a mass of one kilogram at a rate of one meter per second squared. 

So, we can say one newton is one kg·m/s².

To say Pascal (Pa) in a formula:

Pascal-formula.jpg

 

Pascal is a very small pressure unit and is often used with different prefixes. Common multiples include hectopascal (1 hPa = 100 Pa), kilopascal (1 kPa = 1 000 Pa) and megapascal (1 MPa = 1 000 000 Pa).

The standard atmospheric pressure is over one hundred thousand pascals; to be exact, it is 101 325 Pa, or 101.325 kPa.
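
As a quick sanity check of these multiples, the short sketch below expresses the standard atmosphere in the most common prefixed forms of the pascal (plain arithmetic, for illustration only):

```python
atm_pa = 101_325.0            # standard atmospheric pressure in pascals

print(atm_pa / 100)           # hectopascals -> 1013.25 hPa
print(atm_pa / 1_000)         # kilopascals  -> 101.325 kPa
print(atm_pa / 1_000_000)     # megapascals  -> 0.101325 MPa
```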

In our (Beamex) accredited calibration laboratory we document all pressure calibrations in Pascals in calibration certificates (with suitable multiples).

We commonly see the pascal being used in cleanroom differential pressure transmitters or gauges, as the pressure difference between rooms/areas is commonly only tens of pascals. 
Calibrating these very low-pressure instruments is a challenge of its own.

 

Starting from the pascal’s definition, the kilogram-force can be replaced with different units, like gram-force, and the meter can be replaced with centimeters or millimeters. By doing that, we get many other pressure unit combinations, such as kgf/m², gf/m², kgf/cm², gf/cm², kgf/mm², gf/mm², just to list a few.

The unit “bar” is still often used in some areas. It is based on the metric system but is not part of the SI system. Since one bar equals 100 000 pascals (100 kPa), it is nevertheless easy to convert. In some areas (like NIST in the USA), the use of the bar is not recommended.

And as for all pressure units, SI or not, we can use the common prefixes/coefficients in front of them; the most commonly used are milli (1/1000), centi (1/100), hecto (100), kilo (1 000) and mega (1 000 000). To list a few examples, that already gives us different Pa versions, all commonly used: Pa, kPa, hPa, MPa. The unit bar is most commonly used without a prefix or with the prefix milli: bar, mbar.

But taking all mass units and combining those with all area units from the SI system, we get many combinations.

Although the SI system is used in most countries, there are still a lot of other pressure units also being used. So let’s take a look at those next.

 

Imperial units

In countries using the Imperial system (like the USA and UK), the engineering units used for both mass and area are different than in the SI system. Therefore this also creates a whole new set of pressure units. Mass is commonly measured in pounds or ounces, and area and distance in inches or feet.
So some pressure units derived from these are lbf/ft², psi, ozf/in², iwc, inH2O and ftH2O.

In the United States, the most common pressure unit is pounds per square inch (psi). For process industries, a common unit is also inches of water (inH2O), which is derived from level measurement and the historical measurements of pressure differences with water in a column.

 

Liquid column units

The older pressure measurement devices were often made by using liquid in a transparent U-tube. If the pressure at both ends of the tube is the same, the liquid level on both sides is the same. But if there is a difference in the pressures, there is a difference in the liquid levels. The level difference is linearly proportional to the pressure difference. In practice, you can leave one side of the tube open to the room’s atmospheric pressure and connect the pressure to be measured to the other side. Since the measurement is referenced to the current atmospheric pressure, it is gauge pressure that is being measured.

Pressure-measurement-u-tube.jpg

The pressure scale is marked in the tube so you read the pressure by reading the difference in liquid levels. When pressure is applied it will change the liquid level and we can read the value. This sounds very simple, no electronics and no wearing parts, so what could possibly go wrong… well, let’s see about that.

The most commonly used liquid in the column was obviously water. But in order to be able to measure higher pressure with a smaller U-tube, heavier liquids were needed. One such liquid is mercury (Hg), as it is much heavier than water (13.6 times heavier). When you use the heavier liquid you don’t need as long a column to measure higher pressure, so you can make a smaller and more convenient size column. For example, blood pressure was earlier (and still sometimes is) measured with a mercury column. Mercury is mainly used because a water column for the same pressure range would be so long that it would not be practical to use in a normal room, as the water column is about 13.6 times longer than the mercury column. As a result, even today blood pressure is typically expressed in millimeters of mercury (mmHg).

A common industrial application for use of liquid column pressure units is to measure the liquid level in a tank. For example, if you have a water tank that is 20 feet (or 6 meters) high and you want to measure the water level in that tank, it sounds pretty logical to install a pressure indicator with a scale 0 to 20 feet of water, as that would tell straight what the water level is (13 feet in the example picture).

Pressure-measurement-liquid-level-tank.jpg

 

Back to the water column: when the length indication was marked on a U-tube column, many different length units were used, both metric and non-metric. This has generated many different pressure units.

Although a liquid column sounds very simple, it is important to remember that the weight of the liquid depends on the local gravity, so if you calibrate the column in one place and take it to another (distant, different elevation) place, it may not be measuring correctly anymore. So gravity correction is needed to be precise.

Also, the temperature of the liquid affects the density of the liquid and that also affects slightly the readings of a U-tube. There are various different liquid column-based pressure units available, having the liquid temperature specified in the pressure unit, most commonly used temperatures are 0 °C, 4 °C, 60 °F, 68 °F. But there are also water column units, which have no indication of the water temperature. These are based on a theoretical density of water, being 1 kg/1 liter (ISO31-3, BS350). In practice, the water never has that high density. The highest density that water has is at +4 °C (39.2 °F) where it is approximately 0.999972 kg/liter. The density of water gets lower if the temperature is higher or lower than +4 °C. Temperature can have a pretty strong effect on the density, for example going from +4 °C to +30 °C changes the water density by about 0.4%.

Finally, the readability of a mechanical liquid column is typically pretty limited, so you can’t get very accurate measurements. And due to the mechanical limitations, you can’t use a U-tube for high pressure. All of these above-mentioned issues make a U-tube liquid column not very practical to use. Also, modern digital pressure measurement devices have replaced liquid columns. But many of the pressure units created in the era of liquid columns have remained and are still used today. To shortly summarize the liquid column-based pressure units:

• For the length we have many units: mm, cm, m, inches and feet.
• Then we have columns of different liquids, such as water (H2O) and mercury (Hg).
• We have water column units for densities at different temperatures, such as 0 °C, 4 °C, 60 °F and 68 °F, as well as for the theoretical density.

By combining all of these, we get a long list of pressure units. Just to mention a few: mmH2O, cmH2O, mH2O, mmHg, cmHg, mHg, iwc, inH2O, ftH2O, inHg, mmH2O@4°C, mmH2O@60°F, mmH2O@68°F, cmH2O@4°C, cmH2O@60°F, cmH2O@68°F, inH2O@4°C, inH2O@60°F, inH2O@68°F, ftH2O@4°C, ftH2O@60°F, ftH2O@68°F and so on.

 

Atmospheric units

Dedicated pressure units have been created for measuring the absolute pressure of the atmosphere. One of them is the standard atmosphere (atm), which is defined as 101 325 pascals. To add confusion, there is also the technical atmosphere (at), which is close to, but not quite the same as, the atm. The technical atmosphere is one kilogram-force per square centimeter, so 1 at equals about 0.968 atm.

Another pressure unit used for measuring atmospheric absolute pressure is the torr, defined as 1/760 of the standard atmosphere. The torr is therefore an absolute pressure unit, although this is typically not mentioned; you just need to know it, which can cause confusion. The torr was originally meant to be the same as 1 millimeter of mercury, although later definitions show a very small difference between the two. The torr is not part of the SI system.

 

Cgs unit of pressure

The abbreviation “cgs” comes from the words “centimetre-gram-second”. As these words hint, the cgs system is a variation of the metric system, but it uses the centimeter instead of the meter as the unit of length and the gram instead of the kilogram as the unit of mass.
Different cgs mechanical units are derived from these cgs base units.

The cgs is a pretty old system. It was largely replaced first by the MKS (meter-kilogram-second) system, which in turn has been replaced by the SI system. Yet, you can still sometimes run into cgs units of pressure.

The cgs base unit of pressure is the barye (Ba), which equals 1 dyne per square centimeter.

The dyne is the force needed to accelerate a mass of one gram at a rate of one centimeter per second squared.

As a conversion, 1 barye (Ba) equals 0.1 pascal (Pa).

 

And some more…

In addition to all the above pressure units, there are still plenty more in use. Just to mention one example: a Beamex MC6 calibrator offers over 40 different pressure units, plus a few custom units for the thrill-seekers.

 

Pressure unit conversions standards

If you work with pressure, you know that it is very common for a pressure to be indicated in one pressure unit while you need to convert it into another.

Pressure units are based on standards, and the conversions between units should also be based on standards. The most common standards for pressure units are:

  • SI system
  • ISO31-3
  • ISO 80000-4:2006
  • BS350
  • PTB-Mitteilungen 100 3/90
  • Perry’s Chemical Engineers’ Handbook, 6th ed, 1984

 

Pressure unit converter tool

I tried to make a conversion table between different pressure units, but the table quickly started to become a huge matrix that would not be easy to use at all. So instead of making a conversion table, we developed an online pressure unit converter for our website. With the converter you can easily convert a pressure reading from one unit into other units. Please click on the link to check out the pressure unit converter.

Below is an example screenshot from the pressure unit converter, when converting 1 MPa into other units:

 

Pressure-unit-conversion-table
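The online converter is the easy way, but if you want to do a quick conversion in your own code, one minimal approach (not the implementation of our tool) is to map every unit to pascals and convert through that common base. The factors below are rounded and only for illustration; for exact values, refer to the standards listed above.

```python
# Minimal sketch of a pressure unit conversion: map every unit to pascals and
# convert through that common base. Factors are rounded, not authoritative.

PA_PER_UNIT = {
    "Pa": 1.0,
    "kPa": 1_000.0,
    "MPa": 1_000_000.0,
    "bar": 100_000.0,            # exact
    "psi": 6_894.757,            # approximate
    "atm": 101_325.0,            # exact, standard atmosphere
    "torr": 101_325.0 / 760.0,   # exact by definition
    "kgf/cm2": 98_066.5,         # technical atmosphere, exact
    "mmHg": 133.322,             # approximate, conventional mercury column
    "inH2O": 249.089,            # approximate, theoretical-density water column
}

def convert(value, from_unit, to_unit):
    """Convert a pressure value from one unit to another via pascals."""
    return value * PA_PER_UNIT[from_unit] / PA_PER_UNIT[to_unit]

print(f"1 MPa = {convert(1, 'MPa', 'psi'):.1f} psi")  # about 145.0 psi
print(f"1 MPa = {convert(1, 'MPa', 'bar'):.2f} bar")  # 10.00 bar
```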

 

Interested in printing this text or sharing it with your peers?

Download this full article as a free White Paper from the picture link below:

CTA-Pressure-units

 

Beamex pressure calibration tools

Click on this link to learn more about the Beamex pressure calibration tools.

 

Please check out our new electric pressure pump Beamex ePG:

Electric pressure calibration pump Beamex ePG

 

 

Topics: Pressure calibration

Calibration Out of Tolerance – Part 2

Posted by Heikki Laurila on Feb 02, 2017

In this post I continue on the topic of calibration being “Out of Tolerance” (OoT) or “Failed”.

“Out of Tolerance” (OoT) means that during a calibration some calibration point(s) failed to meet the required tolerance level. This causes the result of the calibration to be out of tolerance or, as it is often also called, a failed calibration.

In the earlier blog post I already covered some of the items related to this subject:

  • What does “Out of Tolerance” mean?
  • What is the tolerance level used?
  • How was the calibration found to be out of tolerance?
  • How critical is it?

If you missed the first part, please check the blog post here.
I have also made a white paper on this subject and you can download it from the below link.

New Call-to-action

But now, let’s continue with the remaining topics.

When did it happen?

An out of tolerance situation is naturally noticed during a calibration, but that is not the moment when the instrument actually went out of tolerance, or started to measure incorrectly. So when did it happen? It is important to determine this, because any measurements done after that moment are suspect. If a critical process measurement failed, any products produced after that moment are affected and, in the worst case, may need to be recalled.

It is not an easy task to determine the moment when the instrument went out of tolerance. Checking the previous calibration data and confirming that the instrument was then left in an acceptable condition is a good place to start. However, if there are no records between the previous good calibration and the new failed calibration, you need to question everything done in between. You can study the measurement results and any relevant data between the OoT and the previous good calibration to see if there is anything that would indicate when the instrument drifted out of specification. This may be, for example, a sudden rise in reported changes or issues in the process or, in the case of a calibrator, a point after which more failed calibrations started to appear. You may also analyze the history of that instrument to see if it shows any typical drift, and possibly interpolate the data to find the most likely moment when it went out of tolerance. It can anyway be very difficult to determine the actual moment when an instrument failed to meet its tolerance. It may be that there is no option but to assume that all calibrations done after the previous successful calibration are affected and suspect.
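As a purely hypothetical illustration of the interpolation idea mentioned above, the sketch below assumes the drift was linear between the last good calibration and the failed one and estimates when the error may have crossed the tolerance limit. Real drift is rarely linear, so the result is only a rough starting point for the investigation, not evidence.

```python
# Hypothetical sketch: estimate when the error may have crossed the tolerance
# limit, assuming linear drift between two calibrations. Dates and errors are
# made up; real drift is rarely linear.

from datetime import date, timedelta

def estimate_crossing_date(prev_date, prev_error, curr_date, curr_error, tolerance):
    """Linearly interpolate the date when the error reached the tolerance limit."""
    span_days = (curr_date - prev_date).days
    fraction = (tolerance - prev_error) / (curr_error - prev_error)
    return prev_date + timedelta(days=round(span_days * fraction))

# Last calibration left the error at 0.1 %, the failed one found 0.7 %, tolerance is 0.5 %
crossed = estimate_crossing_date(date(2016, 2, 1), 0.1, date(2017, 2, 1), 0.7, 0.5)
print(f"Error likely exceeded tolerance around {crossed}")  # roughly October 2016
```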

 

Impact analysis - what are the consequences?

Once we know that the out of tolerance really happened and we have analyzed how much it was and have an idea when it had occurred, the next step is to evaluate the impact. You need to find out where this failed instrument has been used and what measurements are suspect.

In the case of a process transmitter installed in one location, it is obvious where it has been used, but for portable measuring equipment or a portable calibrator the situation is different. One powerful option available in some calibration management programs is a “reverse traceability” report. This kind of report lists all the calibrations where a specific instrument has been used over a certain time period. The report is most helpful when you need to analyze, for example, where a portable calibrator has been used. If you do not have an automated reverse traceability report and need to go through calibration records manually to see where a certain calibrator was used, it may take many hours of work to complete. However you do it, it needs to be done.

In the case of a process instrument being out of tolerance, you need to have your process specialists analyze the impact of the failure on the actual process and on your end product. In the best case, the effect on the process measurement is so small that it does not cause any significant damage. In the worst case, the analysis tells you that the effects on the process, and on the products being produced, are so big that the products do not meet their specifications, and then the costs can be huge. In many processes, the quality of the end product cannot simply be tested in the final product; the process conditions must be correct during manufacturing. If this involves, for example, food or medicine, or the heat treatment of critical aerospace or automobile parts, then you are obligated to inform your clients and customers, or even withdraw products from the market. A product withdrawal is a dramatic consequence; it will get you into the news, it will be very expensive, and it will have a negative effect on your company’s brand, reputation and stock value.

In the case of a process calibrator that fails to meet its tolerance, you need to evaluate how much the failure affected all the measurements made with it since its last known good calibration. In many cases the calibrator is significantly more accurate than the process instruments it is used to calibrate, so there is some safety margin. In the best case, even if the calibrator failed its recalibration, the failure is so small that it does not have a significant effect on the calibrations that have been done with it. But in the worst case, all of the calibration work done with that calibrator is suspect, and you need to analyze the effect on each process measurement that was calibrated with it. As previously mentioned, this can be a really big task, as the analysis has to cover all the affected process measurements.

 

Quality assurance considerations

You may have heard your quality professionals talking about CAPA, an abbreviation of Corrective Actions and Preventive Actions. This is something that is stipulated by most quality standards, such as the very common ISO 9001 quality standard as well as ISO/IEC 17025 used in accredited calibration laboratories. Corrective actions are obviously the actions you take to correct the situation, while preventive actions are all the actions you take to prevent the same situation from happening again in the future. It is important to review the effectiveness of the corrective and preventive actions. Also, all other similar instances should be considered to see if there is any possibility for similar occurrences elsewhere. Quality standards also require that these processes are documented and that responsibilities are specified.

A root cause analysis is typically required by quality standards to find out what caused the OoT to occur. A risk analysis, or more generally risk-based thinking, is required by modern quality system standards. Continuous improvement is also a common quality requirement: it ensures that you continuously improve your quality system and learn from any mistakes, so that problems do not happen again.

Many companies, especially in regulated industries, use some form of “deviation management software” in which all OoT calibration cases are recorded in order to control and document the process of handling these cases.

 

Summary

To summarize the key points of these two blog posts and the related white paper: if you get an out of tolerance calibration, you need to do the following:

  • Verify what tolerance level was used and that it is the correct level.
  • Verify the uncertainty used in making any decision that a measurement is out of tolerance, and that the uncertainty is appropriate.
  • Assess how critical the out of tolerance observation is.
  • Determine where in the traceability chain it occurred.
  • Determine when it occurred.
  • Make an impact analysis to find out what the consequences are.
  • Address the relevant quality assurance considerations.

New Call-to-action

Topics: Calibration tolerance

Metrological Traceability in Calibration – Are you traceable?

Posted by Heikki Laurila on Jan 18, 2017

What is metrological traceability in calibration and how can you be traceable?

Metrological Traceability in Calibration – Are you traceable?

In calibration, metrological traceability is a fundamental consideration. If the calibrations you perform in your plant are not traceable, you don’t know whether they are correct or not, and therefore there is really no point in doing them.

In practice you see terms such as “Calibration Traceability”, “Measurement Traceability”, or sometimes just the word “traceability” being used, although it is formally most correct to talk about “Metrological Traceability”. Using just the word “traceability” may cause confusion, as it also relates to many other contexts such as material traceability, document traceability, requirement traceability matrices, etc.

In the USA, “NIST traceability” is probably the most often used term. NIST (the National Institute of Standards and Technology) has adopted the VIM’s (International Vocabulary of Metrology) international definition of metrological traceability, which is explained below.

Let’s first take a look at the formal definition of metrological traceability and then discuss what you need to do in order to claim that the calibrations in your plant are traceable.

 

Download this article as a free pdf by clicking the picture below:

CTA-WP-traceability-720x300px_v1.png

 

 

Formal definition of traceability

The formal definition of metrological traceability:

Property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty.

This definition is based on the official definition in the standards listed in the “References” chapter at the end of the downloadable white paper.

That definition sure has many fancy words, so I want to break it down to a level that is more practical and easier to understand.

Traceable-pyramid_1500px_v1.jpg

Picture 1. Metrological traceability in calibration can be presented as a pyramid. The pyramid illustrates how the different levels of traceability relate to each other. As all your process instruments sit at the lowest level, their traceability depends on all the levels above.

 

Calibration/Metrological traceability chain in practice

Let’s take a look at what metrological traceability and the traceability chain mean in practice in a typical process plant, looking from the bottom to the top:

• In your plant, you have many process instruments, such as transmitters, that are calibrated regularly using your process calibrator, or a similar measurement standard.

• The process calibrator is typically sent out to an external calibration laboratory for calibration, assuming it is the highest-level reference standard in your plant. Alternatively, the process calibrator may be calibrated internally in the plant using a higher-level reference standard.

• The highest-level reference standard(s) of your plant are sent out to an external calibration laboratory, preferably an accredited one, to be calibrated.

• The external calibration laboratory calibrates its references so that they are traceable to a National Calibration Laboratory, or similar.

• The National Calibration Laboratories work with international-level laboratories and make international comparisons with each other, assuring that their calibrations are on the same level.

• The international-level laboratories base their measurements on international comparisons, international definitions and the realization of the International System of Units (SI system).

Traceability-chain.jpg

Picture 2. Metrological traceability in calibration can also be presented as a chain. The chain illustrates pretty well the fact that everything hanging below a certain link is not traceable if that link is broken.

The higher you go in the chain, the smaller the uncertainty is, or put the other way, the better the accuracy is. The above simplified practical example shows how a process measurement that you make in your plant is traceable up to the international level through an unbroken chain of calibrations. The old worn-out saying “a chain is only as strong as its weakest link” is very much valid here. If any link in the chain is missing (or overdue), all measurements below that level have no traceability and are suspect.

There are conditions that need to be met before you can say that your process measurements are traceable; more on that in the next chapters.

 

When can you claim that your measurement is traceable?

Timely calibrations
All the calibrations in the traceability chain have to be done at regular intervals. It is not enough to have your reference standard calibrated once and then keep using it for years without recalibration. The calibration of any measurement device remains valid only for a stated period of time. Therefore, the traceability expires when the calibration expires.

Every step needs to be documented
Every calibration in the traceability chain needs to be documented. This of course means that the calibration results are documented in a calibration certificate, but also that the calibration is performed according to a written procedure that is part of the company’s quality system. It is pretty clear that a calibration without a calibration certificate is not a proper calibration, and certainly not a traceable one. It is also good to remember that if the calibration is done without documented procedures, in an environment without a quality system, the calibration is not reliable and cannot be proven to be traceable.

Every step needs to include measurement uncertainty
As the definition says, it is also important that every calibration step in the traceability chain has the related measurement uncertainty documented.
If the uncertainty information is missing from a calibration, you can’t claim it is traceable. The main reason is that without knowing and documenting the uncertainty, you could end up calibrating an accurate piece of measurement equipment against one that is less accurate. Or the calibration procedure itself could cause such a big uncertainty that the calibration is neither good nor traceable.

 

Calibrations inside your plant

Typically, a process plant’s internal calibration activities are not accredited, meaning that they cannot produce an accredited calibration certificate. This is perfectly fine; in most cases it is not reasonable or necessary to get accreditation. Sure, you could use an external accredited calibration service that comes in and calibrates your process instruments, but in practice that is overkill in most cases. This assumes that your plant follows a quality system such as the ISO 9001 quality standard. In some regulated industries, or for critical measurements, accredited calibration of the process instruments may be worth the effort. In the internal calibrations inside your plant, you can transfer the traceability from one reference to the next, or to the process instruments, even multiple times over multiple levels. This holds as long as the basic requirements are met, such as, but not limited to, the following:

• calibration results are documented in a calibration certificate
• there are sufficient procedures on how to perform the calibration
• there is a quality system
• the training and competence of the personnel are adequate and documented
• the uncertainty of the calibration is known and documented

 

External calibrations – accredited or not?

To get the traceability into your plant, you send your reference standard(s) out to an external calibration laboratory for calibration. Using an accredited calibration laboratory is highly recommended. It is not compulsory to use an accredited laboratory, but if you use a non-accredited one, you must ensure (audit) yourself that the laboratory is traceable. This means, for example, but is not limited to, the following:

• the traceability of that laboratory is documented
• its quality system and procedures are in working order
• the competence of its personnel is adequate
• the uncertainty of the calibration is properly calculated
• the uncertainty of the calibration is suitable for your use

Finding out all the necessary information requires very good, dedicated competence from the person performing the audit of the laboratory. If it is an accredited laboratory, you know that competent auditors are auditing the laboratory on a regular basis, ensuring that everything is in order. So using an accredited calibration laboratory makes it all so much easier for you.

There is, however, one thing that is always left to you, namely the last bullet in the above list: you must still assure that the uncertainty of the laboratory you use is suitable for your reference and for your needs. I have seen more than once an accredited calibration certificate where the total uncertainty of the calibration is bigger than the accuracy/uncertainty specification of the reference that was calibrated.

So even though you use an accredited calibration laboratory to calibrate your references, its uncertainty may not be suitable for your needs. It is good to remember that even if a calibration laboratory is accredited, it does not mean that its uncertainty is small enough for you. There are many accredited calibration laboratories out there, and they offer different uncertainties. It is possible for a laboratory with a relatively large uncertainty to be accredited, but that uncertainty will be documented in the certificate and in the laboratory’s scope of accreditation, so it is known and easy for you to find out. In any case, when you have your reference standards calibrated, you must ensure that you use a laboratory that can offer an uncertainty that is good enough for your needs. If you use an accredited laboratory, you will know what the uncertainty of the calibration is; with a non-accredited laboratory, that remains a mystery. It is also good to remember that it is not enough that the laboratory has some good reference standards; everything else must also be in order for the calibration to be traceable.

 

Summary

To shortly summarize, let’s take the definition of metrological traceability and look at what it means in practice:

Property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty.

Shortly this means that in order to be traceable every calibration has to:

• Include a calibration certificate
• Include an indication of the reference standard used and its traceability
• Be performed according to documented procedures
• Be part of an unbroken chain of calibrations
• Include the measurement uncertainty
• Be performed by trained and competent personnel
• Be valid, and not expired

 

Interested in printing this text, sharing it with your peers or getting useful references regarding metrological traceability? Download the text as a White Paper from below.

CTA-WP-traceability-720x300px_v1.png

 

Also, please check out the article What is calibration on our web site.

Topics: Calibration, traceability

Calibration uncertainty for dummies – Part 3: Is it Pass or Fail?

Posted by Heikki Laurila on Jan 04, 2017

Calibration uncertainty - Beamex blog post

 

In this post we discuss the following scenario: You have done the calibration, have the results on the certificate and you compare results to the tolerance limits. It’s time to pop the big question: Is it a Passed or Failed calibration? Or is it In or Out of Tolerance?

This is the third and final post in this three-part series on the subject of calibration uncertainty. If you missed the earlier blog posts on this subject, you can find them earlier in our blog (or via the links below), or you can get all the information by downloading the related white paper from the picture link below.

Measurement Uncertainty: Calibration uncertainty for dummies - Part 1

Calibration uncertainty for dummies - Part 2: Uncertainty Components

CTA-calibration-uncertainty

 

Compliance statement – Pass or Fail

Most often when you calibrate an instrument, you have predefined tolerance limits that the instrument has to meet. Sure, you may do some calibrations even without tolerance limits, but in the process industry the tolerance limits are typically set in advance. Tolerance limits are the maximum permitted limits indicating how much the result may differ from the true value. If the errors of the calibration result are within the tolerance limits, it is a Passed calibration, and if some of the errors are outside the tolerance limits, it is a Failed calibration. This sounds pretty simple, like basic mathematics. How hard can it be?

In any case, it is important to remember that it is not enough to take only the error into account; you must also take the total uncertainty of the calibration into account!

Taking the uncertainty into account makes it a whole different ball game. As discussed in the white paper, there are many sources of uncertainty. Let’s go through some examples next.

 

Example

Let’s say the process transmitter you are about to calibrate has a tolerance level of ±0.5% of its measurement range. During the calibration you find that the biggest error is 0.4%, so this sounds like a Pass calibration, right? But what if the calibrator that was used has an uncertainty specification of ±0.2%? Then the 0.4% result could turn out to be either a pass or a fail, and it is impossible to know which. In addition, in any calibration you also have uncertainty caused by many other sources, such as the standard deviation of the results, repeatability, the calibration process, environmental conditions and others. When you estimate the effect of all these uncertainty components, it becomes even more likely that the example calibration was a fail after all, although it looked like a passed one at first.

 

Example – different cases

Let’s look at a graphical illustration of the next example to make this easier to understand. In the picture below, four calibration points have been taken, the diamond shape reflecting the actual calibration result. The lines above and below each result indicate the total uncertainty of the calibration. The tolerance level is marked with a line in the picture.

We can interpret the different cases shown above as follows:

  • Case 1: This is pretty clearly within the tolerance limits, even when the uncertainty is taken into account, so we can state it as a “Pass” result.
  • Case 4: This is also a pretty clear case. The result is outside the tolerance limits even when the uncertainty is taken into account, so this is clearly a “Fail” result.
  • Cases 2 and 3: These cases are a bit more difficult to judge. It seems that in case 2 the result is within the tolerance, while in case 3 it is outside, especially if you ignore the uncertainty. But taking the uncertainty into account, we can’t really say either with confidence.

There are guidelines (for example ILAC G8:1996, Guidelines on Assessment and Reporting of Compliance with Specification, and the EURACHEM/CITAC Guide: Use of uncertainty information in compliance assessment, First Edition, 2007) for how to state the compliance of a calibration. These guides suggest stating a result as passed only when the error plus the uncertainty is less than the tolerance limit, and stating it as failed only when the error minus the uncertainty is still bigger than the tolerance limit. When the result is closer to the tolerance limit than the uncertainty, it is suggested to call the situation “undefined”, i.e. you should not state it as either pass or fail.
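As a minimal sketch of that decision rule, the function below states “Pass” only when the error plus the expanded uncertainty stays within the tolerance, “Fail” only when the error minus the uncertainty is already outside, and “Undefined” otherwise. The numbers reuse the earlier example (±0.5 % tolerance, ±0.2 % calibration uncertainty); they are for illustration only.

```python
# Minimal sketch of the compliance decision described above: Pass only when
# error plus uncertainty is within tolerance, Fail only when error minus
# uncertainty is already outside, otherwise Undefined.

def compliance_statement(error, uncertainty, tolerance):
    """Return 'Pass', 'Fail' or 'Undefined', taking expanded uncertainty into account."""
    if abs(error) + uncertainty <= tolerance:
        return "Pass"
    if abs(error) - uncertainty > tolerance:
        return "Fail"
    return "Undefined"

# Tolerance +/-0.5 % of span, calibration uncertainty +/-0.2 %
for err in (0.2, 0.4, 0.6, 0.8):
    print(f"error {err:.1f} % -> {compliance_statement(err, 0.2, 0.5)}")
# 0.2 % -> Pass, 0.4 % -> Undefined, 0.6 % -> Undefined, 0.8 % -> Fail
```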

Over the years we have seen many people interpreting the uncertainty and the pass/fail decision in many different ways. In practice the uncertainty is often not taken into account in the pass/fail decision, but it is anyway very important to be aware of it when making the decision.

 

Example – different uncertainty

The next situation, also illustrated with a picture, is one where the total uncertainty is not always the same. Cases 1 and 2 have the same measurement result, so without the uncertainty we would consider these measurements to be at the same level. But when the uncertainty is taken into account, we can see that case 1 is really terrible, because the uncertainty is simply too large for this measurement with the given tolerance limits. Looking at cases 3 and 4, it seems that case 3 is better, but with the uncertainty included we can see that it is not good enough for a pass statement, while case 4 is.

Again, I want to point out that we need to know the uncertainty before we can judge a measurement result. Without the uncertainty calculation, cases 1 and 2 above look similar, although with the uncertainty taken into account they are very different.

 

TUR / TAR ratio vs. uncertainty calculation

The TUR (test uncertainty ratio) or TAR (test accuracy ratio) is often mentioned in various publications. In short, it means that if you want to calibrate a 1% instrument with a 4:1 ratio, your test equipment should be four times more accurate, i.e. have an accuracy of 0.25% or better. Some publications suggest that with a large enough TUR/TAR ratio there is no need to worry about uncertainty estimation or calculation; the quite commonly used ratio is 4:1, and some guides and publications also have their own recommendations for the ratio. Most often the ratio is used as in the above example, i.e. just comparing the specifications of the DUT (device under test) against the manufacturer’s specifications of the reference standard. But in that scenario you only consider the reference standard (test equipment, calibrator) specifications and neglect all the other related uncertainties. While this may be “good enough” for some calibrations, this approach does not take all uncertainty sources into account. So it is highly recommended to make an uncertainty evaluation/calculation of the whole calibration process.
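For what it is worth, here is a minimal sketch of the TUR calculation described above. Note that it only compares two specifications and ignores every other uncertainty source, which is exactly its limitation.

```python
# Minimal sketch of the TUR/TAR ratio: DUT tolerance divided by the reference
# standard specification. It compares only two specifications and ignores all
# other uncertainty sources, which is exactly its limitation.

def test_uncertainty_ratio(dut_tolerance_pct, reference_spec_pct):
    """Ratio of the device-under-test tolerance to the reference specification."""
    return dut_tolerance_pct / reference_spec_pct

ratio = test_uncertainty_ratio(1.0, 0.25)   # 1 % instrument, 0.25 % reference
print(f"TUR = {ratio:.0f}:1")               # 4:1
print("Meets the 4:1 rule of thumb" if ratio >= 4 else "Below the 4:1 rule of thumb")
```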

We also get asked quite regularly: “How many times more accurate should the calibrator be compared to the device to be calibrated?” While some rules of thumb could be given, there isn’t really a single correct answer to that question. Instead, you should be aware of the total uncertainty of your calibrations, and of course it should reflect your needs!

 

Summary - key take-outs from the white paper

To learn more about the subject, please take a look at the related white paper.  Here is a short list of the key take-outs from the white paper:

  • Be sure to distinguish “error” and “uncertainty”
  • Experiment by making multiple repeats of measurements to gain knowledge of the typical deviation
  • Use appropriate reference standards (calibrators) and make sure they have a valid traceability to national standards and that the uncertainty of the calibration is known and suitable for your applications
  • Consider whether the environmental conditions have a significant effect on the uncertainty of your measurements
  • Be aware of the readability and display resolution of any indicating devices
  • Study the specific important factors of the quantities you are calibrating
  • Familiarize yourself with the “root sum of the squares” method to add independent uncertainties together
  • Be aware of the coverage factor / confidence level / expanded uncertainty of the uncertainty components
  • Instead of, or in addition to, the TUR/TAR ratio, strive to be more aware of all the related uncertainties
  • Pay attention to the total uncertainty of the calibration process before making pass/fail decisions

CTA-calibration-uncertainty

 

Best regards,
Heikki

 

 

 

Topics: calibration uncertainty

Calibration Out of Tolerance: What does it mean and what to do next? - Part 1 of 2

Posted by Heikki Laurila on Dec 16, 2016

Calibration out of tolerance - Beamex blog post

Calibration out of tolerance – ready to hit the Panic button?

In this post we are talking about calibration being “Out of Tolerance” (OoT).
When you calibrate something and the result is a “Fail”, then it is an out of tolerance situation.
But it’s not all that simple; there is more to it and more things to be considered.

What does “Out of Tolerance” (OoT) mean?

Let’s start by discussing what out of tolerance really means. In summary, it means that during a calibration some calibration point(s) failed to meet the required tolerance level. This causes the result of the calibration to be out of tolerance or, as it is often also called, a failed calibration. This definition sounds pretty simple, but when we start to look deeper, it turns out to be more complicated. So let’s continue…

 

What is the tolerance level used?

For a calibration, we normally specify a tolerance level (maximum permitted error) that a given test result is compared against. If the instrument fails to meet the required tolerance level, it becomes an out of tolerance case and specific follow-up procedures may be initiated.

Whenever a tolerance limit is not met, the next logical question should be: what tolerance level was used? Or maybe the question should be: what tolerance level should be used?

For a process instrument, it is very typical that the tolerance level used is the manufacturer’s accuracy specification of the installed process instrument. This means that if you buy 100 similar transmitters and install them in 100 different locations in your plant, all of these locations will have the same tolerance level. However, in most cases many of these installation locations have different criticality, and therefore they should also have different tolerance levels. A critical process location needs a tighter tolerance level, and often it is also calibrated more frequently. Likewise, in a non-critical location it is a waste of resources, time and money to calibrate as often and to use the same tolerance level. The personnel who have the best knowledge of the actual requirements of the process in question should be involved when deciding the tolerance levels for given locations.

If it is a matter of your calibrator or your reference standards, then it is more common to use the manufacturer’s specification as the tolerance level (at least to begin with). Comparison against the manufacturer’s specifications during recalibration or certification will indicate how reliable the equipment is for performing field calibration. Calibrator tolerances should also consider local needs and be adjusted accordingly over time.

If you want to learn more about calibration intervals, please check our earlier blog post and related article on How often instruments should be calibrated.

New Call-to-action

 

How was it found to be out of tolerance?

Assuming the tolerance levels are set correctly, when someone says that an instrument failed to meet its tolerance level during calibration, the next logical question is: are you sure? And to continue, how sure are you?

This leads to the question: which calibrator was used to make the calibration, and what is the total uncertainty of that calibration? If the calibration was outsourced to an external calibration laboratory, what are the calibration uncertainties of that laboratory?

If you have read my earlier blog post and/or white paper on calibration uncertainty, I think you can see where I am going with this one… for every measurement and calibration, the total uncertainty of the calibration is critical.

Whenever you make the compliance statement that something passes or fails a calibration, or that it is inside or outside of the tolerance, you must consider the uncertainty of the calibration in order to be able to make a proper decision.

I will not go any further into calibration uncertainty in this article. For more information, please take a look at the earlier post on this subject: “Calibration uncertainty for non-mathematicians”.

 

How critical is it?

What is the next step when we have found an instrument to be out of tolerance, our tolerance levels are valid, and the calibration was done with an appropriate uncertainty? It means we must admit that this really is an out of tolerance case and that action needs to be taken. Before hitting the “Panic” button, it is important to see how critical the case is.

For a process instrument, it is good practice to set the calibration tolerance a bit tighter than the actual process requirement. By doing this, even if the instrument slightly fails the tolerance in calibration, it does not have a dramatic effect on the actual process. Generally, if the results are only slightly out of tolerance, this may not be critical for the measurements that have been made with the instrument. An analysis needs to be done in order to see whether a failure is critical or not.

During a criticality analysis, you should analyze the impact of the out of tolerance case. If it is a process instrument, what effect does this amount of error in the process measurement have on the actual process and on process control? Also, what effect does it have on the actual end product being produced?

In the case of a calibration standard, what effect does the error in the standard have on all the measurements and calibrations that have been done with it?

For an extremely critical process or safety measurement, redundancy may be added by installing two or more simultaneous measurement instruments. In this case, a failure in one instrument’s capability to measure correctly does not cause a critical failure.

For some measurements, the process instrument can be checked before each measurement is made with it. For example, a weighing scale can be checked with reference weight(s) before making measurements, or on a regular daily basis. A calibrator can also be cross-checked periodically against another reference instrument of a similar level (between the regular recalibrations). The calibration interval can also be adjusted according to the criticality of the measurements made with a given instrument. Depending on the application, if the cost of an out of tolerance calibration is very high, that will also affect the calibration interval and strategy.

 

Remaining topics

In the next post I will cover the remaining topics discussed in the white paper:

  • Where in the traceability chain did the OoT occur?
  • When did the OoT happen?
  • Impact analysis - what are the consequences?
  • Quality assurance considerations

New Call-to-action

 

Also, please check out the article What is calibration on our web site.

Topics: Calibration tolerance

Calibration uncertainty for dummies - Part 2: Uncertainty Components

Posted by Heikki Laurila on Nov 25, 2016

Stanard-deviation.png

Figure 1. Standard deviation

This is the second blog post (out of three) continuing on the subject of calibration uncertainty. In the first blog post we discussed the basics: how important it is to be aware of the uncertainty, some short terminology, and the classic “piece of string” example. If you missed the Part 1 blog post, you can find it via the link below. If you want to get all the parts at once, feel free to download the related full white paper from the link below.

 

Measurement Uncertainty: Calibration uncertainty for dummies - Part 1

 

CTA-calibration-uncertainty

 

Standard deviation – one important uncertainty component

There are several uncertainty components that make up the total uncertainty. The standard deviation of the measurement is one important component, so let’s discuss that next.

The first simple, yet good, practice is this: where you would normally make a measurement/calibration once, try instead to repeat the same measurement several times. You will most likely discover small differences in the readings between the repeats. But which measurement is the correct one? Without diving too deep into statistics, we can say that it is not enough to measure only once. If you repeat the same measurement several times, you can calculate the average and the standard deviation of the measurement, so you will learn how much the results typically differ between repeats. It is suggested to repeat a measurement multiple times, even up to ten times, for it to be statistically reliable enough to calculate the standard deviation. These kinds of uncertainty components, which you get by calculating the standard deviation of repeated measurements, are called Type A uncertainty. You may say: What?! Always repeating the same measurement ten times is just not possible in practice!

Luckily you don’t always need to make ten repeats, but you should still experiment with your measurement process by sometimes repeating the same measurement several times. This tells you the typical deviation of the whole measurement process, and you can use that knowledge in the future as an uncertainty component related to that measurement, even if you make the measurement only once during a normal calibration.

Imagine that you perform a temperature measurement/calibration multiple times and learn that there is a ±0.2 °C difference between the repeats. The next time you make the same measurement, even if you make it just once, you would be aware of this possible ±0.2 °C difference, so you could take it into account and not let the measurement get too close to the acceptance limit. If you keep calibrating similar kinds of instruments over and over again, it is often enough to make the measurement just once and use the typical experimental standard deviation. In summary, you should always be aware of the standard deviation of your calibration process; it is one important part of the total uncertainty.
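If you want to see what this looks like in practice, below is a minimal sketch that calculates the mean and the experimental standard deviation of ten repeated readings. The readings are made up for illustration only.

```python
# Minimal sketch: mean and experimental standard deviation of repeated
# readings, used as a Type A uncertainty component. Readings are made up.

import statistics

readings_degc = [20.12, 20.15, 20.10, 20.14, 20.11, 20.13, 20.16, 20.12, 20.10, 20.14]

mean = statistics.mean(readings_degc)
std_dev = statistics.stdev(readings_degc)            # experimental standard deviation
std_unc_mean = std_dev / len(readings_degc) ** 0.5   # standard uncertainty of the mean

print(f"Mean: {mean:.3f} degC")
print(f"Experimental standard deviation: {std_dev:.3f} degC")
print(f"Standard uncertainty of the mean: {std_unc_mean:.3f} degC")
```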

 

Your reference standard (calibrator) and its traceability

Often, one of the biggest sources of uncertainty is the reference standard (or calibrator) that you use in your measurements/calibrations. Naturally, to start with, you should select a suitable reference standard for each measurement. It is also important to remember that it is not enough to take the manufacturer’s accuracy specification for the reference standard and keep using it as the uncertainty of the reference standard for years. Instead, you must have your reference standards calibrated regularly in a calibration laboratory that has sufficient capability (small enough uncertainty) to calibrate the standard and make it traceable. Pay attention to the total uncertainty of the calibration that the laboratory documents for your reference standard. You should also follow the stability of your reference standards between their regular calibrations. After some time, you will learn the true uncertainty of your reference standard, and you can use that information in your calibrations.

 

Other sources of uncertainty

In the white paper you can find a more detailed discussion of the other sources of uncertainty. To shortly summarize, these include:

  • Device under test
  • Reference standard (calibrator)
  • Method/process for making the measurements/calibrations
  • Environmental conditions
  • The person(s) making the measurements
  • Additional uncertainty components depending on the quantity being measured/calibrated

All of the above listed uncertainty components are referred to as Type B uncertainty.
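As an illustration of how such components are typically combined, here is a minimal sketch using the root-sum-of-squares method and a coverage factor of k = 2 (roughly 95 % confidence). The component values are made up, and the real breakdown depends entirely on your own calibration process.

```python
# Minimal sketch: combining independent standard uncertainties with the
# root-sum-of-squares method and expanding with a coverage factor k = 2.
# The component values are made up for illustration.

import math

components = {
    "repeatability (Type A)": 0.05,   # all values as standard uncertainties, same unit
    "reference standard": 0.08,
    "environmental conditions": 0.03,
    "display resolution of the DUT": 0.02,
}

combined = math.sqrt(sum(u ** 2 for u in components.values()))
expanded = 2 * combined  # coverage factor k = 2

print(f"Combined standard uncertainty: {combined:.3f}")
print(f"Expanded uncertainty (k=2): {expanded:.3f}")
```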

 

Summary - key take-outs from the white paper

To learn more about the subject, please take a look at the related white paper. Here is a short list of the key take-outs from the white paper:

  • Be sure to distinguish “error” and “uncertainty”
  • Experiment by making multiple repeats of measurements to gain knowledge of the typical deviation
  • Use appropriate reference standards (calibrators) and make sure they have a valid traceability to national standards and that the uncertainty of the calibration is known and suitable for your applications
  • Consider whether the environmental conditions have a significant effect on the uncertainty of your measurements
  • Be aware of the readability and display resolution of any indicating devices
  • Study the specific important factors of the quantities you are calibrating
  • Familiarize yourself with the “root sum of the squares” method to add independent uncertainties together
  • Be aware of the coverage factor / confidence level / expanded uncertainty of the uncertainty components
  • Instead of, or in addition to, the TUR/TAR ratio, strive to be more aware of all the related uncertainties
  • Pay attention to the total uncertainty of the calibration process before making pass/fail decisions

CTA-calibration-uncertainty

 

Best regards,
Heikki

Topics: calibration uncertainty

Pressure calibration basics – Pressure types

Posted by Heikki Laurila on Nov 18, 2016

Pressure types - Beamex blog post

Figure 1. Pressure types

On a regular basis, we receive customer questions about pressure types, and we realize that this is a topic that causes confusion. In everyday situations we don’t usually talk much about different pressure types, but there are different types (sometimes also referred to as “modes”) available. This post gives a short explanation of the different pressure types.

The two principal pressure types are gauge (or gage) and absolute pressure.

Vacuum is sometimes considered its own pressure type, although it is simply a negative gauge pressure.

Barometric pressure is also used in discussions, being the absolute pressure of the atmosphere.

Differential pressure is also considered a pressure type, being the difference between two separate pressures. In the end, all pressure types are differential; they just have a different point of comparison. Let’s have a quick look at these different types.

Gauge pressure

Gauge (gage) pressure is the most commonly used pressure type. With gauge pressure we always compare the measured pressure against the current atmospheric pressure, so it is the difference between the measured pressure and the current atmospheric pressure, meaning we are that much above (or below) the current atmospheric pressure. If a gauge pressure measurement device is open to the atmosphere, it will always read zero, even though the atmospheric pressure is different on any given day. Gauge pressure can be indicated with the word “gauge” after the pressure unit (e.g. 150 kPa gauge). The abbreviation “g” is also used, although it is not fully legitimate and may cause confusion with the pressure unit.

Since gauge is the default pressure type, there is often no indication of the pressure type at all when it is gauge.

One practical example of gauge pressure is a car’s tire pressure: although we don’t talk about “gauge” pressure, we measure and fill the tire to a certain gauge pressure, i.e. a certain amount above atmospheric pressure, regardless of whether the atmospheric pressure is low (rainy) or high (sunny) on that day.

 

Absolute pressure

Absolute pressure is the pressure compared to an absolute vacuum, so it is the difference between the measured pressure and absolute vacuum. An absolute vacuum is a state where the vacuum is so deep that there are no air molecules left, so there is no pressure. In practice a perfect absolute vacuum is impossible to achieve, but we can get pretty close; even in outer space the pressure is only very close to an absolute vacuum. An absolute pressure can never be negative, and in practice not even zero. If somebody tells you about a negative absolute pressure, you can ask them to check their facts… Absolute pressure should be indicated with the word “absolute” after the pressure reading (e.g. 150 psi absolute). Sometimes you also see the abbreviations “a” or “abs” being used, but the whole word “absolute” should be used if there is a risk that “a” or “abs” could cause confusion in combination with the pressure unit. It is important to highlight that it is absolute pressure in question, otherwise it may be confused with gauge pressure.

 

Vacuum pressure

Vacuum pressure is a (gauge) pressure that is below the current atmospheric pressure. Being a gauge pressure, it is compared against the current atmospheric pressure and is often indicated as a negative gauge pressure. The term vacuum is sometimes also used as a generic term for a pressure that is below atmospheric pressure, even when it is measured as an absolute pressure. In that case it is of course not a negative number; it is just an absolute pressure that is smaller than the current atmospheric absolute pressure. For example, if you pull a 40 kPa vacuum, that can be expressed as -40 kPa gauge, but it can also be indicated as an absolute pressure, for example 60 kPa absolute, if the barometric pressure is 100 kPa absolute at that moment.

 

Differential pressure

As the name already hints, differential pressure is the difference between two separate pressures. The value can be positive, negative or zero, depending on which of the two pressures is higher.

A common industrial application is the measurement of flow by measuring the differential pressure over a constriction in the piping (usually zero-based), or tank level measurement by measuring the differential pressure between the top and the bottom of the tank. Another common measurement is the very small differential pressure between a clean room and the surrounding areas.

 

Barometric pressure

The barometric pressure is the absolute pressure of the current atmosphere at a specific location. The nominal barometric pressure has been agreed to be 101 325 Pa absolute (101.325 kPa absolute, 1013.25 mbar absolute or 14.696 psi absolute). The barometric pressure depends on the weather conditions, your location and your elevation; it is highest at sea level and lowest on high mountains.

A weather forecast is one practical example of the use of absolute pressure to indicate a high or low barometric pressure, roughly corresponding to sunny or rainy weather. If a weather forecast used gauge pressure, the air pressure would always read zero, so it would be a pretty useless forecast (well, they often are useless anyhow)… ;-)

 

The basic conversion rule between gauge and absolute pressure is the following:
Absolute pressure = atmospheric pressure + gauge pressure
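As a minimal sketch of this rule in code (remember that the atmospheric term must be the measured barometric pressure at that moment, not an assumed nominal value):

```python
# Minimal sketch of the rule above. The atmospheric term must be the measured
# barometric pressure at that moment, not an assumed nominal value.

def gauge_to_absolute(gauge_kpa, atmospheric_kpa):
    """Absolute pressure = atmospheric pressure + gauge pressure (all in kPa)."""
    return atmospheric_kpa + gauge_kpa

print(gauge_to_absolute(150.0, 101.3))  # 251.3 kPa absolute
print(gauge_to_absolute(-40.0, 100.0))  # 60.0 kPa absolute, i.e. a 40 kPa vacuum
```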

I hope this short post provided some useful information on the different pressure types.

Best regards,
Heikki

 

CTA Beamex Calibration Tips

Topics: Pressure calibration

Proof Testing: Calibration By A Different Name?

Posted by Heikki Laurila on Nov 08, 2016

The oil and gas industry in the North Sea has been hit by a perfect storm: the rapid fall in the price of oil to levels below the average extraction cost has resulted in the delay or cancellation of a huge number of investment projects.

Additionally, a large proportion of the assets operating in and around the North Sea are having their operational lives extended beyond what was originally designed. Of course this is not limited to offshore rigs: the gas receiving plants that were built in the mid 1980s were designed using the ‘CRINE’ contract terms, meaning essentially that every part of the installation was selected under ‘lowest cost wins’ rules. The end result was the opposite of standardisation: a typical gas plant may have approximately 3,000 instruments, but from perhaps ten different manufacturers! The focus is clearly on Operational Excellence, and many operators have already managed to reduce their operating costs dramatically. How is this situation affecting the world of calibration?

 

Operational Efficiency

We all know that the calibration of instrumentation used in regulatory control is mandatory; the introduction of ISO 9001 made this very clear for any company using the standard as the basis of its quality processes. As we have seen in other recent blogs, whilst ISO 9001 has been revised, the task facing an instrumentation technician today is changing. In the past it was acceptable to calibrate every instrument on the plant annually, when there were sufficient staff to calibrate 10 to 20 instruments every day. Now OE task teams are rightly asking questions such as: can we make the instrumentation tasks more efficient, can we change the calibration intervals based on the risk of failure or inaccurate readings, can we do more with less staff?

Let’s just look at the variety of tasks being performed by instrumentation on an asset. There are the day-to-day operations of the plant: the pressures, temperatures, flows and levels associated with wellheads, separators, compressors, injection systems etc. Then there are the fiscal measurements, those skids that result in tax being paid, typically specialist flow rigs that measure the quantity and quality of oil and gas being shipped off a platform or received at an onshore plant. Traditionally the tasks of calibrating fiscal and process instruments have been performed by two completely separate teams, often separate contractors, and to date this appears to be continuing.

 

Proof Testing - New Tasks For Busy Instrument Techs?

Just recently I have been asked by a number of platform operations engineers about the proof testing of Safety Instrumented Systems (SIS). Safety systems are increasingly becoming totally integrated with their associated process control systems: although there is typically separate hardware and instrumentation, the user interface is similar or even the same. The SIS is designed to ensure a process goes to a safe condition in the event of a critical event that could lead to an unsafe condition or even a catastrophic event, such as an explosion or an uncontrolled pollution event. IEC 61511 defines the design, installation, operation (and decommissioning) of safety systems, but in practical terms this means that instrumentation engineers are legally required to test the operation of the safety system loops and record the information, making it available for inspection by a government agency such as the Health & Safety Executive. In many respects it’s very similar to the regulations faced by instrumentation engineers in the pharmaceutical industry, where the emphasis is on electronic signatures, the competence of staff and the integrity of data.

In terms of instrumentation, the proof testing tasks are very similar to a typical calibration, but the key difference is in the frequency of the testing. Earlier in this blog I referred to the risk-based approach to calibrating instruments, and yes, having an instrument calibrated on its due date is desirable, but not exactly critical. (Heresy, I hear you cry!) With an SIS proof test it’s the exact opposite: the test intervals will very likely vary from loop to loop. Each loop is designed to meet a particular Safety Integrity Level (SIL) determined by the possible effect of a critical failure. Each element of the loop, each instrument, valve and switch, will have been designed and manufactured to meet the required SIL, but here’s the rub: it all depends on the proof testing interval of the instrument as defined in the safety manual. So an instrument from vendor A may meet, for example, SIL 2 but require testing annually, while another apparently similar instrument from vendor B (hidden in the small print of the safety manual) may require testing every three months in order to meet its Safety Integrity Level. Loops are also often designed with similar instruments from different vendors to remove systematic failure modes.

 

And Whilst You’re At It…

Whilst IEC 61511 applies to safety instrumented systems, there is also the task of inspecting and testing all the C&I infrastructure in hazardous areas. The regular inspection of Ex-rated switches, junction boxes, safety showers etc. is covered by IEC 60079-17. Again the emphasis is on the legal requirement, not just to perform the inspections but to be able to prove the inspections have been carried out. Often the inspections are visual only, recording the condition of the item and raising a corrective work order if necessary.

It doesn’t take a rocket scientist to see the similarities in the requirements of all these various calibrations, proof tests and inspections. So the question is: can we take the multiple standard operating procedure documents, put them in an electronic form that we can carry around the platform, efficiently perform the tasks, record the results and transfer them back to SAP or a similar system?

 

An Excel spreadsheet is no longer sufficient….

Whilst the majority of oil and gas customers are currently using separate teams or contractors to perform these tasks, you can see why several companies are ‘rationalising’ the workforce – having fewer technicians perform more, and more varied, tasks. So it makes a great deal of sense if they can use the same tools, hardware and software, not only to perform calibrations, maintenance checks and proof tests, but also, probably more importantly from an operations point of view, to have the resulting records readily available and traceable.

Topics: Calibration, Proof testing

Calibration uncertainty for dummies

Posted by Heikki Laurila on Nov 02, 2016

Measurement uncertainty: Calibration uncertainty for dummies - Beamex blog post

This article was updated on December 2, 2021. The three separate blog articles on calibration uncertainty were combined into this one article. Also, some other updates were made.

 

If you don’t know the measurement uncertainty, don’t make the measurement at all!

This time we are talking about a very fundamental consideration in any measurement or calibration – uncertainty!

I have written a white paper on the basics of uncertainty in measurement and calibration. It is designed for people who are responsible for planning and performing practical measurements and calibrations in industrial applications but who are not mathematicians or metrology experts.

You can download the free white paper as a pdf file by clicking the picture below:

 
Being aware of the uncertainty related to a measurement is a fundamental concept: you should not really perform any measurement unless you know its related uncertainty.


It seems that the generic awareness of and interest in uncertainty is growing, which is great.

The uncertainty of a measurement can come from various sources, including the reference measurement device used to perform the measurement, the environmental conditions, the person performing the measurement, the measurement procedure, and others.

There are several calibration uncertainty guides, standards, and resources available out there, but these are mostly full of mathematical formulas. In this paper, I have tried to keep the mathematical formulas to a minimum.

Uncertainty estimation and calculation can be pretty complicated, but I have tried my best to make some sense out of it.


What is measurement uncertainty? 

What is the uncertainty of measurement? Put simply, it is the “doubt” in the measurement, so it tells us how good the measurement is. Every measurement we make has some “doubt” and we should know how much in order to be able to decide if the measurement is good enough to be used.

It is good to remember that error is not the same as uncertainty. In calibration, when we compare our device to be calibrated against the reference standard, the error is the difference between these two readings. The error is meaningless unless we know the uncertainty of the measurement.


Classic “piece of string” uncertainty example


Let’s take a simple example to illustrate measurement uncertainty in practice: we give the same piece of string to three different people (one at a time) and ask them to measure its length. No additional instructions are given; they can use their own tools and methods to measure it.

More than likely, you will get three somewhat different answers. For example:

  • The first person says the string is about 60 cm long. He used a 10 cm plastic ruler and measured the string once and came to this conclusion.
  • The second person says it is 70 cm long. He used a three-meter measuring tape and checked the results a couple of times to make sure he was right.
  • The third person says it is 67.5 cm long with an uncertainty of ±0.5 cm. He used an accurate measuring tape and measured the string several times to get an average and standard deviation. Then, he tested how much the string stretches when it was pulled and noticed that this had a small effect on the result.

Even this simple example shows that there are many things that affect the result of measurement: the measurement tools that were used, the method/process that was used, and the way that the person did the job.


So, the question you should be asking yourself is:

At your plant, when calibration work is performed, which of these three above examples will it be?

What kind of “rulers” are being used at your site and what are the measuring methods/processes?

If you just measure something without knowing the related uncertainty, the result is not worth much.

 

Uncertainty components

Standard deviation – an important component of uncertainty

 

Standard-deviation-1

 

Several components make up total measurement uncertainty, and one of the most important is standard deviation, so let’s discuss that next.

A simple, yet worthwhile practice is to repeat a measurement/calibration several times instead of just performing it once. You will most likely discover small differences in the measurements between repetitions. But which measurement is correct?

Without diving too deep into statistics, we can say that it is not enough to measure once. If you repeat the same measurement several times, you can find the average and the standard deviation of the measurement and learn how much the result can differ between repetitions. This means that you can find out the normal difference between measurements.

You should perform a measurement multiple times, even up to ten times, for it to be statistically reliable enough to calculate the standard deviation.

These kinds of uncertainty components, which you get by calculating the standard deviation, are called A-type uncertainty components.
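
As a small illustration of this (the repeated readings below are invented for the example), the experimental standard deviation and the resulting Type A standard uncertainty of the mean could be calculated along these lines:

```python
import statistics

# Hypothetical repeated readings of the same temperature point (°C)
readings = [99.98, 100.03, 100.01, 99.97, 100.02, 100.00, 99.99, 100.04]

mean = statistics.mean(readings)
std_dev = statistics.stdev(readings)          # experimental standard deviation (n-1)
u_type_a = std_dev / len(readings) ** 0.5     # Type A standard uncertainty of the mean

print(f"Mean: {mean:.3f} °C")
print(f"Experimental standard deviation: {std_dev:.3f} °C")
print(f"Type A standard uncertainty of the mean: {u_type_a:.3f} °C")
```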

But repeating the same measurement ten times is just not possible in practice, you may say.

Luckily you don’t always need to perform ten repetitions, but you should still experiment with your measurement process by sometimes repeating the same measurement several times. This will tell you what the typical deviation of your whole measurement process is, and you can use this knowledge in the future as an uncertainty component related to that measurement, even if you only perform the measurement once during your normal calibration.

Imagine that when performing a temperature measurement/calibration multiple times, you learn that there is a ±0.2 °C difference between the repetitions. Next time you perform the same measurement – even if you only perform it once – you would be aware of this possible ±0.2 °C difference, so you could take it into account and not let the measurement get too close to the acceptance limit.

If you calibrate similar kinds of instruments repeatedly, it is often enough to perform the measurement just once and use the typical experimental standard deviation.

In summary, you should always be aware of the standard deviation of your calibration process – it is an important part of the total uncertainty.

 

Your reference standard (calibrator) and its traceability

One of the biggest sources of uncertainty often comes from the reference standard (or calibrator) that you are using in your measurements / calibrations.

Naturally, to start with you should select a suitable reference standard for each measurement.

It is also important to remember that it is not enough to use the manufacturer’s accuracy specification for the reference standard and keep using that as the uncertainty of the reference standards for years.

Instead, you must have your reference standards calibrated regularly in a calibration laboratory that has sufficient capabilities (a small enough uncertainty) to calibrate the standard and to make it traceable. Pay attention to the total uncertainty of the calibration that the laboratory documents for your reference standard.

Also, you should follow the stability of your reference standard between calibrations. After some time, you will learn the true uncertainty of your reference standard and you can use that information in your calibrations.

 

Other sources of uncertainty

In the white paper you can find more detailed discussion on the other sources of uncertainty.

These include:

  • Device under test (DUT)
  • Reference standard (calibrator)
  • Method/process for performing the measurements/calibrations
  • Environmental conditions
  • The person(s) performing the measurements
  • Additional uncertainty components depending on the quantity being measured/calibrated

These uncertainty components are referred to as type B uncertainty components.

 

Is it a passed or a failed calibration?

In this section we discuss the following scenario: You have performed the calibration, you have the results on a certificate, and you have compared the results to the tolerance limits. It’s time to pop the big questions: Is it a passed or failed calibration? Is it in or out of tolerance?


Compliance statement – pass or fail

Typically, when you calibrate an instrument you have predefined tolerance limits that the instrument has to meet. Sure, you may perform some calibrations without tolerance limits, but in process industries the tolerance levels are typically set in advance.

Tolerance levels indicate how much the result can differ from the true value. If the errors of the calibration result are within the tolerance limits, it is a passed calibration, and if some of the errors are outside of tolerance limits, it is a failed calibration. This sounds simple, like basic mathematics. How hard can it be?

It is important to remember that it is not enough to just take the error into account; you must also take the total uncertainty of the calibration into account!

Taking the uncertainty into account turns this into a whole different ball game. As discussed in the white paper, there are many sources of uncertainty. Let’s go through some examples next.

Example #1 - a reference with too large an uncertainty

Let’s say the process transmitter you are about to calibrate has a tolerance level of ±0.5% of its measurement range.

During the calibration you find out that the biggest error is 0.4%, so this sounds like a pass calibration, right?

But what if the calibrator that was used has an uncertainty specification of ±0.2%? Then the true error could be anywhere between 0.2% and 0.6%, so the result could be either a pass or a fail – it is impossible to know which.

Plus, in any calibration you also have uncertainty caused by many other sources, like the standard deviation of the result, repeatability, the calibration process, environmental conditions, etc.

When you estimate the effect of all these uncertainty components, it is even more likely that the calibration was a fail after all, even though it looked like a pass at first.

 

Example #2 - different cases

Let’s look at a graphical illustration of the next example to make this easier to understand. In the picture below there are four calibration points taken, the diamond shape indicating the actual calibration result. The line above and below the result indicates the total uncertainty for each calibration point. The black horizontal line marks the tolerance limit.

 

Calibration-uncertainty-upper-tolerance-limit-1

 

We can interpret the different cases shown above as follows:

  • Case 1: This is clearly within the tolerance limits, even when uncertainty is taken into account. So we can state this as a pass.
  • Case 4: This is also a clear case. The result is outside of the tolerance limits, even before uncertainty is taken into account, so this is clearly a fail.
  • Cases 2 and 3: These are a bit more difficult to judge. It seems that in case 2 the result is within the tolerance limit while in case 3 it is outside, especially if you don’t care about the uncertainty. But taking the uncertainty into account, we can’t really say this with confidence.

There are guidelines (for example, ILAC G8:1996 – Guidelines on Assessment and Reporting of Compliance with Specification; EURACHEM / CITAC Guide: Use of uncertainty information in compliance assessment, First Edition 2007) on how to state the compliance of a calibration.

These guides suggest that a result should only be considered a pass when the error plus the uncertainty is less than the tolerance limit.

They also suggest that a result should only be considered a fail when the error is greater than the tolerance limit even after the uncertainty has been subtracted, i.e. when the whole uncertainty band lies outside the tolerance limit.

When the error is closer to the tolerance limit than the uncertainty, they suggest it should be called an “undefined” situation, i.e. you should not state the result as a pass or a fail.
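
As a rough sketch of that decision logic (the function name and the example numbers are mine, not taken from the guides), the three-way pass/fail/undefined statement could look something like this:

```python
def compliance_statement(error, expanded_uncertainty, tolerance):
    """Classify a calibration point against a symmetric ±tolerance limit.

    error: measured error of the device under test (same unit as tolerance)
    expanded_uncertainty: total expanded uncertainty of the calibration
    tolerance: acceptance limit (e.g. % of span)
    """
    if abs(error) + expanded_uncertainty <= tolerance:
        return "pass"          # error plus uncertainty is inside the limit
    if abs(error) - expanded_uncertainty > tolerance:
        return "fail"          # even with uncertainty subtracted, the error exceeds the limit
    return "undefined"         # too close to the limit to state pass or fail

# Example #1 above: tolerance ±0.5 %, error 0.4 %, expanded uncertainty ±0.2 %
print(compliance_statement(0.4, 0.2, 0.5))   # -> "undefined"
```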

We have seen many people interpreting the uncertainty and pass/fail decision in many different ways over the years. In practice, uncertainty is often not taken into account in the pass/fail decision-making process, but it is nonetheless very important to be aware of the uncertainty when making a decision.

 

Example #3 – different uncertainties

Another situation to illustrate is when the total uncertainty is not always the same.

Cases 1 and 2 have about the same measurement result, so without uncertainty we would consider them to be measurements of the same quality.

But when the uncertainty is taken into account, we can see that case 1 is really terrible because the uncertainty is simply too large to be used for this measurement with the given tolerance limits.

Looking at cases 3 and 4 it seems that case 3 is better, but with uncertainty we can see that it is not good enough for a pass statement, while case 4 is.

 

Calibration-uncertainty-upper-lower-tolerance-limit

 

Again, I want to point out that we need to know the uncertainty before we can judge a measurement result. Without the uncertainty calculation, cases 1 and 2 look similar; with uncertainty taken into account they are very different.

 

TUR / TAR ratio vs. uncertainty calculation

TUR (test uncertainty ratio) and TAR (test accuracy ratio) are often mentioned in various publications. Some publications even suggest that with a large enough TUR/TAR ratio there is no need to worry about uncertainty estimation / calculation.

A commonly used TAR ratio is 4:1. In short this means that if you want to calibrate a 1% instrument, your test equipment should be four times more accurate, i.e., it should have an accuracy of 0.25% or better.

Some guides/publications also have recommendations for the ratio. Most often the ratio is used as in the above example, i.e., to compare the specifications of the device under test (DUT) and the manufacturer’s specifications of the reference standard.

But in that scenario you only consider the reference standard (test equipment, calibrator) specifications and you neglect all other related uncertainties.

While this may be “good enough” for some calibrations, this system does not take all uncertainty sources into account.

For an accurate result it is highly recommended that you perform the uncertainty evaluation/calculation, taking into account the whole calibration process.

A question we are asked regularly is “How many times more accurate should the calibrator be compared to the device to be calibrated?”. While some suggestions could be given, there isn’t really any correct answer to this question. Instead, you should be aware of the total uncertainty of your calibrations. And of course, it should reflect your needs! 

 

Summary & the key takeaways from the white paper

To learn more about this subject, please download and read the white paper linked in this post.

Here is a short list of the key takeaways:

  • Be sure to distinguish between “error” and “uncertainty”.
  • Experiment by performing multiple repetitions of measurements to gain knowledge of the typical deviation.
  • Use appropriate reference standards (calibrators) and make sure they have a valid traceability to national standards and that the uncertainty of the calibration is known and suitable for your applications.
  • Consider if the environmental conditions have a significant effect on the uncertainty of your measurements.
  • Be aware of the readability and display resolution of any indicating devices.
  • Study the important factors of the specific quantities you are calibrating.
  • Familiarize yourself with the “root sum of the squares” method to add independent uncertainties together (see the short sketch after this list).
  • Be aware of the coverage factor/confidence level/expanded uncertainty of the uncertainty components.
  • Instead of or in addition to the TUR/TAR ratio, strive to be more aware of all the related uncertainties.
  • Pay attention to the total uncertainty of the calibration process before making pass/fail decisions.
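
As a minimal sketch of that root-sum-of-squares idea (the component values below are invented for the example), independent standard uncertainties are combined and then multiplied by a coverage factor, commonly k = 2 for roughly 95 % confidence:

```python
import math

# Hypothetical standard uncertainty components (all in % of span)
components = {
    "reference standard": 0.05,
    "repeatability (Type A)": 0.03,
    "environmental conditions": 0.02,
    "DUT resolution": 0.01,
}

combined = math.sqrt(sum(u ** 2 for u in components.values()))  # root sum of squares
expanded = 2 * combined                                          # coverage factor k = 2 (~95 %)

print(f"Combined standard uncertainty: {combined:.3f} % of span")
print(f"Expanded uncertainty (k = 2):  {expanded:.3f} % of span")
```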

 

Download the free white paper article in pdf format by clicking the below image: 

CTA-calibration-uncertainty

 

You may also like this one related to uncertainty:

 

Also, please check out the article What is calibration on our website.

 

 

Topics: calibration uncertainty

Calibration video: How to calibrate a temperature measurement loop

Posted by Heikki Laurila on Oct 28, 2016

In the previous blog post Ned discussed the basics of loop calibration. 
Now, let’s watch a video on the same subject.

In this video Roy shows an example of how to calibrate a temperature measurement loop. This loop consists of the following parts:

•    Temperature sensor in the process 
•    Temperature transmitter in the field
•    Local display in the field
•    Control room display 

In this example, the transmitter is a FOUNDATION Fieldbus transmitter, but it could also be an analog transmitter with a 4 to 20 mA output. If using an analog transmitter, you would also have the DCS/PLC system input card as one part of the loop. As the FF transmitter has a digital output, there is no such input card in the control system to affect the loop accuracy. So, you could say that the FF transmitter’s digital output is what is displayed in the control system, unless scaling is done in the control system.

So, in order to calibrate the whole temperature loop, you start by inserting the process sensor into a temperature block that generates a known, accurate temperature. It is recommended that the temperature block have a reference temperature probe to assure the best accuracy. Once each temperature set point has fully stabilized, you read the true temperature of the temperature block and call the control room to get the control room display reading. That way you record the start and end points of the loop.

In this example Roy also reads the transmitter’s output and the local display in each temperature point. 

Please take a look at the video below and make sure to also have a look at Ned’s previous blog post: Loop calibration basics for more details about loop calibration.

 

Topics: Loop calibration

Loop calibration basics

Posted by Heikki Laurila on Oct 20, 2016

Last year, I presented a paper on this topic at an ISA event (Power Generation Division meeting). My opinions are based on customer feedback and what I have learned from them over the years.

While not a new concept, there are advanced calibration techniques based on loop testing. In some cases, it is best practice to perform individual instrument calibration to achieve maximum accuracy (e.g. custody transfer metering). However, there are viable methods where a loop can be tested end-to-end, and if the readings are within acceptable tolerances, there is no need to break into the loop for individual instrument testing. To be effective, a common-sense approach is required, with the goal of minimizing downtime and maximizing technician efficiency while ensuring reliable control and maintaining a safe work environment.

The idea of a loop can mean different things to different people due to their work background and/or industry. In practice, a loop is simply a group of instruments that in combination make a single measurement or effect a control action in a process plant. A typical temperature example would be a temperature element (RTD or T/C) that in turn is connected to a transmitter, which is connected in series to a local indicator and finally a control system input card (DCS or PLC); the signal is then displayed on one or more control panels and the measurement is ultimately used to control the process.

Beamex-calibration-loop-calibration.jpg

When evaluating a loop for testing, an important distinction to make is whether the loop can be tested end-to-end or whether only a portion of the loop can be tested.

For an end-to-end test of the example temperature loop (Figure 1), the temperature element would need to be removed from the process and placed in a dry block or temperature bath in order to simulate the process temperature. The final displayed measurement would be compared to the simulated temperature and the error interpreted. An end-to-end loop test is the best practice; if an accurate temperature measurement is delivered to the control process, it does not matter how the individual instruments are performing. The DCS/PLC value is what is used to make any control changes, alarms, notifications, etc. However, if the loop measurement has a significant error, then the error of each instrument in the loop should be checked and corrected one by one in order to bring the final measurement back into good operation.

In some cases, it is not possible to make an end-to-end loop test. In the example loop, it may be extremely difficult or expensive to remove the probe from the process or the probe cannot be inserted into a temperature block/bath. If this is the situation, then a partial loop test can be performed where the temperature element is disconnected from the transmitter and a temperature calibrator is used to simulate a signal into the transmitter. As in the end-to-end loop test, the final displayed measurement would be compared to the simulated temperature and the error interpreted, etc. While the loop is broken apart, it would be good to check the installed temperature element; perhaps a single-point test could be done by temporarily inserting a certified probe/thermometer into the process and comparing that measurement against the element’s output when connected to a calibrator.
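
As a simple, hypothetical illustration of how such a loop test result might be interpreted (the readings and the tolerance below are made up), the loop error is just the difference between the displayed value and the known simulated input, compared against the loop tolerance:

```python
def loop_error_check(simulated_input, displayed_value, tolerance):
    """Compare an end-to-end loop reading against the simulated input.

    All values are in the same engineering unit (e.g. °C); tolerance is the
    allowed loop error.
    """
    error = displayed_value - simulated_input
    return error, abs(error) <= tolerance

# Example: dry block set to 150.00 °C, DCS shows 150.35 °C, loop tolerance ±0.50 °C
error, within_tolerance = loop_error_check(150.00, 150.35, 0.50)
print(f"Loop error: {error:+.2f} °C, within tolerance: {within_tolerance}")
```

In practice you should also take the total uncertainty of the test into account before stating a pass or fail, as discussed in the calibration uncertainty post.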

By approaching the task of calibration with a fresh look, there are plenty of opportunities to “do more with less” and effectively “touch” every instrument in the plant more efficiently using loop calibration strategies. Logical and careful planning of loop testing strategies will result in improved control performance without compromising quality, reliability or safety of plant operations.

Read the white paper to learn how to analyze loop error and walk through various loop testing examples including wireless/fieldbus, multivariable and more.

CTA-loop-calibration

 

Topics: Loop calibration

How to implement calibration software

Posted by Heikki Laurila on Oct 13, 2016

Getting calibration software is much more than just selecting the right product and buying the necessary number of licenses. The hardest part is actually getting people to use the calibration software in the right way, especially within a set project budget and time limitations.

To achieve this, you should see the calibration software implementation as a project with certain steps.

Calibration-process-infographics_1.jpg

Start by setting goals

You should define and document why you are looking to implement calibration software in the first place. What do you expect from the new process, tools and people? What is the main business challenge or opportunity that drives your company to implement new calibration software? Better compliance? Shorter calibration times? Make sure you can answer this upfront, as it will guide you and the resources you use throughout the entire process.

Calibration-process-infographics_2.jpg

Focus on people

Remember that calibration software is used by people. New software many times means a new way of performing and managing calibrations. The significance of this calibration process change can often be underestimated, which can result in new technology not being taken efficiently into use. It can be surprisingly difficult to even get people to use new tools or to get them to use the tools in a correct way. So focus on people and managing the organizational change.

Calibration-process-infographics_4.jpg

 

Follow the steps of an IT project model

The biggest problems in a calibration software implementation project, such as scope creep and  budget and schedule overruns, are usually caused by poor planning and inadequate resourcing. A lot of these risks can be removed or at least minimized by using an IT project model for the software implementation.

There are a lot of different IT project models to choose from, and ultimately they all serve the same purpose. The benefit of following an IT project model is that you make your best effort to ensure that all necessary viewpoints and requirements are taken into consideration, that there are no surprises in the process, that decisions are made in time, and that input is received from the relevant people. Calibration software implementation is a cross-functional project involving people from instrumentation, IT and quality; therefore, early input is key to success.

Typical steps of an IT implementation model applied to a calibration software project include the following:

  • Target setting and project planning. Here you define the targets, scope, resources as well as the model that you will use for implementing the new calibration software.
  • Process blueprinting. Key thing is to understand that you are not just installing new software, but actually introducing a new way of doing things.
  • Technical specifications. Based on the process description, a technical user requirement specification is prepared.
  • Development and testing. Calibration software is configured based on requirements set in the previous project phases.
  • Go-live. The final go-live decision is made after development and testing.

Calibration-process-infographics_3.jpg

If you want to find out more, click the below image to download our white paper on this topic.

 WP: Project model for calibration software installations

 

Topics: Calibration software

How to calibrate an RTD HART temperature transmitter

Posted by Heikki Laurila on Oct 07, 2016

The temperature transmitter is a popular instrument in process plants. Like most transmitters, it needs to be calibrated to assure that it is operating accurately. So, let’s take a look at how the calibration is done!

Before looking at the calibration, it is good to remember the fundamental purpose of a temperature transmitter: temperature needs to be measured in various (remote) locations in a process plant. A temperature sensor (typically an RTD or a thermocouple) is inserted into the process to measure the process temperature. The output of the temperature sensor is a resistance (RTD) or a voltage (thermocouple) that depends on temperature in a way specified in international standards.

Anyhow, it is not practical to transfer the sensor’s output signal over long distances. Therefore, the temperature sensor is connected to a temperature transmitter that converts the sensor signal into a format that is easy to transfer over long distances, conventionally a 4 to 20 mA current signal. This kind of current signal can be transferred over long distances in normal, simple cables; even if there is some resistance in the cable, it does not change the current, it just causes some voltage drop.
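
Because the sensor’s output is defined in international standards, a calibrator can reproduce it exactly when simulating the sensor. As a hypothetical illustration for an RTD, the Pt100 resistance to simulate at a given temperature follows from the IEC 60751 coefficients (the sketch below only covers temperatures of 0 °C and above):

```python
# IEC 60751 Pt100 coefficients (valid for 0 °C and above)
R0 = 100.0       # nominal resistance at 0 °C, ohms
A = 3.9083e-3
B = -5.775e-7

def pt100_resistance(t_c):
    """Pt100 resistance in ohms at temperature t_c (°C), for t_c >= 0."""
    return R0 * (1 + A * t_c + B * t_c ** 2)

for t in (0, 50, 100, 150, 200):
    print(f"{t:>3} °C -> {pt100_resistance(t):.2f} ohm")
```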

But to get to the actual calibration - in order to calibrate the temperature transmitter, we need to check that it is converting input to output accurately.

There are different calibration scenarios, you can for example:

  1. calibrate the transmitter only by simulating the sensor and measuring the output current
  2. calibrate the transmitter with the sensor connected into it
  3. calibrate the whole temperature measurement loop from the sensor to the DCS/SCADA display

Today we focus on the first scenario, and in order to show how the calibration is done, we have made a video for you. If you are interested in the second and third scenarios, we have videos for them as well on our YouTube channel.

In this video we will show you how to:

  • Make the connections
  • Make an "As Found" calibration
  • Perform trimming of the HART RTD transmitter
  • Make an "As Left" calibration

Please check out the video below, hope you like it!

 

Related content:

Topics: Temperature calibration, Transmitter, HART

What is a documenting calibrator and how do you benefit from using one?

Posted by Heikki Laurila on Sep 30, 2016

Beamex-ICS-process-comparison---blog-pic-v1.jpg

Figure 1. The calibration process with and without a documenting calibrator.

Let’s face it, for most of us, documentation is not the most exciting thing to do.
But when you calibrate your process instruments, it is important to document the calibration results, otherwise the calibration is a wasted effort. Instead of documenting the calibration results manually with pen and paper, wouldn’t it be nice if the calibrator did all the documentation automatically? Sounds interesting? In this post I will talk about Documenting Calibrators that automate the documentation of your calibrations.

So, what is a Documenting Calibrator?

To start with, a calibrator or process calibrator is a piece of test equipment that is accurate enough for you to use for calibrating your process instruments. The calibrator needs to have a valid calibration that is traceable to national standards, to enable you to perform traceable calibrations of your process instruments.

What makes it documenting?

  • Firstly, in my definition a calibrator is documenting if it can save the calibration results into its memory during the calibration, so that there is no need for manual documentation of the calibration results.
  • Secondly, a documenting calibrator should also be able to communicate with calibration software in order to transfer the calibration results electronically from its memory into the calibration software. The communication should also work the other way around, i.e. the calibration software should be able to send information about the work that needs to be done to the calibrator.

How is the calibration process different when using a documenting calibrator vs. when using a non-documenting calibrator?

Calibration process without a documenting calibrator:

Beamex-ICS-process-comparison---blog-pic-v2.jpg

If you don’t use a documenting calibrator, then the steps in your calibration process are typically:

  1. Your planning & scheduling tool tells you that it is time to go and calibrate certain instruments.
  2. You print out the work order on a paper and distribute it to the proper department/person.
  3. The calibration tech goes out into the field and performs the calibration.
  4. The calibration tech documents the calibration results with pen and paper.
  5. When the calibration is done, the work order can be closed.
  6. The result is checked/verified.
  7. The calibration results are archived.

You say you don’t archive paper results, but you have calibration software?

Well, if you have calibration software where you manually type in the results after you get back to the computer, that is just another error-prone case of manual entry of results and additional time spent, so it is not a very good process.

Calibration process with a documenting calibrator

Beamex-ICS-process-comparison---blog-pic-v3_002.jpg

If you are using a documenting calibrator together with a calibration management system supporting it, your process is more streamlined:

  1. The calibration work is planned and scheduled in the calibration management software (or the maintenance management system that the calibration software is linked to).
  2. The work orders are sent electronically to the documenting calibrator.
  3. You perform the calibration with the documenting calibrator, results being automatically saved into the calibrator’s memory.
  4. Finally, you receive the results from the calibrator electronically into your calibration management software. The results are automatically stored in the database (and your maintenance management system is automatically notified).

BAM! You did the calibration and the documentation was done automatically!

Why use a documenting calibrator? What are the benefits?

As the earlier steps and illustrations show, the calibration process is pretty different with or without documenting calibrators. The main benefits with using documenting calibrators are:

  • The calibration takes much less time and therefore saves you resources, time and money.
  • The quality, consistency and reliability of the results are better, as there are no mistakes caused by manually writing down calibration results.
  • The calibration procedure guides the users and assures a uniform process.
  • Results are automatically stored in the data base, no manual typing or archiving of paper results is necessary.

The calibration management software may also be integrated into your maintenance management system, enabling paperless flow of work orders between the two systems.

Who should use a documenting calibrator?
Who benefits most from using one?

So, why should you use documenting calibrators and when do you get most benefits from using them? I’d say you get most benefits in the following cases:

  • If you do a lot of calibrations, you will save more time and money with the more effective calibration process.
  • If you are a regulated company, or just want to benefit from the improved quality of the calibration data and a uniform process with automated functions.
  • If you want to improve the effectiveness of your calibration process.
  • If you want to make sure your calibration process fulfills the requirements of your quality system or external audits.
  • If you want to utilize a more streamlined calibration process.

Summary

That was a short take on Documenting Calibrators. Please feel free to send any comments or questions. If you are interested to read more on this subject, please take a look at our White Paper: The benefits of using a documenting calibrator.

Download white paper

 

Yours,
Heikki

Heikki Laurila is Product Marketing Manager at Beamex Oy Ab. He started working for Beamex in 1988 and has, during his years at Beamex, worked in production, the service department, the calibration laboratory, as quality manager and as product manager. Heikki has a Bachelor’s degree in Science. Heikki’s family consists of himself, his wife and their four children. In his spare time he enjoys playing the guitar.

 

Related content

 

Topics: Calibration, Calibration software, Digitalization

How to calibrate HART pressure transmitters

Posted by Heikki Laurila on Sep 21, 2016

A pressure transmitter is one of the most common instruments in a process plant. In order to assure its accuracy, it needs to be calibrated. But what do you need to calibrate it and how is it done?

The pressure transmitter’s purpose is to convert the pressure signal into an electric signal that is easy to transmit to the control system, which may be located far from the pressure measurement location. Most commonly the pressure is converted into a 4 … 20 mA standardized signal.

Anyhow, the transmitter’s accuracy is about how accurately it converts the input pressure into the mA signal. If you want to know that, then you need to calibrate the transmitter. 
In order to calibrate a pressure transmitter, you need:

  • a loop supply (if not connected to the control system’s loop supply)
  • a pressure generator to generate input pressure
  • an accurate calibrator to measure the input pressure
  • an accurate calibrator to measure the output mA current.
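
Once you have applied known pressures and measured the output current, the error calculation itself is straightforward. Here is a minimal sketch (the 0–10 bar range, the readings and the ±0.25 % tolerance are assumptions made for the example) that converts a measured current into an error as a percentage of span:

```python
def ideal_output_ma(pressure, range_low=0.0, range_high=10.0):
    """Ideal 4-20 mA output of a linear transmitter, assumed range 0-10 bar."""
    return 4.0 + 16.0 * (pressure - range_low) / (range_high - range_low)

def error_percent_of_span(pressure, measured_ma, range_low=0.0, range_high=10.0):
    """Error of the measured output expressed as % of the 16 mA span."""
    return (measured_ma - ideal_output_ma(pressure, range_low, range_high)) / 16.0 * 100.0

# Example: 5.0 bar applied, transmitter outputs 12.03 mA, tolerance ±0.25 % of span
error = error_percent_of_span(5.0, 12.03)
print(f"Error: {error:+.3f} % of span, pass: {abs(error) <= 0.25}")
```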

Typically, the pressure transmitter is a HART protocol transmitter, so in case there is any need to adjust/trim it, you will need to use a device supporting HART communication.

So, how to calibrate? Explaining how to do the calibration would result in quite a long text, so we have put together a video for you instead. The video shows you how to calibrate and trim a HART pressure transmitter.

Please have a look at the video:

 

Related content:

 

Topics: Pressure calibration, Transmitter, HART

Why and how to calibrate WirelessHART transmitters?

Posted by Heikki Laurila on Sep 16, 2016

In one of our previous blog posts we took a general look at how to calibrate smart transmitters. This time we focus on the WirelessHART transmitters. The WirelessHART transmitters are gaining popularity, so I think it is good to take a look at:

  • What they are
  • How they work
  • Why they should be calibrated
  • How they can be calibrated

Beamex_MC6_wireless_HART_calibration_v1.jpg

WirelessHART 

Although the HART protocol was developed by Rosemount back in the mid-80s, it was not until 2007 that WirelessHART was ratified as a standard by the HCF (HART Communication Foundation).

The conventional (wired) HART transmitter uses a digital signal superimposed on top of the standard 4 … 20 mA signal. WirelessHART uses, as the name pretty clearly indicates, wireless communication on a 2.4 GHz radio band. There is no analog mA output signal in a WirelessHART transmitter, only the wireless signal.

And since there is no analog signal and no cables, the operating power cannot come from the control system; instead, each transmitter needs a battery. The more often the transmitter sends its signal, the shorter the battery life becomes. In practice, the output is often sent quite seldom (maybe once every few minutes), and the transmitter is typically used in applications where high speed is not so important.

 

Operation

The fundamental purpose of a wireless HART transmitter is the same as for other process transmitters – i.e. to measure the input process signal and convert that into an accurate output signal. But in this case the output is a wireless digital signal.

Calibration

In order to calibrate a WirelessHART transmitter, you need to provide an accurate input and read the output. You could read the wireless output if you have a suitable device that can read the wireless signal, but as mentioned earlier, the WirelessHART transmitter is often configured to transmit quite seldom, so that would be a very slow calibration process. Fortunately, WirelessHART transmitters also have screw terminals that you can connect to in the same way as with a wired HART transmitter. You can use a HART communicator or a calibrator supporting HART to do that. The output signal via the screw terminals also responds very fast, which makes the calibration process more practical. If you find that you need to do any adjustment/trimming or configuration, that too needs to be done by communicating via the screw terminals.

A principle block diagram of a wired and wireless HART transmitters:

Wired_WirelessHART_transmitters.jpg

 

Why calibrate?

At this point you may ask why this modern, fancy transmitter needs to be calibrated in the first place. In short, the answer is: in order to know that it is working correctly and accurately. If you are interested in learning more about the reasons, please take a look at our previous blog post on Why Calibrate (and its White Paper).

If you want to learn more about the calibration of Wireless HART transmitters, please check out our white paper Calibrating WirelessHART transmitters.

Read White Paper Now

We have also done a short video with Mike on this subject, please feel free to check that out on our YouTube channel: How to calibrate a WirelessHART transmitter.

Thanks for reading, please feel free to suggest subjects you would like us to write about.

 

Yours,
Heikki

Heikki Laurila is Product Marketing Manager at Beamex Oy Ab. He started working for Beamex in 1988 and has, during his years at Beamex, worked in production, the service department, the calibration laboratory, as quality manager and as product manager. Heikki has a Bachelor’s degree in Science. Heikki’s family consists of himself, his wife and their four children. In his spare time he enjoys playing the guitar.

Topics: Transmitter, HART

How a modern calibration process improves power plant performance

Posted by Heikki Laurila on Sep 14, 2016

Today we will take a quick look into a power plant and discuss why it is beneficial to have a proper calibration process in place. We will also discuss the typical calibration-related challenges we have seen in power plants and finally, we’ll take a brief look at how a modern calibration process could help overcome these challenges.

Beamex_MC6_calibrator.jpg

 The 5 most common reasons for having a proper calibration process

A summarized list of the most common reasons:

  1. Efficiency

One of the most important reasons is to ensure that the plant is working efficiently, i.e. produces as much energy as possible from the source fuel. Even the most sophisticated control system gets its data from the process measurements, and if the measurements are not accurate, the control system cannot control the plant properly.

  2. Plant safety

Plant safety is an essential matter for all types of power plants. Depending on the plant type, the number of safety circuits may vary, but for all plant types it is important to keep the safety circuits properly calibrated. There are also regulations related to the calibration of safety circuits.

  3. Regulation

There are various regulations valid for power plants that stipulate the calibration.

  4. Emissions control

Depending on the plant type there are different requirements for the calibration of the continuous emission monitoring measurements.

  5. Invoicing-related measurements

It is of course of great importance that any measurements that are directly related to the invoicing are calibrated properly.

In one of my previous blog posts I discussed the reasons for calibration. Some of the reasons listed in that blog post are also valid for power plants, so go check it out.

 

 3 most common calibration-related challenges in a power plant

The three most commonly seen calibration-related challenges in a power plant seem to be:

  1. Lack of metrology resources

In power plants, like in any process plant, the number of dedicated, professional metrology experts seems to get smaller and smaller. Workers need to carry out various tasks and rarely have the possibility to focus on metrology and calibration.

  2. Legacy calibration process

Often the calibration process has not been updated in a long time and is very outdated. This kind of process can be very labor-intensive, may result in poor calibration accuracy, generate a lot of paperwork and be prone to errors.

  3. Outage support

Often most of the calibration work must be done during an outage. This causes a big challenge for resource management, and often external resources need to be hired to manage all the work.

 

How could a modern calibration process help?

Here I list a few ways that a modern calibration process could help power plants:

Management, monitoring and scheduling of calibrations can be automated, typically with the help of a dedicated calibration software tool. The calibration software may communicate directly with the maintenance management system and form an automated, paperless flow of work orders.

Documenting calibrators can document the calibration results automatically and transmit the results electronically to the calibration software. This helps to avoid manual entry of results and manual pass/fail calculations.

So, basically a modern calibration process makes the calibration processes more efficient by automating steps and making them paperless, helping resources to be more effective and assuring better quality of results.

Learn more about the business benefits of a modern calibration process.

Download white paper

 

Yours,
Heikki

 Heikki Laurila is Product Marketing Manager at Beamex Oy Ab. He started working for Beamex in 1988 and has, during his years at Beamex, worked in production, the service department, the calibration laboratory, as quality manager and as product manager. Heikki has a Bachelor’s degree in Science. Heikki's family consists of himself, his wife and their four children. In his spare time he enjoys playing the guitar.

 

 

Topics: Calibration, Calibration process

Why must "Smart" transmitters also be calibrated?

Posted by Heikki Laurila on Sep 09, 2016

The so-called "Smart" transmitters are getting ever more popular in the process industry, but what are these "Smart" transmitters and why do I say that they, too, must be calibrated?

That is what we will be discussing in this blog post. Besides that, we will also take a quick look at how to calibrate these transmitters.

Smart_Transmitters_Beamex.jpg

 

What is a “Smart” transmitter?

First, the use of the word “Smart” is not really standardized anywhere, so anyone can use it the way they want. Anyhow, it normally refers to a modern transmitter that is processor-based and has capability for digital communication. In this blog post the word smart means process transmitters that support HART, WirelessHART, FOUNDATION Fieldbus or Profibus protocols. 

And since we talk about calibration, it also refers to transmitters that are measuring something and that are generally being calibrated, such as pressure, temperature, flow, level etc. transmitters.

How to calibrate a smart or non-smart transmitter?

In order to do a metrological calibration on a transmitter, whether it is an analog one or a smart one, we want to check its accuracy to make sure that the conversion from input to output is correct. In practice we need accurate, traceable equipment (a reference standard or calibrator) to generate/measure the input and to measure the output. We typically do this at a few points across the measurement range and document the results. If we find too much error, we need to adjust the transmitter in order to make it operate accurately.

In case of a conventional analog transmitter the output is a mA signal which is easy to measure. But in case of a smart transmitter the output is digital, so we need to have some suitable device to read the digital output. Other than that, the calibration of analog and smart transmitters is very similar.

Whether the transmitter’s output is analog or digital, it does not change the purpose of the transmitter – to accurately convert input to output – and it does not change the fact that you need to calibrate it to make sure that it is accurate. 

If we think about the whole measurement loop, an analog loop also needs A/D conversion at the control system end, which is another item to be calibrated in the loop. In the case of digital smart transmitters, there is no need for such an additional conversion, as the digital output from the transmitter is sent directly to the control system.

A simplified principle diagram of a conventional analog and digital smart pressure transmitters:

Analog_Digital_Smart_Pressure_Transmitters.jpg

 

Why calibrate?

As mentioned, the smart transmitter has digital output, but it still needs to be calibrated to make sure it is working accurately, i.e. converting the input to output correctly. Sure modern transmitters are getting more accurate and more stable than old transmitters, so they don’t necessarily need to be calibrated as often as the older ones.

But if you don’t calibrate your transmitter, it means that you don’t care if it is measuring correctly, so why did you buy and install the transmitter at all?

And if the transmitter sales guy is offering you new smart transmitters that don’t need to be calibrated at all, please forgive him, he is anyways just trying to reach his sales budget… ;-)

There are many reasons and motivations to make periodic calibrations, these are listed in the earlier blog post Why calibrate? (and the related White Paper) so please have a look at that to learn the most common reasons for calibration.

 

Configuration is not calibration!

One great feature of modern smart transmitters is that they can be configured through the digital communication. Typically you use a communicator or computer with a suitable modem to do that. The configuration can also be done remotely from a long distance. With “configuration” I mean that you can view and edit various parameters of the transmitter. It is important to remember that if we talk about metrological calibration (i.e. process to check the transmitter accuracy), this cannot be done with a communicator only and remotely. For a proper calibration, you always need the accurate and traceable reference standard (calibrator).

If you are interested to see how you could use Beamex MC6 to calibrate different smart transmitters, have a look at some of these videos on our YouTube channel:

And of course if you have any questions or comments, please let us know.

I would also really like to hear your suggestions on topics you are interested in learning more about and that we should write about in this blog!

If you did read this far, great. Thank you and talk (or write) to you later.

Yours,
Heikki

Heikki Laurila is Product Marketing Manager at Beamex Oy Ab. He started working for Beamex in 1988 and has, during his years at Beamex, worked in production, the service department, the calibration laboratory, as quality manager and as product manager. Heikki has a Bachelor’s degree in Science. Heikki’s family consists of himself, his wife and their four children. In his spare time he enjoys playing the guitar.

 

Topics: Transmitter, Calibration process, FOUNDATION Fieldbus, HART

ISO9001:2015 – how do the changes affect your calibration process?

Posted by Heikki Laurila on Sep 07, 2016

The ISO9001 standard was revised in 2015. In this blog, I will examine the main changes. And as I am madly interested in calibration, I will mostly concentrate on the changes that will affect your calibration processes, if you are an ISO9001 certified company or are applying the standard.

The ISO9000 series is the most popular quality management standard. It was initially published almost 30 years ago, in 1987. We received the ISO9001 certificate for Beamex back in 1992 when I was working as Quality Manager. It was all pretty new then and it was the first revision of the standard used. Since then, the standard has been revised several times, in 1994, 2000, 2008 and 2015. While the 2008 revision was considered pretty minor, the 2015 revision was a more major one. As usual, certified companies have a three-year transition period to update their quality system to meet the new revision.

 

Main changes in brief

Below is a short list including some of the main changes in the new revision:

  • The high-level structure (HLS) of ISO9001 has been updated to the common structure of other ISO standards, such as the ISO14000 environmental standard.

  • An important fundamental higher level update is that the term “management” has been replaced with “leadership.” This is a pretty big modernization to the thinking.

  • The older version of the standard had a dedicated chapter for test and measuring equipment; the new revision distributes this into the higher-level chapters, setting requirements on human and equipment resources. So, this is not as specific and easy to interpret as before. But on the other hand, it requires you to think about the bigger picture.

  • Finally, the risk-based thinking approach is a big change that has been added throughout the standard. Pretty much, this changes the way the standard is to be applied. This is also one of the biggest changes in the standard that will affect the calibration processes. Therefore, next I will look into some basics for risk-based thinking and discuss some practical calibration-related risks.

Risk-based thinking

Sure, the risk-based thinking may be familiar to many of you from other standards and other occasions. Although the idea is to find and evaluate the risks, it is good to remember that it also helps to reveal new opportunities.

Risk index

When risks are evaluated, a tool often used is a risk index. It is a systematic way to first evaluate the impact/severity of something happening and the likelihood of it happening. You determine a rating for impact and likelihood, then multiply these to get a risk index. Often impact and likelihood are rated on a 1 to 5 scale, so you end up with a risk index from 1 to 25. The bigger the index, the bigger the risk. Maybe a picture explains it better.

ISO9001_Riskindex.png

The green, yellow and red indicate low, medium and high risk. For further study, there is for example the dedicated IEC 31010:2009 standard for risk management/assessment.
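
As a simple sketch of that calculation (the low/medium/high thresholds below are my own illustration, not taken from the standard):

```python
def risk_index(impact, likelihood):
    """Risk index on a 1-25 scale from 1-5 impact and likelihood ratings."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must be rated 1-5")
    index = impact * likelihood
    if index <= 5:
        level = "low"
    elif index <= 12:
        level = "medium"
    else:
        level = "high"
    return index, level

# Example: a critical safety measurement drifting out of tolerance
print(risk_index(impact=5, likelihood=2))   # -> (10, 'medium')
```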

 

Risk-based thinking in calibration

So how do you apply this risk-based thinking to your calibrations? Obviously, the updated standard does not give you any direct answers, but it requires you to make the necessary analyses. Now, I will very briefly list a few practical things I think should be considered to minimize risks in calibrations:

Importance of each measurement

Analyze and determine which measurements at your site are the most important. For critical measurements, install better measurement devices, calibrate them more often, analyze the consequences of an out-of-tolerance situation, use tighter pass/fail limits, and adjust even when there is a small error that is not yet close to the limits. Even if your resources are limited, you should use them on the most important calibrations.

Base acceptance limits on the real need

Make sure the pass/fail limit of each measurement is based on the real process need and don’t just use the transmitter manufacturer’s specs for each installed location.

Eliminate all manual entry of results 

Avoid any manual entry of calibration results. Use documenting calibration equipment that transfers the results to your calibration software automatically.

Automate error calculation and Pass/Fail decision

Manually calculating error and comparing it to acceptance limits can be a very error prone task. Implement systems that make this calculation for you.

Analyze drift over time

When you analyze the drift of each instrument over time, you will be equipped with valuable data on its stability, and can adjust the calibration period accordingly. Avoid making unnecessary calibrations (still remembering the costs and consequences of an out-of-tolerance situation).
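
As a hypothetical sketch of such a drift analysis (the calibration history values are invented, and statistics.linear_regression requires Python 3.10 or newer), a simple linear trend fitted through past calibration errors gives a rough drift rate per year:

```python
from statistics import linear_regression

# Hypothetical calibration history: years since first calibration vs. observed error (% of span)
history_years = [0, 1, 2, 3, 4]
history_error = [0.02, 0.06, 0.11, 0.14, 0.19]

slope, intercept = linear_regression(history_years, history_error)
print(f"Estimated drift: {slope:.3f} % of span per year")

# Rough estimate of when the error would reach a ±0.5 % tolerance limit
tolerance = 0.5
years_to_limit = (tolerance - intercept) / slope
print(f"Tolerance limit reached after roughly {years_to_limit:.1f} years")
```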

Always be aware of total uncertainty

Don’t just use device A to calibrate device B using process C and be happy if the numbers look good. Instead, always be aware of the total uncertainty of the calibration, including the different uncertainty components coming from equipment and process.

Automate as much as possible

Try to automate your calibration process as much as possible because this makes the process repeatable and minimizes the risk of human error.

 

Summary

This latest revision of the ISO9001 standard is more suitable for modern thinking. The changes may not be so easy to implement, as it takes some thinking and possibly workflow adjustment to accomplish them. This was a very short summary written on this subject. So, if you want to learn more, download the white paper below. 

Yours in blog,
Heikki

Heikki Laurila is Product Marketing Manager at Beamex Oy Ab. He started working for Beamex in 1988 and has, during his years at Beamex, worked in production, the service department, the calibration laboratory, as quality manager and as product manager. Heikki has a Bachelor’s degree in Science. Heikki’s family consists of himself, his wife and their four children. In his spare time he enjoys playing the guitar.

Everything you need to know about how the ISO9001:2015 standard affects calibration, in one white paper. Read it now!

 Read White Paper Now

 

Topics: Calibration, Calibration process

The key aspects of building a calibration system business case

Posted by Heikki Laurila on Mar 01, 2016

An investment in calibration equipment and systems must be financially justified, just like any other business investment. But does a cheaper cost of purchase always mean a higher return on investment? Not necessarily. When building a business case for a calibration system investment, what may seem at the outset to be cheaper may not necessarily be so, if the evaluation is made from a total or life-cycle cost perspective instead of evaluating the cost of purchase only.

Calibration system A can have a lower cost of purchase than calibration system B, but if calibration system B has better operational efficiency, the total cost of calibration system B can be lower than alternative A. The key point is to consider what elements form the total cost of implementing and running a system instead of focusing on isolated costs only, such as the purchase price of a calibrator or cost of a software license.

Why invest into a calibration system?

Your calibration system business case starts with defining a purpose. Why are you considering making an investment in calibration? What value are you looking to generate for the company from the investment? Many times, a calibration system investment is competing for the same monetary resources as, for instance, a new building or renewing the company parking lot. That’s why the evaluation should start with defining why you are investing in calibration. Some common reasons for making a calibration system investment relate to improving the efficiency of the calibration process (e.g. time savings), boosting plant safety, enriching product quality through more accurate measurements, improving compliance with applicable standards and regulations, and harmonizing calibration processes between different manufacturing plants of a company.

Compare total costs, focus on system life cycle

When investing in a calibration system and building a business case for comparing alternative solutions, the nature of calibration activities as well as the life cycle of the system and calibration process should be included in the equation. Purchase price as well as feature and function comparisons are important, but they are only a start and only a small part of the financial evaluation of alternative calibration solutions. In short, there are basically three cost-generating elements related to calibration: equipment, labor and downtime (planned/unexpected). When trying to understand the total costs of different calibration system investments as well as the economic benefits related to the alternatives, you can ask yourself:

  • What are the implementation vs running costs of the system?
  • What is the expected system life cycle?
  • What type of productivity benefits can be achieved (e.g. time savings, process improvements)?
  • Does the system impact process downtime?
  • What are the costs of maintaining the system?
  • Does the system improve plant safety?
  • Is product quality influenced?
  • What is the estimated labor impact?
  • Does the system impact risk mitigation and compliance?

Build for the complete life cycle
 

The key element in creating a business case for a calibration system is to build the business case for the entire life cycle of the system and not just compare the cost of equipment purchases and software licenses. Therefore, you should actually compare alternative calibration processes, and not the equipment as such. Consider what it means to implement and run a process with alternative equipment, and look at this from various viewpoints: financial, time savings, risk mitigation, implementation/running/maintenance costs as well as headcount impact, among other things.
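As a simplified illustration of this life cycle thinking, here is a minimal Python sketch comparing the total cost of two hypothetical calibration systems over five years. All the numbers are invented placeholders, and the calculation ignores discounting and many real-world cost elements.

```python
# A minimal sketch of comparing the life cycle cost of two alternative
# calibration systems instead of purchase price alone. All values are
# invented placeholders for illustration.

def life_cycle_cost(purchase, yearly_running, yearly_labor, yearly_downtime, years=5):
    """Total cost over the system life cycle (no discounting applied)."""
    return purchase + years * (yearly_running + yearly_labor + yearly_downtime)

system_a = life_cycle_cost(purchase=20_000, yearly_running=2_000,
                           yearly_labor=40_000, yearly_downtime=10_000)
system_b = life_cycle_cost(purchase=35_000, yearly_running=3_000,
                           yearly_labor=25_000, yearly_downtime=5_000)

print(f"System A, 5-year total: {system_a:,}")
print(f"System B, 5-year total: {system_b:,}")
# Despite the higher purchase price, system B ends up cheaper over its life cycle.
```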

 
When implementing a calibration system, the risk of failure is high - 
Learn how to overcome the common pitfalls.
Download white paper

 

Regards,
 Villy

Topics: Calibration, Calibration process, Calibration software, calibration system, General

Field Calibration or Workshop Calibration?

Posted by Heikki Laurila on Feb 23, 2016

We are sometimes asked if it is better to calibrate process instruments in the field, or in a calibration workshop. It is obviously impossible for us to give a correct generic answer for that question. The right answer for each application depends on various things. In this post I have listed a few common arguments for both field calibration and workshop calibration.

FieldVSWorkshop.jpg

Terminology - in this post the term "field calibration" means using portable calibration equipment and going into the factory to calibrate the process instruments on site. The term “workshop calibration” means removing the process instruments from their current location and taking them into a workshop to calibrate them there, with stationary calibration equipment.

Field calibration

Here are some of the most typical reasons for calibrating in the field

  • If you want to calibrate the whole measurement loop in one go, starting from the measurement point in the field to the control room display, you need to go out into the field to start from the process sensor/transmitter. In that scenario, the individual parts of the loop often only need to be calibrated separately if the total loop error is too big.
  • If you don’t want to or cannot remove the instrument from its installed location, then you should perform the calibration in the field. 
  • Calibrating in the field assures that the instrument is calibrated in the same actual field conditions where it is also used.

  • Field calibration can be a more effective way to calibrate, assuming the field instruments have been designed and installed so that calibration access in the field is easy.

  • For many quantities, convenient portable calibration equipment is available.

  • Field calibration seems to be the more commonly used method.

Workshop calibration

Common arguments for workshop calibration:

  • If you calibrate instruments during the commissioning phase, when the instruments are not yet installed in the field, it is more convenient to calibrate instruments in a dedicated calibration workshop.

  • If your requirement is for best possible uncertainty, it is often easier to get better total uncertainty in a workshop than out in the field. This is achieved using dedicated stationary high-accuracy calibration equipment and controlled environmental conditions and processes in the workshop.

  • If you use rotating spares, or want to calibrate loose spare devices before installing them in the field, it is practical to do this in a workshop.

  • If you want to achieve accreditation for your calibration work, it is easier to get accreditation for calibration performed in a calibration lab/workshop, than out in the field.

  • Sometimes the actual field conditions can be very challenging or harsh for performing calibration, and in that case it is better to do it in the workshop.

  • When you want to calibrate your portable working standard calibration equipment using your reference standard equipment, this calibration is often done in the workshop.

  • In a dedicated workshop, all equipment is always in its place and ready for use. The workshop can also be made ergonomic and convenient to work in.

Summary

Often the most effective calibration process is a combination of field calibration and workshop calibration. Some instruments are calibrated out in the field using portable calibration equipment, while some calibrations and services are performed in the dedicated workshop.

Even though this post was pretty short, I hope it offers some food for thought.
And please let me know if you have any examples of your own of when it is most convenient to calibrate in the field versus in a workshop.

Yours in blog,
 Heikki

Heikki Laurila is Product Marketing Manager at Beamex Oy Ab. He started working for Beamex in 1988 and has, during his years at Beamex, worked in production, the service department, the calibration laboratory, as quality manager and as product manager. Heikki has a Bachelor’s degree in Science. Heikki's family consists of himself, his wife and their four children. In his spare time he enjoys playing the guitar.
 

Still not sure if field calibration or workshop calibration is the right choice for you?
Read our white paper to learn more.

Download white paper

 

 

Topics: Workshop calibration, Calibration, Calibrator, Field calibration

Calibration of a HART transmitter and the most common misconceptions about a HART communicator

Posted by Heikki Laurila on Jan 26, 2016

A HART transmitter is the most common smart transmitter type used in the process industry. What should be taken into account when calibrating and trimming a HART transmitter? Let’s take a look at the most important things to take into consideration. And also, let’s set straight the most common misconceptions regarding HART communicators.

 

The structure of a HART transmitter

Let’s first take a look at the fundamental structure of a HART transmitter (Figure 1). The process measurement (pressure, temperature sensor, etc.) is connected to the Input, and from the Output you will get an analog mA signal dependent on the input. It is also possible to read out the digital PV variable, which is frequency shift modulated and travels on top of the mA signal in the same wires.

However, in practice the mA signal is the one that is most commonly used. As can be seen, the input goes first into the input section, which makes an A/D conversion and turns the signal into digital form. From that point, the digital process variables (like the Primary Variable, PV) can be read. The digital signal then goes through the conversion sections, where various configurations are applied, such as range and transfer function. Finally, the digital representation of the mA signal (AO) is converted into a true analog mA signal by the D/A conversion in the output section.

 

HART_Transmitter-1.jpg
Figure 1. The structure of a HART transmitter.

 

Terminology

Calibration

When you want to perform a metrological calibration, what do you do? You measure (or generate) an accurate input and also accurately measure the output, most often the mA output. You do this at a few points along the whole range, for example in 25% steps. Finally, you document the calibration. The input and output should be measured with an accurate and traceable reference standard or calibrator. According to most international standards, calibration does not include the adjustments that may be needed, but in practice adjustment is sometimes included in the term calibration.


Configuration

The term configuration means changing some settings of the HART transmitter using a communicating device that supports the HART protocol. It is good to remember that configuration is not calibration, and configuration does not assure the transmitter’s accuracy. Configuration can be done with a HART communicator, or with a calibrator supporting HART communication.


Trimming (or adjusting)

If you have calibrated the HART transmitter and found that it has some error, the transmitter should be trimmed (or adjusted) to measure accurately. Sometimes people think that if they use only the mA output, they only need to adjust the Output/Analog Section. But as the Input Section and Output Section are in series, both should always be adjusted when the mA output is used. Sure, you could adjust one to compensate for the error in the other, but that is not recommended.
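To illustrate why both sections matter, here is a minimal Python sketch of two sections in series, each with its own small, invented gain and offset error. It is only a conceptual model of the idea above, not how any specific transmitter implements its signal chain.

```python
# A minimal sketch of why both the input and output sections of a HART
# transmitter need trimming when the mA output is used: the two sections are
# in series, so their individual errors combine. The small gain/offset errors
# below are invented for illustration.

def input_section(pressure, span=100.0, gain_err=1.002, offset_err=0.1):
    """A/D: pressure (0...span) -> digital PV in % of range, with a small error."""
    return (pressure / span * 100.0) * gain_err + offset_err

def output_section(pv_percent, gain_err=0.998, offset_err=-0.05):
    """D/A: digital value in % -> mA output, with its own small error."""
    return 4.0 + 16.0 * (pv_percent * gain_err + offset_err) / 100.0

pressure = 50.0                     # true input, 50 % of range
pv = input_section(pressure)        # what the digital PV reports
ma = output_section(pv)             # what the analog output really is

print(f"Digital PV: {pv:.3f} %  (ideal 50.000 %)")
print(f"mA output:  {ma:.3f} mA (ideal 12.000 mA)")
# Trimming only the output section would leave the input-section error in the
# digital PV, and vice versa - so both sections should be trimmed.
```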

 

HART Communicator – the most common misconceptions set straight

A HART communicator cannot calibrate a HART transmitter

A common misconception seems to be that a HART communicator can be used for transmitter calibration – Nope, that is not correct. As explained above, the metrological calibration of a HART transmitter cannot be done with a HART communicator alone; you will always need a reference standard (calibrator) to do it.


A HART communicator cannot trim a HART transmitter

Another common misconception is that a HART communicator alone can trim a HART transmitter – Nope, that is not correct. Trimming (or adjusting) cannot be done with a HART communicator alone; you will always need a reference standard (calibrator) to make the measurements required for the trimming.


A HART communicator cannot measure mA

Sometimes people think that a HART communicator can measure the transmitter’s mA output signal – Nope, that is not correct. A typical HART communicator can show the AO value, which is the digital representation of the nominal mA value. It is not a true measured mA current signal and does not tell you what the transmitter is really outputting. Even if the AO value in the HART communicator shows 4.000 mA, it does not mean that the output current is actually 4.000 mA. So you really need an mA meter or a calibrator to measure the true mA output.

 

Summary

Hopefully this short post helped to clarify a HART transmitter’s principal structure, calibration, trim and the related terminology. Also, please make sure that these common misconceptions about the HART communicator do not exist at your site.


WARNING! Commercial content follows!

Please check out the Beamex MC6 which contains both a HART communicator and an accurate process calibrator - all in one box. Not to mention all the other goodies it includes… ;)

Yours in blog,

 Heikki

Heikki Laurila is Product Marketing Manager at Beamex Oy Ab. He started working for Beamex in 1988 and has, during his years at Beamex, worked in production, the service department, the calibration laboratory, as quality manager and as product manager. Heikki has a Bachelor’s degree in Science. Heikki's family consists of himself, his wife and their four children. In his spare time he enjoys playing the guitar. 

 

How to calibrate HART pressure transmitters - Beamex Video

 

Topics: HART Communicator, Pressure calibration, Temperature calibration, Transmitter, Beamex MC6, Calibration, Calibrator, Field calibration, HART

Vibration measurements and calibration

Posted by Heikki Laurila on Jan 18, 2016

Seismic velocity measurements provide ideal resolution at typical rotating equipment running speeds. Vibration is always transmitted through the bearings used on rotating equipment. It has been proven time and time again that measuring vibration on rotating equipment is the most universally effective predictive maintenance practice for critical pumps, motors, compressors, fans, cooling towers and rollers.

For example, over time, industrial motor bearings wear out and begin to wobble. Eventually these bearings will need to be replaced, or the machine will fail. But how do you measure the bearing performance to be sure you aren’t changing them out too soon, or in an even worse case, waiting until the machine breaks down and it’s too late? There is a whole world of 4-20 mA loop vibration sensors, like the Seismic Velocity 4-20 mA transmitter from Metrix or IMI Sensors, and they are gaining in popularity.
 
 Simple and cost effective, these sensors protect and monitor vital plant machinery. They measure the vibration in the bearings, helping to predict and forecast machine failure. They interface directly with the PLC, DCS or SCADA systems already in place for process instrumentation. Thus, no additional budget need be spent on monitoring or data acquisition. Plants that do not monitor vibration can enter the practice with minimal upfront costs. 4-20 mA vibration sensors allow them to perform vibration trending and to notify a technician to run a more detailed diagnostic test of the machinery when its vibration alarm threshold is crossed. Still, these sensors only help in predictive maintenance if they provide accurate measurements. They need to be calibrated too.

During a presentation at the ISA Process Control Symposium in November 2015, Michael Scott, Industrial Product Manager and Certified Category II Vibration Analyst with The Modal Shop, presented Field calibration and testing of industrial vibration protection systems. During this demonstration, he calibrated a 4-20 mA vibration sensor.

Vibration_measurements

 

Using a shaker as a standard, he entered the make, model, and serial number into Beamex CMX calibration management software. Then, a tag for the sensor was created with an input of 0-1 inches per second peak, and an output of 4 to 20 mA (measured). The Beamex MC6 documenting calibrator prompted him to set the shaker to specific target points and log associated mA readings. When uploading the test, the CMX prompted him to select the shaker from a pick-list of standards to fully document what was used to perform the test (shaker serial number for the input and MC6 serial number for the output). The outcome is an automated, paperless vibration sensor calibration with a calibration certificate to provide proof and traceability, which not only verifies the accuracy of the sensor, but can be useful during audits, such as an OSHA VPP Star safety audit.
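For reference, the expected output of such a linearly ranged sensor is easy to calculate. Here is a minimal Python sketch for a 0-1 inch/s peak, 4-20 mA transmitter like the one described above; the target points are example values, not those used in the demonstration.

```python
# A minimal sketch of the expected mA output for a linear 4-20 mA vibration
# transmitter ranged 0...1 inch/s peak. Target points are hypothetical.

def expected_ma(velocity_ips, range_ips=(0.0, 1.0), out_ma=(4.0, 20.0)):
    """Ideal mA output for a given vibration velocity (inches per second peak)."""
    lo, hi = range_ips
    return out_ma[0] + (out_ma[1] - out_ma[0]) * (velocity_ips - lo) / (hi - lo)

for target in (0.0, 0.25, 0.5, 0.75, 1.0):   # shaker target points, in/s peak
    print(f"{target:.2f} in/s peak -> {expected_ma(target):.2f} mA expected")
```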

 

Topics: Vibration measurement, Beamex MC6

How often should instruments be calibrated?

Posted by Heikki Laurila on Jan 07, 2016

One of the questions we get asked most frequently is how often a customer should calibrate his instruments. Unfortunately there is no straight answer to this, at least not one that would always be correct. Instead there is a list of variables that should be taken into account when deciding the calibration period for any measurement device. Let’s take a quick look at these variables.

Note! There is a newer updated version of this blog post. You can find that at the below link:

How often should instruments be calibrated? [Update]

 

Many manufacturers provide a recommendation for the measurement device’s calibration period. Or they may have stability specifications given for different calibration periods. Following the equipment manufacturer’s recommendation is an easy and good starting point.

When you install a measurement instrument into your process, it is important to know what the accuracy requirements for that specific installation location are. Not all the places in a factory where you install similar transmitters have the same accuracy need.  The installed equipment’s specifications compared to the need of the installed location will affect the calibration period. If you install a very accurate transmitter into a location that does not have a high accuracy need, it can be calibrated less often. You don’t always have to follow the specifications of that installed equipment.

The criticality of a measurement location is one important factor related to the calibration period. The more critical locations should naturally be calibrated more often than the less critical ones.

The workload or operating conditions of a measurement device will also affect how often it should be recalibrated. If used very often and/or in very harsh operating conditions, it is good to calibrate it more often.

The stability history of a device is also an important aspect. If you have a long history of a device and it has been found to be very stable, it can be calibrated less often. On the contrary, if the history shows that the instrument drifts and often fails in recalibration, it should logically be calibrated more often. Calibration software can easily show the history trend automatically and will therefore help you make the analysis. Doing it manually may require a lot of work.

In some areas there are regulatory requirements, standards or quality systems that specify how often instruments should be calibrated. It’s tough to argue with them.

In some applications the costs of a failed recalibration are so high that it becomes cheaper to calibrate more often than to let the instrument fail. This is especially the case in pharmaceuticals, food and beverage, and other regulated industries, or in any critical location.

In many industries the quality of the final product cannot be proven by measuring the final product. Instead, various measurements need to be performed during the manufacturing process. These measurements need to be kept accurate with periodic calibrations.

The principles above can be applied to any kind of measurement device, whether it is a process transmitter or a reference standard.

 

History trend Figure 1. The figure illustrates the maximum error of an instrument over time. Each bullet point indicates a calibration, and every time the instrument was calibrated it was also adjusted back to 0% error. The figure shows that in the beginning the drift is smaller than the allowed error limit (the red dotted line), and that the drift diminishes with each interval. Based on this, the calibration interval has been prolonged. However, after a few longer intervals the drift has started to grow again, and therefore the interval has been shortened. Even though the interval was shortened, the drift seems to have gotten worse, which means it would be time to replace the instrument.

This topic gets me going and I could write much more, but could you read much more, or could you even read this much… ;)

The idea of this blog is to keep stories pretty short, so that is all for now.

Yours in blog,
 Heikki

Heikki Laurila is Product Marketing Manager at Beamex Oy Ab. He started working for Beamex in 1988 and has, during his years at Beamex, worked in production, the service department, the calibration laboratory, as quality manager and as product manager. Heikki has a Bachelor's degree in Science. Heikki's family consists of himself, his wife and their four children. In his spare time he enjoys playing the guitar. 

 

Read more about calibration frequency

Topics: Measurement, Calibration, Calibrator, General

Calibrating a square rooting pressure transmitter

Posted by Heikki Laurila on Dec 10, 2015

We quite often receive questions regarding the calibration of a square rooting pressure transmitter.  Most often the concern is that the calibration fails too easily at the zero point. There is a reason for that, so let’s find out what that is.

First, when we talk about a square rooting pressure transmitter, we mean a transmitter that does not have a linear transfer function; instead, it has a square rooting transfer function. When the input pressure changes, the output current changes according to a square rooting formula. For example, when the input is 0% the output is 0% of the range, just as when the input is 100% the output is 100%. But when the input is only 1% the output is already 10%, and when the input is 4% the output is 20%. The figure later in this post illustrates this graphically.

So, why and when would you use that kind of transmitter? It is used when you are measuring flow with a differential pressure transmitter. If you have some form of restriction structure (orifice/venturi) in your pipe, the bigger the flow, the more pressure is generated over that structure. The pressure does not grow linearly with the flow; it grows with a quadratic correlation.

If you want to send a mA signal to your control room, you use a square rooting pressure transmitter that compensates for the quadratic correlation - and as a result, you have a mA signal that is linear to the actual flow signal. You could also use a linear pressure transmitter and make the conversion calculation in your DCS system, ISO-5167 gives more guidance.

So, what about when you start calibrating this kind of square rooting transmitter?

You can, of course, calibrate it in a normal way, by injecting a known pressure to the transmitter’s input and measuring the mA output. You should anyhow remember that the output current does not change linearly when the input pressure changes. Instead, the mA output grows according to the rooting transfer function. This means that in the beginning, when you are at zero input and you have 4 mA output, the transfer function is VERY steep. Even the smallest change in the pressure will cause the output to change a lot. I have illustrated this in the simple figure below. The red curve shows the transfer function of a square rooting transmitter and the blue line shows the function of a linear transmitter.

 

linear-vs-square-rooting Figure 1. Linear versus square rooting.

In practice, this means that if your input pressure measurement fluctuates by just one or a few digits, the output has to change a lot in order for the error to be zero. What happens is that if the measured value fluctuates even in the least significant digit, the error calculation will say that the point fails. In practice, it is just about impossible to make that zero point a “Pass” calibration point within the allowed tolerance.

So what to do? In order to calibrate, you should simply move the first calibration point a bit higher than 0% of the input range. If the first calibration point is at 5 – 10% of the input range, you are already out of the steepest part of the curve and you can get reasonable readings and error calculation. Of course, then you don’t calibrate the zero point, but your process is normally not running at zero point either.

I hope this short explanation made some sense and helped with this issue, let me know if you need any further explanations!

Yours truly,
 Heikki

 

Update, Feb 2018:

How to calculate the output?

There have been some questions on how to calculate the output mA of a square rooting pressure transmitter.

Below is a formula that you can use to calculate what the output current should be at a given input point:

Square rooting pressure transmitter formula

Where:

O_ideal is the theoretical output value at the measured input for a calibration point (I).
I is the measured input for a calibration point.
I_zero is the theoretical input value at Input 0%.
I_fs is the theoretical input value at Input 100% (full scale).
O_fs is the theoretical output value at Output 100% (full scale).
O_zero is the theoretical output value at Output 0%.
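To make this concrete, here is a minimal Python sketch of the calculation, assuming the standard square-root transfer function consistent with the variable definitions above. The 0-250 mbar input range is an invented example; the sketch reproduces the 1% → 10% and 4% → 20% behavior mentioned earlier in this post.

```python
# A minimal sketch of the square-root transfer function, consistent with the
# variable definitions above:
#   O_ideal = O_zero + (O_fs - O_zero) * sqrt((I - I_zero) / (I_fs - I_zero))
# The 0...250 mbar input range is an invented example.

import math

def ideal_output(i, i_zero=0.0, i_fs=250.0, o_zero=4.0, o_fs=20.0):
    """Ideal mA output of a square rooting transmitter for input i (same unit as the range)."""
    ratio = (i - i_zero) / (i_fs - i_zero)
    return o_zero + (o_fs - o_zero) * math.sqrt(ratio)

# 1 % of input gives 10 % of output span, 4 % gives 20 %, as stated above.
for pct in (0, 1, 4, 25, 100):
    i = pct / 100 * 250.0
    ma = ideal_output(i)
    print(f"Input {pct:>3} % -> output {ma:.2f} mA "
          f"({(ma - 4) / 16 * 100:.0f} % of output span)")
```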

 

 

 

Heikki Laurila is Product Marketing Manager at Beamex Oy Ab. He started working for Beamex in 1988 and has, during his years at Beamex, worked in production, the service department, the calibration laboratory, as quality manager and as product manager. Heikki has a Bachelor's degree in Science. Heikki's family consists of himself, his wife and their four children. In his spare time he enjoys playing the guitar.

 

Interested in differential pressure flowmeter calibration? Watch our webinar  recording

 

Topics: Pressure calibration, Square rooting transmitter, Workshop calibration, Calibration, Field calibration, Flow calibration

Top 5 reasons why companies update their calibration systems

Posted by Heikki Laurila on Dec 07, 2015
As many as every fourth company in the process industry is currently considering making some kind of update to its calibration process and systems. I admit, the number sounds quite high, but it is based on a specific study we recently conducted with the International Society of Automation (www.isa.org) concerning calibration process changes.

Top-5-reasons-for-updating Figure 1. Top 5 reasons for updating a calibration system. Results from a study concerning calibration process change, conducted by Beamex and ISA in 2015.

So, why do companies plan or decide to update their calibration processes? By an update in the calibration process I mean, in this context, making a change in the tools, systems and work procedures for performing, documenting and managing calibration of process instruments. Any change must of course happen for a reason. Most likely the reason is a challenge or problem in the current way of doing things that the company wants to fix. If there’s no problem, there’s no clear reason or justification to update anything. So what are the top five reasons or problems causing calibration process updates?

The most common reason why companies decide to update their calibration systems is to make technicians’ work faster and more efficient. 42 % of companies state this as one of the key reasons to implement changes in their calibration processes. The key reason is therefore related to gaining economic and productivity efficiency by making a change in the calibration system.

Almost as many people state that the key reason for making a change is to ensure compliance with regulatory and quality requirements. The return on investment of compliance is maybe more difficult to calculate compared to calculating time-savings of a calibration engineer between current and new calibration processes, but you can always ask yourself: what is the price of non-compliance? Ultimately, non-compliance could even mean shutdown of a manufacturing site by a regulatory authority, and we can all understand what kind of economic impact that would have on a business.

The third most common reason for making a process change is to improve business performance and plant production. Again, gaining economic and productivity efficiency is at the heart of a process change, but now the reasons are maybe even on a broader scale, to improve plant- or even company-level performance through smarter calibration.

The fourth most common reason to implement a process change is to replace an old and outdated legacy system. Instead of just gaining economic or compliance improvements, companies are therefore also “forced” to update their calibration systems based on technological necessities and risks, such as currently managing calibrations with software that is no longer supported or maintained with new releases. The fifth most common reason is also technology-related, as companies also decide to update their calibration processes due to new technological requirements, such as smart instruments being used at a manufacturing site.

As said, every change requires a reason, and the reason often takes the form of a problem or challenge that requires fixing. The top five reasons for making a calibration process change are related to economics, compliance and technology. So what’s your problem? What would drive you to make a calibration process change? Also, would you like to learn in which areas companies most often need help when making a calibration system and process update? Let me know :)

Regards,
 Villy

Villy Lindfelt is Director of Marketing & Legal Affairs at Beamex Oy Ab.
He supervises the marketing team as well as focuses on contracts and documentation related to calibration system implementation projects. Villy started working for Beamex in 2004 and has since then been surrounded by bright people who have taught him a thing or two about calibration and implementing calibration systems. He has a Master’s degree in Economics as well as a Master of Laws degree. Villy's family consists of himself, his wife and their two daughters.

 

Topics: ISA, Calibration, Calibration process, calibration system, Calibrator, General

How to calibrate a pressure switch

Posted by Heikki Laurila on Nov 30, 2015

Update March 2020: We have a newer blog post on pressure switches, please find it here: Pressure Switch Calibration

We often get asked questions regarding the calibration of a pressure switch.
 Since we didn’t have any videos on the topic on our YouTube channel, our dynamic duo from our US office, Roy and Ned, decided to make one.

In this video they show how to calibrate a pressure switch. In the video they use a hand pump to generate the pressure. It is also possible to use an automatic pressure controller (model POC6), controlled by the MC6, to perform a fully automatic calibration of a pressure switch. And naturally it is also possible to calibrate temperature and electrical switches automatically. At the beginning of the video Roy and Ned present something they call statistical facts...

Subscribe to our YouTube channel to never miss a new Beamex video!

Topics: Pressure calibration, Pressure switch, Temperature calibration, Beamex MC6, Calibration, Calibrator, General

Why calibrate?

Posted by Heikki Laurila on Nov 27, 2015

Hi all,

In this blog post we’ll continue to talk about another fundamental topic: 
Why should you calibrate? 

It is good to remember the old rule: “All measurement devices measure wrong, and calibration tells how wrong they are.”

 


All measurement devices tend to drift over time; they lose their accuracy unless calibrated at certain intervals. Of course, more modern instruments drift less than old-fashioned ones.

In many industries you need to comply with regulations that require you to calibrate at certain intervals. These regulations include ISO9000, ISO14000, FDA regulations and many more.

A process plant takes in raw material and converts it into finished products, trying to do so as effectively as possible. Keeping all the critical process measurements accurate with regular calibrations helps the plant work more effectively and produce more output and revenue.

Money is an important reason for many things, also for calibration. When the money transfer or invoicing is based on measurements, it is clear that the more accurate the measurements are, the more accurate the money transfer is.

One very important motive for calibrating is safety. This includes employee safety at the plant, ensuring that the plant is a safe place to work. It also includes customer safety, for example in the food and pharmaceutical industry.

The environment is also something we all should take good care of. The various emissions of an industrial plant are measured with measurement devices, and keeping these accurate with regular calibrations helps to keep the environment clean.

This was a short summary on the reasons why to calibrate. Beamex can offer more detailed information if you are interested. Feel free to comment below if you have questions or comments.

We have also written two blog posts on our experiences of what calibration actually is and how often instruments should be calibrated, click on the links to read them. 

Write to you soon,
 Heikki

Heikki Laurila is Product Marketing Manager at Beamex Oy Ab. He started working for Beamex in 1988 and has, during his years at Beamex, worked in production, the service department, the calibration laboratory, as quality manager and as product manager. Heikki has a Bachelor's degree in Science. Heikki's family consists of himself, his wife and their four children. In his spare time he enjoys playing the guitar.

Do you know the costs and risks of not calibrating?
We do and we have written a white paper about it, read it now!

Download white paper

 

Also, please check out the article What is calibration on our web site.

 

Topics: Pressure calibration, Temperature calibration, Calibration, Calibrator, General

What is calibration?

Posted by Heikki Laurila on Nov 27, 2015

Hello readers,

As this is a calibration blog, I will begin by writing a short post on a very fundamental thing – what is calibration? But how can I explain briefly what calibration is?

I will start with a practical example: I was driving my car and the speedometer showed 80 km/h. Suddenly, a police officer stopped me and told me that the speed limit was 60 km/h and that my actual speed was 78 km/h. Hey, I just calibrated my speedometer! Now I know that when the meter shows 80 km/h, the real speed is actually 78 km/h. I even got a certificate, a pretty expensive one as well… But for some reason it was called a ticket, and not a certificate… ;)

More seriously, in short – calibration is a documented comparison of the device to be calibrated against a traceable reference device. The reference device is often referred to as a calibrator. The reference device should naturally be more accurate than the device to be calibrated. There are many opinions on the accuracy ratio of the calibrator and device under test. Anyhow, the important thing is that you are aware of all the uncertainties related to your reference standard and the whole calibration process.

As mentioned earlier, the reference standard (or calibrator) needs to be traceable. This means that the reference standard must have a valid calibration, meaning that it has been calibrated against a higher-level traceable reference and that its calibration period is not overdue. Sometimes the term “calibration” also includes the adjustments made to the device under test so that it reads the same as the reference standard. Many international standards keep the adjustment as a separate thing, and this is also how we think at Beamex.

I hope this provides a good starting point for future posts in this blog.

Blog to you soon,
 Heikki

 
Heikki Laurila is Product Marketing Manager at Beamex Oy Ab. He started working for Beamex in 1988 and has, during his years at Beamex, worked in production, the service department, the calibration laboratory, as quality manager and as product manager. Heikki has a Bachelor's degree in Science. Heikki's family consists of himself, his wife and their four children. In his spare time he enjoys playing the guitar. 

 

Download our free Ultimate Calibration e-book

Topics: Process automation, Calibration, Calibrator, General
