Calibration

Sensors produce a voltage proportional to the property they measure. Before they can be used, and at intervals afterwards, they must be calibrated so that the voltage can be accurately converted into the engineering units of the property being measured. In industry, companies must be able to show that the instruments they use have been calibrated to a particular standard. There are several grades of standard, each calibrated against a device one grade above it.

Calibration is done by measuring the sensor's response to an input which can be measured to a greater precision than the sensor is capable of. It has two parts:

1. Setting two known points at the extremes the instrument will measure

2. Measuring at intervals between the extremes and obtaining the precision.
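The two-point part of the procedure can be sketched as follows. This is a minimal illustration, assuming an ideally linear sensor; the function name and the example values (a 350 thou span block) are assumptions, not part of the handout.

```python
# Two-point calibration sketch: with the output nulled at 0 thou and the gain
# set so that a known slip block reads its own value in mV, conversion from
# meter reading to thou is a straight line between the two extremes.

def make_converter(zero_mv, span_mv, span_thou):
    """Return a function converting a meter reading (mV) to thou."""
    gain = span_thou / (span_mv - zero_mv)   # thou per mV
    def to_thou(reading_mv):
        return (reading_mv - zero_mv) * gain
    return to_thou

# Assumed example: zero reads 0 mV, a 350 thou block reads 350 mV.
to_thou = make_converter(zero_mv=0.0, span_mv=350.0, span_thou=350.0)
print(to_thou(175.0))  # a reading halfway along gives 175.0 thou
```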

The apparatus consists of a linear potentiometer with a travel of 0.375 inches (about 9.5 mm). It is connected to a signal conditioning circuit with nulling and variable gain, which will be used to set the output between two convenient limits. The output will be measured using a DVM, which should be set on the 2 V (2000 mV) range.

The accurately known inputs are slip blocks, which are used to calibrate workshop measuring equipment such as micrometers. These are sized in thou (1 inch = 1000 thou). The aim is to get the meter to read in thou.

1. Set the extreme limits - 0 and a little less than 375 thou.

a. With no slip block in place use the trim tool to adjust the nulling pot to give 0V on the meter.

b. Insert a combination of slip blocks as close to 375 thou as those given to you will allow. The value must be no greater than 375 thou.

c. Adjust the gain pot to make the meter read the value of the slip block combination.

d. Remove the blocks.

e. If the meter reads zero, the extremes are set. If it does not, repeat steps a - d as many times as required until neither end requires a change.

f. The extremes are now set.
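The reason steps a - d may need repeating is that the null and gain adjustments can interact. The sketch below simulates that loop, assuming (as an illustration only; the handout does not specify the circuit topology) that the null offset is added after the gain stage, so changing the gain disturbs the zero. All numbers are made up.

```python
# Simulated step 1: alternately null the zero and set the gain until the
# zero reading no longer changes. raw_zero_mv / raw_block_mv are the
# (assumed) raw sensor outputs with no block and with the largest block.

def set_extremes(raw_zero_mv, raw_block_mv, target_thou, tol=0.01):
    gain, null = 1.0, 0.0
    for _ in range(50):
        null = -gain * raw_zero_mv                   # step a: zero reads 0
        gain = (target_thou - null) / raw_block_mv   # step c: block reads its size
        zero_reading = gain * raw_zero_mv + null     # steps d-e: re-check zero
        if abs(zero_reading) < tol:
            break                                    # step f: extremes are set
    return gain, null

gain, null = set_extremes(raw_zero_mv=12.0, raw_block_mv=700.0, target_thou=350.0)
```

With these illustrative numbers the loop settles after a few passes: the block end reads exactly 350 and the zero end reads within the tolerance.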

2. Measure intervals

a. Place combinations of slip blocks at roughly 50 thou steps and record the meter reading against the actual slip block size. Include the reading with no slip block and with the largest size used in part 1.

b. Run the LabVIEW program 'calibration fitting'.

c. Enter the combined slip block thicknesses in the 'actual thou' column and the readings from the meter in the 'mV from meter' column.

As data is entered, the other columns will fill with as much data as can be calculated and a graph will be plotted. The program uses the data in the first two columns to calculate a best-fit line.

·  column 3, 'best fit thou', fills with the 'perfect' slip block thicknesses that would give the data in the 'mV from meter' column, assuming the fit in use.

·  column 4, 'actual thou - best fit thou', is the difference between the first and third columns and represents the 'error' in the data.

·  column 5, 'fit coefficients', contains the polynomial coefficients in the form:

thou = const + C1*mV + C2*mV² + ... + Cn*mVⁿ

The 'largest % error' is the largest absolute value from the fourth column, expressed as a percentage of the highest value in the first column. The position of the largest error in the column is also given.
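What the fitting program computes can be sketched with NumPy in place of LabVIEW. This is an illustration only: the readings below are invented, and the direction of the fit (thou as a polynomial in mV) is an assumption consistent with the coefficient form above.

```python
# Sketch of the 'calibration fitting' calculation with made-up readings.
import numpy as np

actual_thou   = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0, 300.0, 350.0])
mv_from_meter = np.array([1.0, 50.5,  99.0, 151.0, 199.5, 249.0, 301.5, 349.0])

order = 1                                               # 'polynomial order' control
coeffs = np.polyfit(mv_from_meter, actual_thou, order)  # column 5: fit coefficients
best_fit_thou = np.polyval(coeffs, mv_from_meter)       # column 3
error = actual_thou - best_fit_thou                     # column 4

largest_pct = 100.0 * np.max(np.abs(error)) / actual_thou.max()
position = int(np.argmax(np.abs(error)))
print(f"largest % error = {largest_pct:.2f}% at row {position}")
```

Raising `order` refits with a higher-order polynomial, exactly as the 'polynomial order' control does.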

The fit in use is a polynomial of the order given in the 'polynomial order' control. Order 1 is a linear fit, 2 a square (quadratic), etc. Change the order to see the effect on the line and errors. Data will not be displayed for order n-1 if there are fewer than n co-ordinates entered.

The graph shows white dots for the actual co-ordinates and a red line for the fit. Although lower errors can in general be obtained with higher-order polynomials, the aim is not to make every co-ordinate fit exactly on a line. Each co-ordinate has an error on it, and the general trend of any smooth curve (or straight line) should be preserved.
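The overfitting caution above can be demonstrated numerically. With made-up noisy points on a straight line, an order-7 polynomial through 8 points reproduces every point exactly (zero error in column 4), yet between the points it can swing away from the true trend, which the order-1 fit captures. The data here are invented for illustration.

```python
# Order n-1 through n points: perfect at the points, unreliable between them.
import numpy as np

x = np.arange(8.0)                       # 8 calibration points (arbitrary units)
y = x + np.array([0.1, -0.15, 0.1, -0.1, 0.15, -0.1, 0.1, -0.15])  # noisy line

low  = np.polyfit(x, y, 1)               # order 1: follows the underlying trend
high = np.polyfit(x, y, 7)               # order 7: zero error at every point

# Midway between two calibration points the fits disagree; the difference
# is the overfitted wiggle.
print(np.polyval(low, 0.5), np.polyval(high, 0.5))
```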

PIAE2_calibration.doc v2 - Paul Williams 18/09/12