The process of calibration comprises the following steps:
• Selection of a calibration kit matching the connector type of the test port (see Calibration Standards and Calibration Kits). The calibration kit includes standards such as a SHORT, an OPEN, and a LOAD with matched impedance. The magnitude and phase responses, i.e. the S-parameters, of the standards are well known. The characteristics of the standards are represented in the form of an equivalent circuit model, as described in Calibration Standards Model.
• Selection of a calibration method (see Calibration Methods and Procedures) based on the required measurement accuracy. The calibration method determines which error terms of the model (some or all of them) are compensated.
• Measurement of the standards within a specified frequency range. The number of measurements depends on the type of calibration.
• Comparison by the Analyzer of the measured parameters of the standards against their predefined values. The difference is used to calculate the calibration coefficients (systematic errors).
• Saving of the table of calibration coefficients into the memory of the Analyzer, where it is used for error correction of the measurement results of any DUT.
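The steps above can be sketched for the simplest case, a one-port reflection calibration with SHORT, OPEN, and LOAD standards. This is a minimal illustration, not the Analyzer's actual firmware algorithm: the error-term names Ed (directivity), Es (source match), and Er (reflection tracking), and the use of ideal standard values (+1, -1, 0), are assumptions for the sketch; a real kit would supply modeled, frequency-dependent standard responses.

```python
import numpy as np

def measured(Ga, Ed, Es, Er):
    """One-port error model: raw reading Gm produced by actual reflection Ga."""
    return Ed + Er * Ga / (1 - Es * Ga)

def solve_error_terms(Ga_std, Gm_std):
    """Solve Ed, Es, Er from three (actual, measured) standard pairs.
    The model linearizes as: Gm = Ed + (Ga*Gm)*Es + Ga*(Er - Ed*Es)."""
    A = np.array([[1.0, Ga * Gm, Ga] for Ga, Gm in zip(Ga_std, Gm_std)],
                 dtype=complex)
    b = np.array(Gm_std, dtype=complex)
    Ed, Es, x3 = np.linalg.solve(A, b)   # x3 = Er - Ed*Es
    Er = x3 + Ed * Es
    return Ed, Es, Er

def correct(Gm, Ed, Es, Er):
    """Apply the calibration coefficients to a raw DUT measurement."""
    return (Gm - Ed) / (Er + Es * (Gm - Ed))

# Demo at one frequency point: fabricate error terms, "measure" ideal
# standards (OPEN = +1, SHORT = -1, LOAD = 0), recover the terms, then
# correct a raw DUT reading back to its actual reflection coefficient.
Ed_t, Es_t, Er_t = 0.05 + 0.02j, 0.10 - 0.03j, 0.98 + 0.01j
Ga_std = [1.0, -1.0, 0.0]
Gm_std = [measured(g, Ed_t, Es_t, Er_t) for g in Ga_std]
Ed, Es, Er = solve_error_terms(Ga_std, Gm_std)
Ga_dut = 0.3 + 0.4j
Ga_corrected = correct(measured(Ga_dut, Ed_t, Es_t, Er_t), Ed, Es, Er)
```

In a real calibration this 3x3 solve is repeated at every frequency point of the sweep, and multi-port methods add further error terms, but the comparison of measured against predefined standard values is the same idea.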
Calibration is always performed for a specific channel, as it depends on the channel's stimulus settings, particularly the frequency span. This means that a separate table of calibration coefficients is stored for each individual channel.
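The per-channel storage can be pictured as one coefficient table per channel, with one complex value per frequency point of that channel's sweep. The names here (CalTable, store, apply) and the three one-port coefficients are illustrative assumptions, not the Analyzer's internal data structures.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CalTable:
    """Hypothetical per-channel calibration record: one complex
    coefficient per point of the channel's stimulus frequency grid."""
    freqs: np.ndarray  # stimulus frequencies, Hz
    Ed: np.ndarray     # directivity
    Es: np.ndarray     # source match
    Er: np.ndarray     # reflection tracking

cal_tables: dict[int, CalTable] = {}  # keyed by channel number

def store(channel, freqs, Ed, Es, Er):
    """Save a channel's coefficient table (steps of the calibration above)."""
    cal_tables[channel] = CalTable(freqs, Ed, Es, Er)

def apply(channel, Gm):
    """Elementwise one-port error correction across the channel's sweep."""
    t = cal_tables[channel]
    return (Gm - t.Ed) / (t.Er + t.Es * (Gm - t.Ed))
```

Because the table is tied to the channel's frequency grid, changing the stimulus settings of a channel generally invalidates its stored coefficients, which is why calibration is per channel.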