Many sensors used in industrial automation applications – linear position sensors and long-range distance sensors being two of the most common types – produce an analog output that is representative of the position or distance being measured.
In a previous entry, we discussed and defined the term “true analog resolution” as it relates to sensor technology. By “true analog”, we mean that the sensor produces an analog signal without the use of a Digital-to-Analog Converter, or DAC. Rather, sensors that operate in the true analog world rely on analog circuitry (integrators, comparators, etc.) to generate the output signal. The advantage of this method is that such true analog sensors are able to produce an output signal with “essentially infinite” resolution.
If you want to know how I can use an oxymoron like “essentially infinite”, and keep a straight face, take a minute to review the “Understanding True Analog Resolution” entry from a few weeks ago. Go ahead, I’ll wait…
What is Digitally-Derived Analog?
Other sensors do use a Digital-to-Analog Converter (DAC) to produce an analog output. A DAC converts a digital value, or number, into an analog voltage or analog current that is representative of the digital number.
DAC Resolution – A “bit” of an explanation
Resolution is the smallest increment of position change that can be detected and indicated by the output. Calculating the resolution of a sensor or measurement system that uses a DAC is fairly straightforward. A DAC is rated by the number of binary digits, or bits, it can process. Typical DAC resolutions you're likely to encounter in the industrial world include 8-bit, 10-bit, 12-bit, and 16-bit.
An 8-bit DAC, for example, is capable of processing a binary (1 or 0) number that has eight places:
In decimal, 2⁸ is 256:
2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 = 256
So, in this example, 256 different digital values can be assigned to a corresponding analog voltage (or current).
If a sensor had a 256” measuring range and used an 8-bit DAC that produced a 0-10V output, the entire 256” range would need to be chopped up into 256 (2⁸) steps. This means that the position resolution would be 256” divided by 256 steps, or 1”.
On paper, it looks something like this:
00000000 = 0.000 V or 0.00”
00000001 = 0.039 V or 1.00”
00000010 = 0.078 V or 2.00”
And so on, through:
11111110 = 9.922 V or 254.00”
11111111 = 9.961 V or 255.00”
Note that the highest code, 255, reads one step below full scale – with 256 codes counting from zero, the top code is 255, not 256.
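The code-to-output mapping above can be reproduced with a short script. This is a minimal sketch assuming an ideal 8-bit DAC spanning 0-10 V over a 256” range; the names are illustrative, not from any vendor's API. (With 256 codes counting from zero, the top code, 255, reads one step below full scale.)

```python
# Map each 8-bit DAC code to its ideal output voltage and indicated position.
BITS = 8
FULL_SCALE_V = 10.0      # 0-10 V analog output
RANGE_INCHES = 256.0     # 256" measuring range
STEPS = 2 ** BITS        # 256 codes: 0 through 255

def code_to_voltage(code: int) -> float:
    """Ideal output voltage for a given DAC code."""
    return code * FULL_SCALE_V / STEPS

def code_to_position(code: int) -> float:
    """Position reading (in inches) for a given DAC code."""
    return code * RANGE_INCHES / STEPS

for code in (0, 1, 2, 254, 255):
    print(f'{code:08b} = {code_to_voltage(code):.3f} V or {code_to_position(code):.2f}"')
```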
If the same sensor had a 16-bit DAC, which is capable of processing numbers up to 16 places, this same 0-10V output signal could be chopped up into 65,536 pieces (2¹⁶ = 65,536). Using the same 256” measuring range yields a resolution, in this case, of 256” divided by 65,536 steps, or 0.0039”.
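The same arithmetic generalizes to any measuring range and bit count. A small helper (the function name is hypothetical) confirms both results:

```python
def dac_resolution(measuring_range: float, bits: int) -> float:
    """Smallest position step representable by a DAC with the given bit count."""
    return measuring_range / (2 ** bits)

print(dac_resolution(256.0, 8))    # 8-bit: 1" per step
print(dac_resolution(256.0, 16))   # 16-bit: about 0.0039" per step
```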
The Numbers Game
Applying this formula to sensors with much smaller working ranges yields some interesting results. Take, for example, a sensor with a 2” working range and a 16-bit DAC. If you crunch the numbers (2” divided by 65,536), you arrive at a theoretical resolution of 0.00003”. This corresponds to a voltage change of just 0.00015 V, or 0.15 mV (10 V divided by 65,536). In the real world, this is unrealistic: the noise level present in the system is almost certainly going to be much higher than 0.15 mV. So, in practice, the amount of noise is still going to be the determining factor.
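To see why noise wins, compare the size of one DAC step (one least-significant bit, or LSB) in volts against an assumed noise floor. The 1 mV noise figure below is purely illustrative, not a measured value:

```python
FULL_SCALE_V = 10.0
BITS = 16

# One LSB of a 16-bit, 0-10 V output: about 0.15 mV.
lsb_v = FULL_SCALE_V / (2 ** BITS)

# Assumed system noise floor (illustrative only).
noise_v = 0.001  # 1 mV

# Noise spans this many DAC steps; changes smaller than this are buried.
steps_buried = noise_v / lsb_v
print(f"One LSB = {lsb_v * 1000:.2f} mV; noise covers about {steps_buried:.1f} steps")
```

With 1 mV of noise, roughly six or seven of those theoretical steps disappear into the noise, so the last few bits of the DAC carry no usable position information.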
Only As Strong As the Weakest Link
As may be evident from the above example, practical, usable resolution is going to be determined by the weakest link in the chain. A sensor may have 16-bit output resolution, but if the signal is being applied to an input having only 12-bit resolution, the practical resolution is going to be, at best, 12-bit. Even then, high levels of noise may degrade the resolution even further.
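The weakest-link idea can be expressed directly: the usable resolution is set by whichever stage in the signal chain has the fewest bits. A sketch with illustrative numbers (ignoring noise):

```python
def practical_resolution(measuring_range: float, *stage_bits: int) -> float:
    """Resolution limited by the lowest-bit stage in the signal chain."""
    weakest = min(stage_bits)
    return measuring_range / (2 ** weakest)

# A 16-bit sensor output feeding a 12-bit analog input: 12 bits governs.
print(practical_resolution(256.0, 16, 12))   # 256" / 4096 steps = 0.0625"
```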
To achieve maximum accuracy, it is necessary to choose the proper sensor and the proper input electronics, and to follow proper shielding and grounding practices.