Measurement is necessary for humans to comprehend the external environment, and we have developed a sense of measurement over millions of years of living. Measurements require instruments that give scientists a numerical value. The issue is that the outcome of every measurement made with any measuring device is subject to some degree of uncertainty. This degree of indeterminacy is referred to as error. Accuracy and precision are two critical considerations when collecting measurements. Accuracy describes how close a measurement is to the true or accepted value, while precision describes how closely repeated measurements agree with one another. Let us examine examples of precision and accuracy in depth in this essay.
Accuracy
Accuracy is the degree to which a measurement agrees with the true or correct value. A clock is said to be accurate if it strikes twelve precisely when the sun is directly overhead. The clock’s measurement (twelve) and the phenomenon it is intended to measure (the sun at zenith) agree. Without knowledge of or access to the true value, accuracy cannot be meaningfully discussed. (It is worth noting that the true value of a measured quantity can never be known exactly.)
Accuracy refers to the agreement between the measurement and the true value and does not by itself indicate the instrument’s quality. The instrument may be of excellent quality yet still miss the true value. We assumed above that the clock’s job is to track the sun’s apparent motion across the sky. However, under our time-zone system, the sun is directly overhead at twelve o’clock only if one is at the centre of the time zone. At the eastern edge of the time zone the sun is directly overhead at about 11:30, whereas at the western edge it is directly overhead at around 12:30. Thus, at either extreme, a twelve o’clock reading contradicts the phenomenon of the sun being at the local zenith, and we might argue that the clock is inaccurate. Here, the accuracy of the clock reading is limited by our time-zone system rather than by any flaw in the clock.
With time zones, however, clocks measure something slightly more abstract than the sun’s position. We define the central clock of a time zone as correct if it matches the sun, and all other clocks in the time zone as correct if they match the central clock. Thus, a clock on the eastern edge of a time zone that displays 11:30 while the sun is overhead is still correct, since it agrees with the central clock; a clock reading 12:00 at that hour would be inaccurate. The idea to grasp here is that accuracy simply refers to the agreement between the measured value and the accepted value, which may or may not say anything about the quality of the measuring instrument.
Precision
Precision refers to the closeness of two or more measurements to each other. For example, if you weigh a given item five times and obtain 3.2 kg each time, your measurement is very precise. Precision is not synonymous with accuracy: you can be extremely precise yet inaccurate, and you can also be accurate yet imprecise.
For instance, if your measurements of a specific chemical are, on average, close to the known value but widely scattered, you have accuracy without precision.
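As a rough sketch of this idea (the readings below are invented for illustration), a few lines of Python can summarise a set of replicate measurements: how close the mean is to the accepted value reflects accuracy, while the spread of the readings reflects precision.

```python
# Minimal sketch with made-up numbers: accurate but imprecise measurements
# of a quantity whose accepted value is taken to be 3.2 kg.
import statistics

true_value = 3.2                       # accepted value, assumed for the example
readings = [2.9, 3.6, 3.0, 3.5, 3.1]   # hypothetical replicate measurements

mean = statistics.mean(readings)       # closeness of the mean to 3.2 reflects accuracy
spread = statistics.stdev(readings)    # a small spread would indicate high precision

print(f"mean = {mean:.2f} kg (accepted value {true_value} kg)")
print(f"standard deviation = {spread:.2f} kg")
```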
Consider a basketball player shooting baskets as a helpful analogy for understanding accuracy and precision. If a player shoots accurately, the ball will always end up close to or in the basket. If the player shoots precisely, the ball will always land in the same spot, which may or may not be near the basket. A good player is both accurate and precise: the ball consistently lands in the same place, and that place is in the hoop.
Error
All measurements are vulnerable to error, which adds to the uncertainty of the result. Errors are classed as either human or technical. Perhaps you are transferring some volume of liquid from one tube to another and accidentally spill part of it while pouring: this is human error.
Technical error is classified into two types: random and systematic. As the name indicates, random errors occur unpredictably and follow no discernible pattern. Systematic error arises when there is a problem with the instrument. For instance, a scale may be incorrectly calibrated and read 0.5 g when there is nothing on it. As a result, all measurements would be overstated by 0.5 g. Unless this is taken into account, your result will contain some error.
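For the mis-calibrated scale above, here is a minimal sketch of how such a systematic zero error can be corrected; the readings are hypothetical, and only the 0.5 g offset comes from the example.

```python
# Minimal sketch: correcting a systematic zero error on a mis-calibrated scale.
# The scale reads 0.5 g with nothing on it, so every reading is 0.5 g too high.
zero_error = 0.5                       # grams shown when the pan is empty

raw_readings = [10.7, 25.9, 3.8]       # hypothetical readings in grams
corrected = [r - zero_error for r in raw_readings]

print(corrected)                       # [10.2, 25.4, 3.3]
```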
How are precision, accuracy, and error related?
Random error is reduced by using a more precise instrument, one that measures in finer increments, and by improving repeatability or reproducibility (precision). Consider a classic laboratory experiment in which you must determine the acidity of a sample of vinegar by measuring the volume of sodium hydroxide solution required to neutralise a given volume of the vinegar. You perform the experiment and record the result. To be certain, you should repeat the procedure with another identical sample from the same bottle of vinegar. If you have carried out this experiment in the laboratory, you will know how unlikely it is that the second attempt will give exactly the same result as the first. Indeed, if you perform several replicate (that is, identical in every way) experiments, you are likely to obtain scattered results.
As noted above, the more measurements we perform, the closer we come to determining a quantity’s true value. By taking many measurements (replicates), we can use simple statistics to estimate how close the mean value is likely to be to the true value, provided the system is free of systematic error. As the number of measurements grows, the mean deviates less and less from the true value.
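A small simulation can illustrate this; the true value and the size of the random error below are invented for the example.

```python
# Minimal sketch: the mean of replicate measurements approaches the true value
# as the number of measurements grows, provided the error is purely random.
import random
import statistics

random.seed(0)
true_value = 25.00   # hypothetical true titration volume in mL
noise = 0.20         # hypothetical random error (standard deviation) in mL

for n in (5, 50, 500, 5000):
    replicates = [random.gauss(true_value, noise) for _ in range(n)]
    mean = statistics.mean(replicates)
    print(f"n = {n:4d}  mean = {mean:.3f} mL  deviation = {abs(mean - true_value):.3f} mL")
```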
Mnemonic to Recall the Distinction
A simple approach to remember the distinction between accuracy and precision is as follows:
Accurate is correct (or close to the true value).
Precise is repeating (or repeatable).
Accuracy, Precision, and Calibration
Do you believe it is more important to use an instrument that records precise measurements or one that records accurate measurements? If you weigh yourself three times on a scale and the result is different each time, but within a few pounds of your actual weight, the scale is accurate. Even so, it may be preferable to use a precise scale, even if it is not accurate: all of its measurements would be very close to one another and off from the true value by approximately the same amount. This is a typical problem with scales, which frequently have a tare button to reset them to zero.
While scales and balances allow taring or adjustment to provide accurate and precise measurements, many instruments require calibration. A thermometer is a good example: thermometers are frequently accurate within a particular range and give less accurate (though not necessarily less precise) readings outside that range. To calibrate an instrument, record the difference between its readings and known or accepted values, and maintain a calibration log to help ensure accurate readings. Many kinds of equipment require regular calibration to guarantee that measurements are both accurate and precise.
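As a hedged sketch of how such a calibration log might be used (the reference points and readings below are hypothetical), one could record the thermometer’s readings at known temperatures and correct later readings by interpolating between the logged points.

```python
# Minimal sketch: using a calibration log (instrument reading vs. known value)
# to correct later thermometer readings by linear interpolation.

# Hypothetical calibration log: (thermometer reading, known reference temperature) in Celsius.
calibration_log = [(0.4, 0.0), (25.9, 25.0), (50.7, 50.0), (99.2, 100.0)]

def correct(reading: float) -> float:
    """Map a raw reading to a corrected temperature using the calibration log."""
    points = sorted(calibration_log)
    # Clamp readings outside the calibrated range rather than extrapolating.
    if reading <= points[0][0]:
        return points[0][1]
    if reading >= points[-1][0]:
        return points[-1][1]
    for (r0, t0), (r1, t1) in zip(points, points[1:]):
        if r0 <= reading <= r1:
            frac = (reading - r0) / (r1 - r0)
            return t0 + frac * (t1 - t0)

print(correct(26.0))   # a raw reading near 25.9 maps to roughly 25 degrees Celsius
```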
Conclusion
The closer a measurement is to the correct value, the better its accuracy. The estimated amount by which a measured value may deviate from the true value is known as the uncertainty of the measurement.
The precision of measured values refers to the degree to which the results of repeated measurements agree with one another. The finer the measurement increments of a tool, the greater its precision; a more precise tool measures in smaller increments. A tool’s precision can be expressed through the number of significant figures it supports. When multiplying or dividing measured quantities, the result should be reported with no more significant figures than the least precise measurement. When adding or subtracting measured quantities, the result should be reported with no more decimal places than the measurement with the fewest decimal places.
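A short worked example, using made-up measured values, shows how these rounding rules play out in practice.

```python
# Minimal worked example (made-up values) of the significant-figure rules above.

def round_sig(x: float, sig: int) -> float:
    """Round x to the given number of significant figures."""
    return float(f"{x:.{sig}g}")

# Multiplication/division: keep as many significant figures as the least precise factor.
# 1.2 cm (2 sig figs) x 3.45 cm (3 sig figs) -> report the area with 2 sig figs.
area = 1.2 * 3.45              # about 4.14
print(round_sig(area, 2))      # 4.1

# Addition/subtraction: keep as many decimal places as the least precise term.
# 12.11 g (2 decimal places) + 0.3 g (1 decimal place) -> report with 1 decimal place.
total = 12.11 + 0.3            # about 12.41
print(round(total, 1))         # 12.4
```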