The data acquisition/analysis program you use is Data Studio. Most commands you use are described in the
manual along with other procedures. But here are some important things to keep in mind for all labs in which
you use Data Studio. Many “problems” are not really problems, but simply failures to understand what the
computer is displaying, and how it can be rearranged to change data from unintelligible to crystal clear.
1. A most important skill to hone is setting appropriate vertical scaling. Most of the time-varying
quantities you’ll study are plotted along the vertical axes in Data Studio, and if the vertical scale on
your plot is too large, good data will look bad. You might, for instance, have very nice data in the
range of -3 to +5, but if the scale on your plot is ±10,000, it will all look like zero. There are at least
two ways to change the scale to better fit your data. The quick way is the scale-to-fit button
near the upper left of the plot window. Clicking first on the plot of interest (if there are multiple plots
in the window), then clicking the scale-to-fit button will cause the bottom limit of the vertical scale to
be the bottom limit of your data, and similarly for the top limit—your data should thus fill the plot
area. The second, “manual” way is a combination of (a) dragging the horizontal axis vertically till
it’s about where you want it and (b) holding the mouse down on the numerical values along the
vertical axis and dragging them up or down, shrinking or expanding that axis, till the important parts
of your data more or less fill the screen. While the scale-to-fit button is faster, sometimes it’s better to
adjust the scale manually. If your data contains any spurious, wildly large data points, the scale-to-fit
procedure will make the scale wildly large. It would be better to manually adjust limits that include all
the “real” data points and exclude the wild ones.
2. A problem more difficult to pin down is that of focusing on the “relevant” data. In many cases you
have sensors running over a span of time, but not all the data is relevant—there may be “junk” at the
beginning or end, and even sometimes in the middle. To extract the important information you need
to keep in mind that not all the data points are meaningful. In some cases you will have to select a
certain region, perhaps to find a slope or an average value. Obviously if you select “junk” along
with real data, your results will suffer. It isn’t possible to provide general guidelines here covering all
the various plots you’ll make in Physics 9 Lab, but it will help if you simply keep this point in mind.
3. Sometimes data is just bad—recording may have been started too early or too late, the sensor may
have lost sight of the object it was watching, etc. In these cases, simply repeat your “data
run”. The new data is recorded and the old is stored hidden, though retrievable. Sometimes it is good
to do this just to make sure you’re getting sensible, repeatable data.
4. Finally, here are two familiar bugs and workarounds:
Bug: Plot goes blank, particularly among multiple over-and-under plots.
Workaround: Double-click the blank plot and hit OK—repeat if necessary.
Bug: Dreaded “Could not initialize a temporary file for data” dialog.
Workaround: The program crashed—use Force Quit from the (upper-left) Apple menu.
As we carry out experiments in Physics 9 Lab, testing theoretical predictions, we will often want to compare
things quantitatively. The procedure is logical and very simple. When one quantity is known or accepted to
be correct, which we’ll call qcorrect, and we wish to compare an experimentally obtained value, qexp, we
calculate a percent error, as follows:
percent error = (qexp − qcorrect) / qcorrect × 100%
Often we will wish to compare two quantities neither of which is known to be correct. In this case we
calculate a percent difference. Calling the two quantities q1 and q2, the logical way to do this is by dividing
the difference by the average:
percent difference = (q1 − q2) / [ (q1 + q2) / 2 ] × 100%
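The two comparison formulas can be sketched in code. This is a minimal illustration; the function names are our own.

```python
# Percent error and percent difference, as defined above.
# (Function names are our own, for illustration.)

def percent_error(q_exp, q_correct):
    """Compare an experimental value against a known/accepted value."""
    return (q_exp - q_correct) / q_correct * 100

def percent_difference(q1, q2):
    """Compare two values, neither known to be correct:
    the difference divided by the average."""
    return (q1 - q2) / ((q1 + q2) / 2) * 100

print(round(percent_error(4.90, 5.00), 2))       # -2.0
print(round(percent_difference(4.90, 5.00), 2))  # -2.02
```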
Results of experimental observations can often be confusing and/or misleading if care is not taken with
what might seem a rather trivial matter, but is actually quite important: significant figures.
In short, the number of significant figures in a value is the number of digits it has. Some examples are
given in Table 1. Note that zeros may or may not count as significant figures. When preceding nonzero
digits, as in the second and third examples, zeros do not count, serving merely as place holders fixing the
proper decimal place (power of ten). Such zeros are known as “leading zeros.” When following or
surrounded by nonzero digits, zeros do count as significant, as in the third and fifth examples.
Table 1: Numerical Value and Number of Significant Figures
Ambiguity may arise when reporting some values. For instance, suppose the mass of an object is reported
as 24,000kg. Since the zeros follow nonzero digits, we would by the preceding arguments say that the
number of significant figures is 5. This would imply that the mass is indeed known to be between 23,999kg
and 24,001kg. However, if 5-digit precision has not in fact been obtained by the mass measurement—if it is
instead known only to be between 23,900kg and 24,100kg, i.e., 3-digit precision—then writing the value as
24,000kg is misleading. This is where scientific notation comes in. If the mass is indeed twenty-four thousand
kilograms, known to three-digit precision only, it is correct to write it as 2.40×10³ kg. In scientific notation,
the first nonzero digit is to the left of the decimal, all others to the right, and a factor of 10ⁿ is appended to account for
the proper power of ten; there are no leading zeros, so all digits are significant figures. Clearly displaying
their numbers of significant figures, the earlier examples written in scientific notation are:
3.125×10⁻³ ms
1.500×10⁻¹ kg
6.02×10² g
How do we report the sum of 12.3m and 0.58m? Since the first number is not known to the hundredths of a
meter, the answer certainly cannot be known to that decimal place, so the result of the addition should be
rounded to the tenths of a meter place. This logic gives us the following rule for adding numerical values,
correct also for subtraction:
When adding or subtracting numerical values, the sum or difference should be
rounded to the smallest (most precise) decimal place common to all factors.
For example, in the following addition the 5.29kg is known only to the hundredths place, so we round
the final answer to that place.
5.29kg + 7.617kg = 12.907kg → 12.91kg
Note that the result has four significant figures, whereas 5.29kg has only three. An even better example of
this sort of thing is 0.05s + 0.07s = 0.12s, where the result has more significant figures than either factor. An
increase in the number of significant figures can indeed happen when adding factors.
Subtraction, on the other hand, can lead to a decrease in the number of significant figures. In the
following subtraction we must round to the tenths place, the smallest decimal place common to both factors.
7.63m - 6.9m = 0.73m → 0.7m
The final answer has only one(!) significant figure. If our goal were to keep track of small changes in this
quantity (i.e., the difference in distances), the next closest values would be 0.6m and 0.8m, differing by over
ten percent from 0.7m! This certainly wouldn’t be keeping track of “small” changes! In science we like to
avoid problems like this, so when we know that a difference will have to be taken, we try to make sure that we
know the factors to as many digits of precision as we possibly can.
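The decimal-place rule for sums and differences can be sketched in code. The helper name is our own, and for simplicity the coarsest decimal place among the inputs is passed in explicitly.

```python
# Sketch of the addition/subtraction rule: compute, then round the result
# to the least precise decimal place among the inputs.
# (Helper name is our own; `decimals` is the fewest decimal places
# appearing in any input value.)

def add_with_sig_figs(values, decimals):
    return round(sum(values), decimals)

print(add_with_sig_figs([5.29, 7.617], 2))  # 12.91
print(add_with_sig_figs([7.63, -6.9], 1))   # 0.7
```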
The significant-figures rule for addition and subtraction cannot be applied to multiplication or division.
In dividing a distance of 125m by a time of 0.3125s we cannot say that because the distance is known to only
the unit-meters place, knowledge of the time more precisely than the unit-seconds place is useless. (We can
add 125 and 0.3125, rounding the 0.3125 to zero, but we certainly couldn’t divide after such rounding!)
The rule for multiplication or division rests on the fact that performing a calculation shouldn’t result in
an answer more precisely known than the factors that go into it. Roughly speaking, a value known to 2
significant figures has a smallest possible change of about a percent (from 98 to 99 is a 1.02% increase); a
value known to 3 significant figures has a smallest change of about a tenth of a percent (from 998 to 999 is a
0.1002% increase); the smallest change for 4 digits is about one hundredth of a percent, etc. Accordingly, the
result of a calculation should have no more significant figures than the factor with the smallest number of
significant figures. (The occasional increase in significant figures when adding, discussed earlier, would seem
to be an exception, but in the end doesn’t significantly increase precision.) The rule is thus as follows:
When multiplying or dividing numerical values, the product or quotient should be rounded off to
the same number of significant figures as the factor with the smallest number of significant figures.
According to this rule, 125m ÷ 0.3125s = 400m/s = 4.00×10² m/s. The answer has three significant figures
because, though the time is known to four, the distance is known to only three significant figures.
Note: Often we multiply a measured value by a strict numerical constant, such as the integer 2. In these
cases, the numerical constant is assumed to have effectively an infinite number of significant figures, so the
answer has the same number of significant figures as the measured value.
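Rounding to a fixed number of significant figures can be sketched as follows; the helper name is our own.

```python
# Sketch: round a value to n significant figures by shifting the rounding
# position according to the value's order of magnitude. (Helper is our own.)
import math

def round_sig_figs(x, n):
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, n - 1 - exponent)

print(round_sig_figs(125 / 0.3125, 3))  # 400.0
```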
Having introduced the specific rules, it is still important to note the following overall rule:
Rounding off factors in a complex expression before actually carrying out the calculation
can lead to gross “round-off error,” so round off only at the end of the calculation.
As a matter of practicality, you should also be aware that following with unflinching rigor the rules for
significant figures would sometimes take more time than we can afford to spend on it in Physics 9 Lab, and
would thus detract from more important things, like understanding the physics! Accordingly, we will on
many occasions take a more relaxed approach. Still, we don’t wish to truncate routinely to uselessly crude
precision, as rounding to one significant figure would do; nor do we wish to report values with all eight to
twelve digits given by today’s calculators, which would imply ridiculously high precision. In these cases, as a
rule of thumb, report values to three significant figures. If you have any questions about this policy and how
it applies in a given week’s lab, check with your lab instructor.
To give readings of high precision, several of the instruments we use in Physics 9 Lab employ what is known
as a vernier scale. Each instrument has a “primary scale” marked off in meters, centimeters, etc., as on an
ordinary ruler, and a “vernier scale” with a zero-line that falls somewhere along this primary scale. All but
the final significant figure are read according to where this zero-line falls along the primary scale. The final,
high-precision, significant figure is obtained in an ingenious way: The two scales have slightly different
spacings, so that for every nine rulings on the primary scale, there are ten on the vernier scale. The final
significant figure is the number (0-9) of the line on the vernier scale that best aligns with any line on the
primary scale. A picture is worth a thousand words:
Science rests on the ability to test predictions and have confidence in the results. Often the test is an
experiment yielding a numerical value, but if that value is expected to be 5.00 and the experimentally
measured value is 4.90, what does the result mean? It may mean success or it may mean failure, and to
understand the difference we need to understand something about the quantitative limits of measurement.
Error analysis is a deep and often complicated pursuit. In Physics 9 Lab we’ll learn just the basics,
centering on the concept of uncertainty.
Imagine that we wish to measure a distance x along the ground from Point A to Point B. Obviously we need a
measuring instrument. Suppose it is a measuring tape with a labeled mark every meter and otherwise blank
(not likely to be found at the hardware store). It is stretched out from A to B and the mark nearest Point B is
labeled 137m.
Figure 1
Clearly the distance is not exactly 137m, but it is closer to 137m than to 138m or 136m. All possibilities in
this range are accounted for by reporting the measurement as x = 137m±0.5m—meaning somewhere
between 136.5m and 137.5m. The 0.5m part is known as an absolute uncertainty (as distinct from percent
uncertainty, which we discuss soon) and is denoted by a σ, i.e., σx = 0.5m.
By the same logic, the measurement shown below uses a ruler marked off in divisions of tenths of a
centimeter, 0.1cm, so the value might be anywhere from 0.05cm—half of a tenth of a centimeter—less than
to 0.05cm greater than the value indicated; thus, the reading is 1.6cm±0.05cm.
Figure 2
This common-sense way of reporting an uncertainty due to the inherent limitations of the measuring
instrument serves as a default rule—when statistical uncertainty, discussed later, can be ignored—for all
individual measurements. Using an x as a generic symbol for all quantities we might measure—distance, time,
mass, etc.—this rule is as follows:
Instrumental Uncertainty
When an individual measurement is made of a quantity x, the absolute uncertainty
σx,instrumental is half of the smallest division of the measuring instrument.
As an alternative to the absolute uncertainty, any uncertainty—whether instrumental or statistical—can be
expressed as percent uncertainty, for which we use the symbol e. It is defined as the percentage that the
absolute uncertainty represents of the actual measured value. Referring to the Point A-to-Point B example,
the absolute uncertainty σ x = 0.5m represents (0.5m / 137m ) × 100% = 0.36% of 137m, and so is
equivalent to a percent uncertainty e x = 0.36%. Thus, 137m±0.5m and 137m±0.36% say the same thing.
Clearly, if we can find percent uncertainty from absolute uncertainty, we can find absolute uncertainty from
percent uncertainty: An e x of 0.36% means 0.0036 times a quantity, and 0.0036 times 137m is 0.5m. Again
using x as a generic symbol for any measured quantity, the relationships between absolute and percent
uncertainty are as follows:
Relating Absolute and Percent Uncertainties:
ex = (σx / x) × 100%        σx = x × ex / 100%        (1)
Note: Uncertainties need not themselves be very precise to convey their important information, so it is
common in final reporting (though not usually before that) to round them to one significant figure, or two if
the first digit is a 1.
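The two conversions above can be sketched in code; the function names are our own.

```python
# Sketch of the conversions between absolute uncertainty (sigma)
# and percent uncertainty (e). (Function names are our own.)

def percent_from_absolute(sigma_x, x):
    return sigma_x / abs(x) * 100

def absolute_from_percent(e_x, x):
    return abs(x) * e_x / 100

print(round(percent_from_absolute(0.5, 137), 2))   # 0.36
print(round(absolute_from_percent(0.36, 137), 1))  # 0.5
```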
In Physics 9 Lab, time constraints will often restrict us to one measurement of a quantity, but in science we
generally try to avoid making statements based on a single measurement—
particularly if repeated measurements vary! Suppose we repeatedly measure the
distance traveled by a boulder hurled from a catapult. We obtain the distances given
in Table 1 and depicted graphically in Figure 3. Clearly there is some uncertainty in
the value we are attempting to measure. How do we assign that value and how do we
define its uncertainty?
Figure 3
The value we assign is the mean, x̄, defined as the total of the individual values
divided by the number of values. Denoting the individual values by xn and the
number of values by N, this is:
Mean:  x̄ = ( Σ_{n=1}^{N} xn ) / N
Table 1 (columns: Trial, Distance x)
The data in Table 1 gives a total Σ_{n=1}^{N} xn of 1888.0m, and dividing by N = 20
yields a mean of 94.4m.
To define an uncertainty we might be tempted to
use the largest deviations from the mean: 101.7m − 94.4m = 7.3m and 83.2m − 94.4m = −11.2m.
However, were we to conduct many additional trials,
we would probably measure even larger deviations, if
only rarely, which would seem to suggest that we
become less certain of the answer as we conduct
more measurements. Seeking a more sensible
conclusion, scientists have agreed upon a logical
definition of statistical uncertainty based on the fact
that quantities have a decided tendency to fluctuate
in a particular way, known as a normal distribution,
in which many measured values are near the mean,
they fluctuate symmetrically on either side of the
mean, and the further a value is from the mean the
less likely it is to be measured. Were we to carry out
a great many trials with our catapult, we would
expect Figure 3 to approach the smooth normal
distribution shown in Figure 4. The characteristic
shape of this distribution is also known by the names
“bell-shaped curve” and “gaussian.”
Figure 4
It can be shown that if values fluctuate according to a normal
distribution, while there is always a chance that a value very far from the mean might be measured, the large
majority—68%—will fall within a “distance” from the mean known as the standard deviation, defined as
Standard Deviation:
sx = √[ Σ_{n=1}^{N} (xn − x̄)² / (N − 1) ]
In this quantity, each deviation from the mean, xn − x̄, is squared, then squares of deviations are summed,
then the total is divided by N-1. Adding squares of deviations ensures that every value that deviates from the
mean, whether above or below, counts as a positive magnitude of deviation, and dividing by N then gives
something like an average. (It isn’t worthwhile to go into why it is N-1 rather than just N, except to say that N
is usually large, in which case it makes little difference.) Taking a final square root ensures that the standard
deviation has the same units as x itself (i.e., not x²). Figure 5 gives some idea of how big the standard
deviation is in a normal distribution: 68.3% of the values fall within one standard deviation of the mean,
95.4% within two, etc.
Figure 5
Now, standard deviation is still not the uncertainty we wish to define. If we carry out only the 20 trials of
Table 1, giving the jagged Figure 3, we might not feel completely confident that its mean of 94.4m is correct.
However, if we were to carry out 200 trials, with fluctuations of about the same extent on either side of the
mean—the same standard deviation—but more like the smooth distribution of Figure 4 and with essentially
the same mean, we should feel much more confident that that mean is correct. Some rather involved statistical
arguments show that—ignoring instrumental uncertainty—the uncertainty in the mean becomes progressively
smaller with an increasing number of trials according to the following rule.
Statistical Uncertainty
When a measurement is repeated N times, the absolute uncertainty
σx,statistical in the mean value is given by:  σx,statistical = sx / √N
Again, as N increases, the uncertainty in the resulting mean decreases.
The data in Table 1 contains 20 values, which is usually judged to be about the minimum number giving
statistically significant results. The sum of squares of deviations from the mean, Σ_{n=1}^{N} (xn − 94.4m)², is
371.0m². Dividing by N − 1 = 19 and taking a square root gives a standard deviation sx of 4.419m. Finally,
dividing by √20 ≅ 4.47 yields a statistical uncertainty of 0.988m, or, rounding to tenths of a meter, 1.0m. Thus, we
would say that the distance traveled by the boulder is 94.4m±1.0m. Note that, using equations (1) above, the
percent uncertainty is (1.0 m / 94.4 m ) × 100% ≅ 1.1% .
In most cases in Physics 9 Lab, the computer will calculate the standard deviation automatically, so to find
the uncertainty we need only divide by √N.
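The mean/standard-deviation/statistical-uncertainty recipe can be sketched with Python's statistics module. The list of distances below is a made-up placeholder, not the actual Table 1 data.

```python
# Sketch of the statistical-uncertainty recipe. The distances below are
# made-up placeholders, NOT the actual Table 1 catapult data.
import math
import statistics

distances = [94.4, 95.1, 93.8, 101.7, 83.2, 96.0]  # placeholder values (m)

mean = statistics.mean(distances)
s_x = statistics.stdev(distances)  # sample std. deviation (divides by N-1)
sigma_statistical = s_x / math.sqrt(len(distances))

print(round(mean, 1), round(s_x, 1), round(sigma_statistical, 1))  # 94.0 6.0 2.5
```

Note that `statistics.stdev` already uses the N − 1 divisor discussed above; `statistics.pstdev` would divide by N instead.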
Note that each of the two previous uncertainties—instrumental and statistical—was defined with the other
ignored. We now address the obvious question: What happens if repeated measurements are made and each
one has instrumental uncertainty? Omitting the justifications, the logical way to do this is as follows:
Uncertainty—In General
σx = √[ (σx,instrumental)² + (σx,statistical)² ]        (4)
How is this logical? Suppose we repeated the Point A-to-Point B measurement many times. It’s quite
possible that, with the measuring tape marked off in divisions no smaller than a meter, we would obtain the
same 137m reading every time. If this were the case, all values would necessarily equal the mean value
x̄ = 137m; the standard deviation sx would be zero, because all the deviations xn − 137m would be zero; and
σx,statistical would thus be zero. In such a case we would expect σx to be simply σx,instrumental, all uncertainty
being due to the crudeness of the measuring instrument, and equation (4) agrees. Consider on the other hand
the catapult example. Suppose that each measurement was made—as the data in Table 1 suggests—with an
instrument marked in divisions of tenths of a meter, 0.1m. The instrumental uncertainty would be 0.05m. On
the face of it, this seems rather negligible compared to the statistical uncertainty of 1.0m, and equation (4)
agrees, yielding √[ (0.05m)² + (1.0m)² ] = 1.00125m ≅ 1.0m. The uncertainty is due almost entirely to the
statistical fluctuations. In cases where neither instrumental nor statistical uncertainty is negligible, equation (4)
gives an overall uncertainty somewhere between the larger of the two and the sum of the two, and this also
makes sense. The overall uncertainty surely shouldn’t be smaller than either one individually, and, based on
the improbability that both uncertainties would err in the “same direction,” it should not be as large as the
sum. (We’ll talk more about uncertainties being “in the same direction” when we discuss combining uncertainties below.)
• When recording only one measured value of a quantity, as with a scale or a meterstick or a timer, we will
usually assume that if the measurement were repeated the result would never vary. Thus, ignore the possibility
of statistical fluctuation and use the rule for instrumental uncertainty σx,instrumental given under heading A above.
• When deliberately obtaining many values of a quantity, unless specifically instructed to consider
instrumental uncertainty, use the rule for statistical uncertainty σ x , statistical given under heading B above.
• When deliberately obtaining many values of a quantity and also instructed about the inherent
instrumental uncertainty, use the general rule for uncertainty given in equation (4).
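Equation (4) is a straightforward quadrature sum; a minimal sketch (the function name is our own):

```python
# Sketch of equation (4): instrumental and statistical uncertainties
# combined in quadrature.
import math

def overall_uncertainty(sigma_instrumental, sigma_statistical):
    return math.hypot(sigma_instrumental, sigma_statistical)

# Tape-measure case: no statistical spread, so instrumental dominates.
print(overall_uncertainty(0.5, 0.0))             # 0.5
# Catapult case: statistical fluctuation dominates.
print(round(overall_uncertainty(0.05, 1.0), 2))  # 1.0
```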
Almost never in Physics 9 Lab will our analysis end with measurements of only one of the fundamental
quantities most commonly measured: distance, time, and mass. Rather, we combine these quantities, as when
we divide a distance by a time to obtain a speed. But how do we define the uncertainty in our result when
there are uncertainties in each of the two contributing measurements and the result is given by a calculation
involving the two? Usually, we do this in quadrature, meaning adding squares (of uncertainties) then taking a
square root. Still, there are special rules, depending on whether the calculation is addition, multiplication, etc.
Suppose one experiment measures the distance from Point A to Point B and another the distance, in the same
direction, from Point B to Point C.
The results are:
Distance from Point A to Point B: 137m±0.5m
Distance from Point B to Point C: 5.8m±0.1m
The total distance is 143m, but what value do we assign to the uncertainty in the total distance? Before we
answer this, let’s find the two percent uncertainties. They are, respectively, (0.5m / 137m ) × 100% ≅ 0.4%
and (0.1m / 5.8m ) × 100% ≅ 1.7% . The A-B measurement has an absolute uncertainty five times larger than
that of the B-C measurement (0.5m vs. 0.1m), while in percent uncertainty the B-C measurement is larger
(1.7% vs. 0.4%) by about a factor of 4! The question is: Which is more important—the absolute, or the
percent? Suppose we guess that it is the percent uncertainty, so we say that the total distance has a percent
uncertainty of at least 1.7%. This corresponds to an absolute uncertainty in the total 143m of
0.017 × 143m ≅ 2 m . But the total should not be off by this much! The first measurement is within 0.5m and
the second 0.1m, so it is logical that at worst the total would be off by no more than about 0.6m. This is a
good counterexample to the use of percent uncertainty when a sum is calculated. The correct way is through
the absolute, and the rule, which applies also to subtraction, is as follows:
When adding or subtracting quantities x1 and x2, uncertainties are combined
by adding in quadrature their individual absolute uncertainties.
σcombined = √( σx1² + σx2² )
Adding things in quadrature always yields a result between the larger of the two and the sum of the two.
For the Point A-to-Point C example we’ve been considering it is √[ (0.5m)² + (0.1m)² ] = 0.51m ≅ 0.5m. It
might seem that it makes more sense to simply add the two uncertainties, but this would be assuming that
both “mistakes” are in the “same direction,” either too large or too small, when it is entirely possible that
one might err on the short side while the other errs on the long side. In the final analysis, if the two
measurements are independent, i.e., not necessarily in the “same direction,” the odds are that the result is
most likely in the range following from the quadrature rule given above. For example, if we had equal
absolute uncertainties, the overall uncertainty would be √2 times (not 2 times) either. On the other hand, if
one of the two absolute uncertainties is significantly larger than the other, as in our Point A-to-Point C
example, the result essentially equals the larger. We will often have occasion to use this fact.
Note that subtracting quantities with comparatively large absolute uncertainties can lead to results almost
worthless. Suppose the two quantities are times when a projectile passes two markers: t1 = 8.0s±0.05s and t2 =
8.1s±0.05s. To find a time interval we subtract the initial time from the final, yielding 0.1s, but what is the
uncertainty? The percent uncertainties are both about 0.6%, and would seem to suggest a rather small
uncertainty in the result, but we don’t combine percent uncertainties when subtracting! Adding the absolute
uncertainties in quadrature gives √[ (0.05s)² + (0.05s)² ] = √2 × 0.05s ≅ 0.07s. This is a percent uncertainty of
(0.07s / 0.1s) × 100% = 70% of the time interval 0.1s itself! We might have crudely argued that the earlier
time could be as early as 7.95s and the later as late as 8.15s, giving an interval of 0.2s, or that the earlier
could be as late as 8.05s and the final as early as 8.05s, giving an interval of zero! The correct uncertainty
isn’t that big, but an uncertainty of seventy percent is still huge!
Suppose we wish to calculate a speed by dividing a distance x of 4.00m±0.005m by a time t of 2.5s±0.05s.
Clearly there is some uncertainty in the resulting speed, 4.00m/2.5s = 1.6m/s, but how do we find it? It is not
through the absolute uncertainties, because we cannot add (in quadrature or otherwise) things with different
units. The rule when multiplying and dividing is as follows:
When multiplying or dividing quantities x and y, uncertainties are combined
by adding in quadrature their individual percent uncertainties.
ecombined = √( ex² + ey² )
Note: Multiplying or dividing a measured quantity by a numerical value whose uncertainty is assumed or
defined to be zero is nevertheless multiplication, and so involves the percent uncertainty. The percent
uncertainty is unchanged, which means that the absolute uncertainty in the final result does change. For
instance, multiplying a mass of 0.100kg±0.0005kg, or 0.100kg±0.5%, by 3 would preserve the 0.5% percent
uncertainty, which would mean an absolute uncertainty in the 0.300kg total mass of
0.300 kg × 0.005 = 0.0015kg , three times the absolute uncertainty in the individual mass. While we might
argue that tripling the mass is simply adding it three times, and so should require quadrature addition of the
absolute uncertainties ( √[ (0.0005kg)² + (0.0005kg)² + (0.0005kg)² ] = 0.0005kg × √3 ≅ 0.0009kg ), we
would in effect be adding measurements that are definitely not independent (to err on the large side for one
is to err on the large side for all) and quadrature addition does not apply in such cases—the absolute
uncertainties simply add.
Suppose we wish to find the area of a square whose side length x we have measured to be 1.25cm±0.005cm.
The area x² is (1.25cm)² = 1.56cm², but what of the uncertainty? It might be argued that squaring is
multiplication, so we should find the two (equal) percent uncertainties, (0.005cm / 1.25cm ) × 100% = 0.4% ,
then add these in quadrature, giving 0.4% × √2 ≅ 0.6%. However, squaring is another example of two
uncertainties that are definitely not independent. They both always err in the “same direction”; if one length
errs on the large side, then both sides do, since the same value is being used. This being the case, the percent
uncertainty is larger: two (not √2) times that of the individual measurement. Omitting the proof, the general
result when raising a quantity to the power b is as follows:
When raising quantity x to power b, the percent uncertainty is multiplied by the factor b.
exᵇ = b × ex
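The three propagation rules can be collected into small helpers (the names are our own):

```python
# Sketch of the three propagation rules (helper names are our own):
#   add/subtract    -> combine ABSOLUTE uncertainties in quadrature
#   multiply/divide -> combine PERCENT uncertainties in quadrature
#   power b         -> multiply the percent uncertainty by b
import math

def sigma_sum_or_difference(*sigmas):
    return math.sqrt(sum(s**2 for s in sigmas))

def e_product_or_quotient(*percents):
    return math.sqrt(sum(e**2 for e in percents))

def e_power(e_x, b):
    return abs(b) * e_x

print(round(sigma_sum_or_difference(0.5, 0.1), 2))  # 0.51 (A-to-C example)
print(e_power(0.4, 2))                              # 0.8 (squaring doubles e)
```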
Example 1. Suppose we wish to calculate a momentum, p, which is a mass times a speed, p = m v. The mass
m of the object is measured to be 0.247kg on a scale whose smallest division is one gram. The speed is found
from two measurements of distance from an origin, x1 = 32.4cm and x2 = 91.8cm, both via a measuring tape
marked off in millimeters, and a measurement of the time interval, ∆t = 2.08s , correct to within 0.005s.
From the data gathered, the momentum is calculated as follows:
p = mv = m (x2 − x1) / ∆t
Find the momentum and its absolute uncertainty.
The momentum is
p = 0.247kg × (91.8cm − 32.4cm) / 2.08s = 7.05 kg·cm/s
For the uncertainty, it is easiest to start from the “inside” and work outward, so we start with the
subtraction. Subtraction requires that we combine absolute uncertainties in quadrature. Each of the x values
has an absolute uncertainty of half a millimeter, i.e., half the smallest division: σ x1 = σ x2 = 0.05cm . Thus, the
absolute uncertainty in x2 - x1 is:
σ∆x = √[ (0.05cm)² + (0.05cm)² ] = 0.071cm
We will be dividing ∆x by ∆t, requiring use of percent uncertainties, so we need to convert σ∆x to a percent
uncertainty. The actual value of x2 - x1 is 59.4cm, so
e∆x = (0.071cm / 59.4cm) × 100% = 0.12%
Now, the remainder of the calculation involves only multiplication and division. We could combine in
quadrature the percent uncertainties in ∆t and ∆x, then combine in quadrature this percent uncertainty with
that for m, but it is equivalent (and faster!) to just combine in quadrature all three percent uncertainties at
once. But first we need the remaining percent uncertainties. For the mass, the absolute uncertainty, half the
smallest division, is 0.0005kg, and for the time interval it is given to be 0.005s.
em = (0.0005kg / 0.247kg) × 100% = 0.20%
e∆t = (0.005s / 2.08s) × 100% = 0.24%
Now, combining in quadrature the percent uncertainties for m, ∆x, and ∆t,
ep = √[ (0.12%)² + (0.20%)² + (0.24%)² ] = 0.33%
Finally, from the percent uncertainty we can find the absolute uncertainty in p.
σp = 7.05 kg·cm/s × 0.0033 = 0.02 kg·cm/s
Thus, we report the momentum as:  p = 7.05 ± 0.02 kg·cm/s
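Example 1 can be reproduced end to end in code (variable names are our own). Tiny differences from the worked values come from rounding the intermediate percents.

```python
# Reproducing Example 1 numerically (variable names are our own).
import math

m, sigma_m = 0.247, 0.0005           # kg; half of the 1 g division
x1, x2, sigma_x = 32.4, 91.8, 0.05   # cm; half of the 1 mm division
dt, sigma_dt = 2.08, 0.005           # s

dx = x2 - x1                              # 59.4 cm
sigma_dx = math.hypot(sigma_x, sigma_x)   # subtraction: absolute, in quadrature

# Percent uncertainties for the multiplication/division step.
e_dx = sigma_dx / dx * 100
e_m = sigma_m / m * 100
e_dt = sigma_dt / dt * 100

p = m * dx / dt                           # kg*cm/s
e_p = math.sqrt(e_dx**2 + e_m**2 + e_dt**2)
sigma_p = p * e_p / 100

print(round(p, 2), round(sigma_p, 2))   # 7.05 0.02
```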
Example 2. A projectile lands a certain distance from the origin, given by
r = √( x² + y² )
where x and y are the coordinates of the landing point measured along two perpendicular coordinate axes.
The coordinates are measured to be:
x = 8.6cm ± 0.05cm
y = 2.7cm ± 0.1cm
Find r and its absolute uncertainty.
Plugging in to find r,
r = √[ (8.6cm)² + (2.7cm)² ] = 9.0cm
For the uncertainty, each factor is squared, so we must double the percent uncertainty. The percent
uncertainties for x and y alone are:
ex = (0.05cm / 8.6cm) × 100% = 0.58%
ey = (0.1cm / 2.7cm) × 100% = 3.7%
So the percent uncertainties in their squares are
ex² = 2 × 0.58% = 1.2%
ey² = 2 × 3.7% = 7.4%
Now, x2 and y2 are added, so we need to combine in quadrature their absolute uncertainties.
σx² = (8.6cm)² × 0.012 = 0.89cm²
σy² = (2.7cm)² × 0.074 = 0.54cm²
Thus, the absolute uncertainty in x² + y² is
σx²+y² = √[ (0.89cm²)² + (0.54cm²)² ] = 1.0cm²
Now, to take a square root—to raise to the one-half power—we multiply the percent uncertainty by one-half.
The value of x² + y² is (8.6cm)² + (2.7cm)² ≈ 81cm², so its percent uncertainty is
ex²+y² = (1.0cm² / 81cm²) × 100% = 1.2%
Accordingly, the percent uncertainty in r is
er = ½ × 1.2% = 0.6%
Finally, the absolute uncertainty in r is
σ r = 9.0cm × 0.006 = 0.05cm
Note that the answer is essentially the same as the absolute uncertainty in x. While the example serves to show
how the rules are carried out, it also shows that in many circumstances, with a bit of thought, we can guess the
result pretty well. The x value is nearly three times the y value, so it predominates in calculating r, and
correspondingly the overall uncertainty is dominated by its uncertainty.
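Example 2 can likewise be reproduced in code (variable names are our own):

```python
# Reproducing Example 2 numerically (variable names are our own).
import math

x, sigma_x = 8.6, 0.05   # cm
y, sigma_y = 2.7, 0.1    # cm

r = math.hypot(x, y)     # sqrt(x^2 + y^2)

# Power rule: squaring doubles each percent uncertainty.
e_x2 = 2 * (sigma_x / x * 100)
e_y2 = 2 * (sigma_y / y * 100)

# Addition: combine the ABSOLUTE uncertainties of x^2 and y^2 in quadrature.
sigma_x2 = x**2 * e_x2 / 100
sigma_y2 = y**2 * e_y2 / 100
sigma_sum = math.hypot(sigma_x2, sigma_y2)

# Square root (power 1/2) halves the percent uncertainty of x^2 + y^2.
e_r = 0.5 * (sigma_sum / (x**2 + y**2) * 100)
sigma_r = r * e_r / 100

print(round(r, 1), round(sigma_r, 3))   # 9.0 0.056
```

Carrying full precision through gives σr ≈ 0.056cm; the worked example’s 0.05cm reflects rounding the intermediate percents, and either way the result is dominated by the uncertainty in x.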