Physics Asked by Devashish Belwal on August 3, 2020
Given question:
A student uses a simple pendulum of length 1 m and commits an error of Δl = 1 mm while determining g (the acceleration due to gravity). He uses a stopwatch with a least count of 1 s and records 40 seconds for 20 oscillations. For this observation, which of the following statement(s) is (are) true?
(1) The error ΔT in measuring the time period T is 0.05 s
(2) The error ΔT in measuring the time period T is 1 s
(3) The percentage error in the determination of g is 5.1%
(4) The percentage error in the determination of g is 2.6%
Given solution:
Answer (1,3)
ΔT/T = 1/40
And
T = 2s (why?)
Therefore ΔT=0.05s
Δg/g ×100 = Δl/l ×100 + 2× ΔT/T ×100
Δg/g ×100 = ((10)^(-3)/1)×100 + 2×(1/40)×100
Δg/g ×100 = 5.1%
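For reference, the error formula used in the solution follows from the standard pendulum relation: since $T = 2pisqrt{l/g}$ (written here with the backslashes restored below), we have $g = 4pi^2 l/T^2$, and adding the relative errors in the worst case gives

$$
g = \frac{4\pi^2 l}{T^2}
\quad\Rightarrow\quad
\frac{\Delta g}{g} = \frac{\Delta l}{l} + 2\,\frac{\Delta T}{T}
$$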
I did not get the part where the relative error was calculated as least count over observation, because if one is measuring time, the only places one can make an error are at the start or at the end, and that would be human error and/or least-count error (1 s in this case). But my teacher told me to learn, as a formula, that the error equals the least count over the observation. Why?
Also
I did not get the part of the solution where the value of T is taken to be 2 s. Why?
And while calculating the percentage error, the solution takes ΔT = 1 s. Why?
I think most of these confusions boil down to understanding the terminology.
There is a standard formula in mechanics to compute the period of oscillation of a simple pendulum, given its length. Given this formula, and the information in the question "A student uses a simple pendulum of length 1 m", you should be able to verify that the period of oscillation is (approximately) 2 s.
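For reference, using the standard small-angle formula and the textbook value $g approx 9.8 m/s^2$ (an assumed value; anything near $10\,m/s^2$ gives the same conclusion):

$$
T = 2\pi\sqrt{\frac{l}{g}} = 2\pi\sqrt{\frac{1\,\mathrm{m}}{9.8\,\mathrm{m/s^2}}} \approx 2.0\,\mathrm{s}
$$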
The question states "He uses a stopwatch with the least count of 1s" -- this information should answer your question about how the value of $\Delta T$ is chosen.
"Why divide $Delta T$ by the observation time?" -- there is a real conceptual question here worth thinking about. It's true that we make an error of $Delta T=1s$ on the total observation. However, since we have 10 observations, the error can be divided by the number of observations. Here's an example to illustrate the main point. Suppose we observed the pendulum for 1 oscillation. Then, we don't know if the true period is 9.5s or 10.5s. But now let's suppose we watch the pendulum for 100 periods (we are very bored!). If the true period is 10s, then the reading on our stopwatch will be somewhere between 999.5s and 1000.5s. On the other hand, if the period was 9.5 seconds, then the reading on the stop watch would be somewhere between 949.5s and 950.5s. Thus watching for 100 periods has given us some additional juice! While with an observation of 1 period, it was possible that the true period was 9.5s, when we observe for 100 periods, if we get a reading of 999.7s, it is extremely unlikely that the true period is 9.5s.
Correct answer by Andrew on August 3, 2020
The absolute error for $N=20$ oscillations is $\Delta T_{20}=1s$, because this is the precision of the watch. In order to get the error per oscillation we simply divide by $N$. Hence, the absolute error per oscillation is $\Delta T_1 = \Delta T_{N}/N = \Delta T_{20}/20 = 0.05 s$.
Why? Here we are not interested in random errors, which would add up for each oscillation; we are interested in the uncertainty of the measurement device. By timing many oscillations this uncertainty remains constant (it is always $1s$), so we attribute only $1/N$ of it to each oscillation.
Side remark: digging deeper into statistics, you will find that we should actually use a uniform distribution of width $1s$ and take its standard deviation $\sigma = \sqrt{1/12}\,s$ as the uncertainty of the device. However, let's keep things simple and stick with $\Delta T_{20}=1s$.
The argument presented above considers only the uncertainty of the device itself. We do not account for the time shift due to an imperfect start and stop; we could also take these errors into account (finite reaction time). However, they are hopefully "small" compared to $1s$.
The relative error for $N=20$ oscillations is defined as $\Delta T_{20}/T_{20}$, where $T_{20}=40s$. Hence, the relative error for a single oscillation is $\Delta T_1/T_1$, where $T_1 = T_{20}/N = 40s/20 = 2s$. Thus, we obtain
$$
\frac{\Delta T_1}{T_1} = \frac{0.05\,s}{2\,s} = \frac{1}{40}
$$
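As a quick numerical check of these numbers, here is a minimal Python sketch (the variable names are mine; the worst-case error-addition rule $\Delta g/g = \Delta l/l + 2\Delta T/T$ is the one used in the question's solution):

```python
import math

l, dl = 1.0, 1e-3               # length (m) and its error (m)
N = 20                          # number of oscillations timed
T_total, dT_total = 40.0, 1.0   # stopwatch reading (s) and its least count (s)

T1 = T_total / N                # period of one oscillation: 2.0 s
dT1 = dT_total / N              # error per oscillation: 0.05 s

g = 4 * math.pi**2 * l / T1**2        # ~9.87 m/s^2
rel_err_g = dl / l + 2 * dT1 / T1     # worst-case relative error in g
print(f"T = {T1} s, dT = {dT1} s, dg/g = {rel_err_g:.1%}")
# T = 2.0 s, dT = 0.05 s, dg/g = 5.1%
```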
Does this help?
Answered by Semoi on August 3, 2020