Exercise 8: Hypothesis testing – answers

1.1 in parameters
1.2 α
1.4 yes
1.5 yes
1.6 a
1.7 no – if H0 is true, keeping it is the right decision
1.8 b
1.9 β; test power is 1 − β
1.10 Test power increases (a) with increasing sample size, (b) with increasing α, and (c) when the real value of the tested parameter lies further from the value assumed by the null hypothesis (e.g. if the null hypothesis states H0: μ = 100, test power is higher if the real parameter value is μ = 110 or 90 than if it is μ = 105 or 95). A numerical illustration of all three effects is sketched after the answer list.
1.11 Type I error; β

2.1 the logical negation of the alternative hypothesis
2.2 the alternative hypothesis – the one we are interested in
2.3 rejecting the null hypothesis even though it is true (i.e. an erroneous “confirmation” of the alternative hypothesis)
2.4 accepting the null hypothesis even though it is false (i.e. an erroneous rejection of the alternative hypothesis)
2.5 the probability of making a Type I error
2.6 the probability of avoiding a Type II error, that is, the probability of rejecting the null hypothesis when it is false

3. b, c, d

4.1 b [this does not hold for t, which depends on degrees of freedom]
4.2 a
4.3 a

5.1 a
5.2 a, e
5.3 no, no

6.1 c
6.2 yes
6.3 a
6.4 t, because we usually do not know σ
6.5 10, 59, 100
6.6 2.09 for a two-tailed test; 1.73 for a one-tailed test

7.1 0.05
7.2 no, we need more information
7.3 yes, α is the maximum risk of a Type I error we are willing to take
7.4 If the null hypothesis was not rejected, we cannot make a Type I error.
7.5 As in the previous case: if we reject the null hypothesis, we cannot make a Type II error.

8.1 a
8.2 We ask how probable the obtained sample statistic would be if the null hypothesis were true; e.g. if we measured m = 3 in a sample, we ask how probable this value is under the null hypothesis μ = 0. If this probability is sufficiently low, we can reject the null hypothesis.
8.3 We are testing the probability of obtaining a certain value of the statistic if the null hypothesis were true, not the probability of the null hypothesis given our statistic. These are not the same thing (see Bayes’ theorem), and we have to take this into account when interpreting statistical significance.

9.1 H0: μ = 6.8
9.2 t
9.3 s_m = 1.8/√16 = 0.45
9.4 t = (8.0 − 6.8)/0.45 = 2.67
9.5 The critical value satisfies P(|t| ≥ t_crit) = 0.05: t_crit = TINV(0.05;15) = 2.13; for α = 0.01 the critical t = 2.95.
9.6 at the 1% level no, at the 5% level yes
9.7 95% CI = m ± 2.13·s_m = (7.04; 8.96); 99% CI = m ± 2.95·s_m = (6.67; 9.33) (a calculation sketch for this question follows the answer list)

10.1 H0: μ = 176.53
10.2 z
10.3 no
10.4 σ_m = 7.62/√25 ≈ 1.5
10.5 |z| = (176.53 − 171.45)/1.5 = 3.39
10.6 yes, p = P(|z| ≥ 3.39) = 2·(1 − NORM.S.DIST(3.39;TRUE)) = 0.0007 (a calculation sketch for this question also follows the answer list)
10.7 yes, there is only one standard normal distribution and it does not depend on degrees of freedom
10.8 no, it would be half as large, that is 0.76

12. Type I error, which occurs if we erroneously reject a true null hypothesis
13. Type I error, because that is the one influenced by α: α is the probability of making a Type I error, that is, of rejecting a true null hypothesis
15. Type I error
16. b
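
Numerical illustration for answer 1.10 (a sketch in Python/scipy, not part of the original exercise): power of a two-sided one-sample z-test of H0: μ = 100. The null value 100 is taken from the example in the answer; the population standard deviation σ = 15 and the sample sizes, significance levels and true means below are hypothetical values chosen only to show the three effects.

from scipy.stats import norm

def z_test_power(mu_true, mu0=100.0, sigma=15.0, n=25, alpha=0.05):
    # Power = P(reject H0) of a two-sided z-test when the true mean is mu_true.
    z_crit = norm.ppf(1 - alpha / 2)              # two-tailed critical value
    delta = (mu_true - mu0) / (sigma / n ** 0.5)  # shift of the test statistic under mu_true
    return (1 - norm.cdf(z_crit - delta)) + norm.cdf(-z_crit - delta)

# (a) power grows with sample size (n = 10 vs n = 40)
print(z_test_power(110, n=10), z_test_power(110, n=40))
# (b) power grows with alpha (0.01 vs 0.05)
print(z_test_power(110, alpha=0.01), z_test_power(110, alpha=0.05))
# (c) power grows with the distance of the true mean from the null value (105 vs 110)
print(z_test_power(105), z_test_power(110))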
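
Calculation sketch for question 9, assuming the summary statistics implied by the answers (sample mean m = 8.0, s = 1.8, n = 16, H0: μ = 6.8); scipy's t.ppf is used here in place of Excel's TINV.

from scipy.stats import t

m, s, n, mu0 = 8.0, 1.8, 16, 6.8
se = s / n ** 0.5                        # 9.3: 1.8/sqrt(16) = 0.45
t_stat = (m - mu0) / se                  # 9.4: 2.67
df = n - 1
t_crit_05 = t.ppf(1 - 0.05 / 2, df)      # 9.5: 2.13, as TINV(0.05;15)
t_crit_01 = t.ppf(1 - 0.01 / 2, df)      # 9.5: 2.95
p_value = 2 * (1 - t.cdf(abs(t_stat), df))       # significant at 5%, not at 1% (9.6)
ci95 = (m - t_crit_05 * se, m + t_crit_05 * se)  # 9.7: (7.04; 8.96)
ci99 = (m - t_crit_01 * se, m + t_crit_01 * se)  # 9.7: (6.67; 9.33)
print(t_stat, t_crit_05, t_crit_01, p_value, ci95, ci99)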
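
Calculation sketch for question 10, assuming the values used in the answers (sample mean 171.45, H0: μ = 176.53, σ = 7.62, n = 25). The answer key rounds the standard error to 1.5, which gives |z| = 3.39 and p ≈ 0.0007; without rounding, |z| ≈ 3.33 and p ≈ 0.0009, with the same conclusion.

from scipy.stats import norm

m, sigma, n, mu0 = 171.45, 7.62, 25, 176.53
se = sigma / n ** 0.5                    # 10.4: 7.62/sqrt(25) = 1.524 (about 1.5)
z = (m - mu0) / se                       # 10.5: about -3.33 (-3.39 with se rounded to 1.5)
p_value = 2 * (1 - norm.cdf(abs(z)))     # 10.6: about 0.0009; reject H0
print(se, z, p_value)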