Statistical inference I and II
Stanislav Katina (ÚMS, MU)
13.03.2024

Book: Aplikovaná štatistická inferencia I (Applied Statistical Inference I)
Figure 1: Book Aplikovaná štatistická inferencia I

Table of contents
1 Overview of testing of statistical hypotheses
2 Three test statistics
3 Confidence intervals
4 Overview of the tests
5 Generalised hypotheses
6 One-sample tests about $\mu$
7 Paired tests about $\mu$
8 One-sample tests about $\sigma^2$
9 One-sample tests about skewness and kurtosis
10 One-sample tests about the correlation coefficient

6.5 One-sample tests about $\mu$ (cont.)

$H_{01}$ vs $H_{11}$. Let $X \sim N(\mu, \sigma^2)$, where $\boldsymbol{\theta} = (\mu, \sigma^2)^T$. Then the likelihood function equals
$$L(\boldsymbol{\theta}\,|\,\mathbf{x}) = (2\pi\sigma^2)^{-n/2} \exp\Big(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2\Big)$$
and the log-likelihood function equals
$$\ell(\boldsymbol{\theta}\,|\,\mathbf{x}) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2.$$
If $H_{01}$ is true, $\hat{\boldsymbol{\theta}}_0 = (\theta_0, \hat{\theta}_{2|0})^T$, where $\theta_0 = \mu_0$ and
$$\hat{\theta}_{2|0} = \hat{\sigma}_0^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i-\mu_0)^2 = s_n^2 + (\bar{x}-\mu_0)^2,$$
where $s_n^2 = \hat{\sigma}^2$. The variance $\hat{\sigma}_0^2$ is the solution of $S_2(\boldsymbol{\theta}_0) = \frac{\partial}{\partial\sigma^2}\ell(\boldsymbol{\theta}\,|\,\mathbf{x}) = 0$ evaluated at $\mu = \mu_0$, $\sigma^2 = \hat{\sigma}_0^2$: from
$$\frac{1}{2}\Big(\frac{1}{\hat{\sigma}_0^4}\sum_{i=1}^{n}(x_i-\mu_0)^2 - \frac{n}{\hat{\sigma}_0^2}\Big) = 0$$
we get $\hat{\sigma}_0^2$.

6.6 One-sample tests about $\mu$ (cont.)

If $H_{01}$ is not true, the MLE of $\boldsymbol{\theta}$ is
$$\hat{\boldsymbol{\theta}} = (\hat{\theta}_1, \hat{\theta}_2)^T = (\bar{x}, \hat{\sigma}^2)^T = \Big(\frac{1}{n}\sum_{i=1}^{n}x_i,\ \frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2\Big)^T.$$
Then
$$u_{\mathrm{LR}} = -2\ln\lambda(\mathbf{x}) = 2\big(\ell(\hat{\boldsymbol{\theta}}\,|\,\mathbf{x}) - \ell(\hat{\boldsymbol{\theta}}_0\,|\,\mathbf{x})\big) = 2\Big(\Big(-\frac{n}{2}\big(\ln(2\pi)+\ln\hat{\sigma}^2+1\big)\Big) - \Big(-\frac{n}{2}\big(\ln(2\pi)+\ln\hat{\sigma}_0^2+1\big)\Big)\Big) = n\ln\frac{\hat{\sigma}_0^2}{\hat{\sigma}^2}.$$
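The closed form $u_{\mathrm{LR}} = n\ln(\hat{\sigma}_0^2/\hat{\sigma}^2)$ can be checked numerically against the definition $2(\ell(\hat{\boldsymbol{\theta}}\,|\,\mathbf{x}) - \ell(\hat{\boldsymbol{\theta}}_0\,|\,\mathbf{x}))$, as can the decomposition $\hat{\sigma}_0^2 = s_n^2 + (\bar{x}-\mu_0)^2$. A minimal sketch in Python (NumPy); the simulated sample and the value $\mu_0 = 18$ are illustrative, not from the text:

```python
import numpy as np

def loglik_normal(x, mu, sigma2):
    # log-likelihood of an i.i.d. N(mu, sigma2) sample
    n = len(x)
    return (-0.5 * n * np.log(2 * np.pi) - 0.5 * n * np.log(sigma2)
            - np.sum((x - mu) ** 2) / (2 * sigma2))

def u_lr(x, mu0):
    # likelihood-ratio statistic u_LR = n * ln(sigma0_hat^2 / sigma_hat^2)
    n = len(x)
    xbar = np.mean(x)
    sigma2_hat = np.mean((x - xbar) ** 2)   # unrestricted MLE of sigma^2
    sigma2_0 = np.mean((x - mu0) ** 2)      # restricted MLE under H0: mu = mu0
    return n * np.log(sigma2_0 / sigma2_hat)

rng = np.random.default_rng(1)              # illustrative data
x = rng.normal(loc=20.0, scale=10.0, size=50)
mu0 = 18.0
xbar = np.mean(x)

# definition of u_LR as twice the log-likelihood difference
direct = 2 * (loglik_normal(x, xbar, np.mean((x - xbar) ** 2))
              - loglik_normal(x, mu0, np.mean((x - mu0) ** 2)))
```

Both routes give the same value, and the restricted variance MLE splits exactly into $s_n^2 + (\bar{x}-\mu_0)^2$.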
6.7 One-sample tests about $\mu$ (cont.)

One-sample likelihood ratio test statistic as a function of the one-sample Student $t$-statistic. $U_{\mathrm{LR}} = -2\ln\lambda(\mathbf{X}) \overset{\mathcal{D}}{\sim} \chi^2_1$, where $H_0$ is rejected for large values of the ratio $\hat{\sigma}_0^2/\hat{\sigma}^2$. Now $\hat{\sigma}_0^2 = \hat{\sigma}^2 + (\bar{x}-\mu_0)^2$, and therefore $\hat{\sigma}_0^2/\hat{\sigma}^2 = 1 + (\bar{x}-\mu_0)^2/\hat{\sigma}^2$. We know that $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})^2 = \frac{n}{n-1}\hat{\sigma}^2$, so $\hat{\sigma}_0^2/\hat{\sigma}^2$ is an increasing function of $|t_W|$ and
$$u_{\mathrm{LR}} = -2\ln\lambda(\mathbf{x}) = n\ln\frac{\hat{\sigma}_0^2}{\hat{\sigma}^2} = n\ln\Big(1 + \frac{(\bar{x}-\mu_0)^2}{s_n^2}\Big) = n\ln\Big(1 + \frac{u_{\mathrm{W}}}{n}\Big) = n\ln\Big(1 + \frac{t_W^2}{n-1}\Big),$$
where $u_{\mathrm{W}} = \frac{n(\bar{x}-\mu_0)^2}{s_n^2} = \frac{n(\bar{x}-\mu_0)^2}{\frac{n-1}{n}s^2} = \frac{n}{n-1}t_W^2$.

$H_{02}$ vs $H_{12}$. Then $u_{\mathrm{LR}} = 0$ if $t_W \le 0$ and $u_{\mathrm{LR}} = n\ln\big(1 + \frac{t_W^2}{n-1}\big)$ for $t_W > 0$.
$H_{03}$ vs $H_{13}$. Then $u_{\mathrm{LR}} = 0$ if $t_W \ge 0$ and $u_{\mathrm{LR}} = n\ln\big(1 + \frac{t_W^2}{n-1}\big)$ for $t_W < 0$.

6.8 One-sample tests about $\mu$ (cont.)

One-sample score statistic as a function of the one-sample Student $t$-statistic. We know that
$$U_{\mathrm{S}} = \big(\mathbf{a}^T S(\hat{\boldsymbol{\theta}}_0)\big)^T \big(\mathbf{a}^T (\mathcal{I}(\hat{\boldsymbol{\theta}}_0))^{-1}\mathbf{a}\big) \big(\mathbf{a}^T S(\hat{\boldsymbol{\theta}}_0)\big),$$
where $\mathcal{I}_{11}(\hat{\boldsymbol{\theta}}_0) = n/\hat{\sigma}_0^2$ and $S_1(\hat{\boldsymbol{\theta}}_0) = S(\mu_0) = \frac{1}{\hat{\sigma}_0^2}\sum_{i=1}^{n}(x_i-\mu_0)$. Then
$$u_{\mathrm{S}} = \Big(\frac{1}{\hat{\sigma}_0^2}\sum_{i=1}^{n}(x_i-\mu_0)\Big)^2 \frac{\hat{\sigma}_0^2}{n} = \Big(\frac{1}{\hat{\sigma}_0^2}\Big(\sum_{i=1}^{n}x_i - n\mu_0\Big)\Big)^2 \frac{\hat{\sigma}_0^2}{n} = \frac{1}{n\hat{\sigma}_0^2}(n\bar{x}-n\mu_0)^2 = \frac{n(\bar{x}-\mu_0)^2}{\hat{\sigma}_0^2}$$
$$= \frac{n(\bar{x}-\mu_0)^2}{s_n^2 + (\bar{x}-\mu_0)^2} = \frac{n(\bar{x}-\mu_0)^2}{s_n^2\big(1 + \frac{(\bar{x}-\mu_0)^2}{s_n^2}\big)} = \frac{u_{\mathrm{W}}}{1 + \frac{(\bar{x}-\mu_0)^2}{s_n^2}} = \frac{n\,u_{\mathrm{W}}}{n + u_{\mathrm{W}}} = \frac{n\,t_W^2}{n - 1 + t_W^2}.$$

6.9 One-sample tests about $\mu$ (cont.)

One-sample Wald statistic as a function of the one-sample Student $t$-statistic. We know that
$$U_{\mathrm{W}} = \big(\mathbf{a}^T\hat{\boldsymbol{\theta}} - \mathbf{a}^T\hat{\boldsymbol{\theta}}_0\big)^T \big(\mathbf{a}^T \mathcal{I}(\hat{\boldsymbol{\theta}})\mathbf{a}\big) \big(\mathbf{a}^T\hat{\boldsymbol{\theta}} - \mathbf{a}^T\hat{\boldsymbol{\theta}}_0\big),$$
where $\mathcal{I}_{11}(\hat{\boldsymbol{\theta}}) = \mathcal{I}(\hat{\mu}) = n/s_n^2$. Then
$$u_{\mathrm{W}} = (\bar{x}-\mu_0)^2\,\frac{n}{\hat{\sigma}^2} = \frac{n(\bar{x}-\mu_0)^2}{s_n^2} = \frac{n}{n-1}\,t_W^2.$$
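The three identities — $u_{\mathrm{W}} = \frac{n}{n-1}t_W^2$, $u_{\mathrm{S}} = \frac{n t_W^2}{n-1+t_W^2}$ and $u_{\mathrm{LR}} = n\ln(1+\frac{t_W^2}{n-1})$ — can be cross-checked against the direct definitions through $\hat{\sigma}^2$ and $\hat{\sigma}_0^2$. A sketch with simulated data (the sample and $\mu_0 = 0.3$ are arbitrary illustrations):

```python
import numpy as np

def stats_from_t(t, n):
    # Wald, likelihood-ratio and score statistics expressed through t_W
    u_w = n * t ** 2 / (n - 1)
    u_lr = n * np.log(1 + t ** 2 / (n - 1))
    u_s = n * t ** 2 / (n - 1 + t ** 2)
    return u_w, u_lr, u_s

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=30)   # illustrative data
mu0 = 0.3
n, xbar = len(x), np.mean(x)
s = np.std(x, ddof=1)               # sample standard deviation s
t_w = (xbar - mu0) / s * np.sqrt(n)
u_w, u_lr, u_s = stats_from_t(t_w, n)

# direct definitions via the two variance estimates
sn2 = np.mean((x - xbar) ** 2)      # sigma_hat^2 = s_n^2
s02 = sn2 + (xbar - mu0) ** 2       # sigma0_hat^2
```

All three $t$-based expressions agree with the definitions via $s_n^2$ and $\hat{\sigma}_0^2$.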
6.10 One-sample tests about $\mu$ (cont.)

Confidence intervals. The Wald $100\times(1-\alpha)\%$ empirical confidence interval for $\mu$ is defined as
$$\mathcal{CS}^{(\mathrm{W})}_{1-\alpha} = \{\mu_0 : U_{\mathrm{W}}(\mu_0) < \chi^2_1(\alpha)\}.$$
The likelihood $100\times(1-\alpha)\%$ empirical confidence interval for $\mu$ is defined as
$$\mathcal{CS}^{(\mathrm{LR})}_{1-\alpha} = \{\mu_0 : U_{\mathrm{LR}}(\mu_0) < \chi^2_1(\alpha)\}.$$
The score $100\times(1-\alpha)\%$ empirical confidence interval for $\mu$ is defined as
$$\mathcal{CS}^{(\mathrm{S})}_{1-\alpha} = \{\mu_0 : U_{\mathrm{S}}(\mu_0) < \chi^2_1(\alpha)\}.$$

6.11 One-sample tests about $\mu$ (cont.)

Using the inequality
$$\frac{x}{1+x} < \ln(1+x) < x, \quad x > -1,\ x \neq 0,$$
with $x = T_W^2/(n-1)$, the three test statistics can be ordered as $U_{\mathrm{S}} < U_{\mathrm{LR}} < U_{\mathrm{W}}$. Indeed,
$$U_{\mathrm{S}} = \frac{n T_W^2}{n-1+T_W^2}\cdot\frac{\frac{1}{n-1}}{\frac{1}{n-1}} = \frac{n\,\frac{T_W^2}{n-1}}{1+\frac{T_W^2}{n-1}} = \frac{nx}{1+x}, \qquad U_{\mathrm{LR}} = n\ln\Big(1+\frac{T_W^2}{n-1}\Big) = n\ln(1+x), \qquad U_{\mathrm{W}} = \frac{n}{n-1}T_W^2 = nx.$$
Note: If $H_0$ is true, $T_W = \frac{\bar{X}-\mu_0}{S}\sqrt{n} \overset{\mathcal{D}}{\sim} t_{n-1}$.

6.12 One-sample tests about $\mu$ (cont.)

If $H_0$ is true, the probabilities of Type I error are
$$\alpha_{\mathrm{W}} = \Pr\big(U_{\mathrm{W}} \ge \chi^2_1(\alpha)\big) = \Pr\Big(F_{1,n-1} \ge \frac{n-1}{n}\chi^2_1(\alpha)\Big),$$
$$\alpha_{\mathrm{LR}} = \Pr\big(U_{\mathrm{LR}} \ge \chi^2_1(\alpha)\big) = \Pr\Big(F_{1,n-1} \ge (n-1)\Big[\exp\Big(\frac{\chi^2_1(\alpha)}{n}\Big)-1\Big]\Big)$$
and
$$\alpha_{\mathrm{S}} = \Pr\big(U_{\mathrm{S}} \ge \chi^2_1(\alpha)\big) = \Pr\Big(F_{1,n-1} \ge \frac{n-1}{n-\chi^2_1(\alpha)}\,\chi^2_1(\alpha)\Big).$$
Note: $U_{\mathrm{W}}$ is often liberal, $U_{\mathrm{LR}}$ slightly liberal and $U_{\mathrm{S}}$ is usually neither liberal nor conservative. The limiting (asymptotic) distribution of all three test statistics, i.e. for sufficiently large $n$, is the $\chi^2_1$ distribution.

6.13 One-sample tests about $\mu$ (cont.)

Let $X \sim N(\mu, \sigma^2)$, where $\sigma^2$ is unknown. We would like to test the hypotheses:
1 $H_{01}: \mu = \mu_0$ vs $H_{11}: \mu \neq \mu_0$,
2 $H_{02}: \mu \le \mu_0$ vs $H_{12}: \mu > \mu_0$,
3 $H_{03}: \mu \ge \mu_0$ vs $H_{13}: \mu < \mu_0$.
Under $H_0$,
$$T_W = \frac{\bar{X}-\mu_0}{S}\sqrt{n} \overset{\mathcal{D}}{\sim} t_{n-1},$$
where $S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i-\bar{X})^2$ and the distribution $t_{\mathrm{df}}$, where $\mathrm{df} = n-1$, is called the central $t$-distribution with $n-1$ degrees of freedom.
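Since $T_W^2 \overset{\mathcal{D}}{\sim} F_{1,n-1}$ under $H_0$, the exact Type I error rates $\alpha_{\mathrm{W}}$, $\alpha_{\mathrm{LR}}$, $\alpha_{\mathrm{S}}$ from slide 6.12 can be evaluated with scipy.stats. A sketch (the sample size $n = 20$ is an arbitrary illustration; the formula for $\alpha_{\mathrm{S}}$ needs $n > \chi^2_1(\alpha)$):

```python
import numpy as np
from scipy.stats import chi2, f

def exact_alphas(n, alpha=0.05):
    # exact sizes of the three asymptotic chi^2_1 tests, via T_W^2 ~ F(1, n-1)
    c = chi2.ppf(1 - alpha, df=1)                         # chi^2_1(alpha) critical value
    a_w = f.sf((n - 1) / n * c, 1, n - 1)                 # Wald
    a_lr = f.sf((n - 1) * (np.exp(c / n) - 1), 1, n - 1)  # likelihood ratio
    a_s = f.sf((n - 1) * c / (n - c), 1, n - 1)           # score (requires n > c)
    return a_w, a_lr, a_s

a_w, a_lr, a_s = exact_alphas(20)
```

For $n = 20$ this reproduces the note above numerically: the Wald test is liberal ($\alpha_{\mathrm{W}} > 0.05$), the LR test less so, and the score test is closest to (here slightly below) the nominal level, with $\alpha_{\mathrm{S}} < \alpha_{\mathrm{LR}} < \alpha_{\mathrm{W}}$ mirroring $U_{\mathrm{S}} < U_{\mathrm{LR}} < U_{\mathrm{W}}$.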
$T_W$ is called the one-sample Student test statistic (or one-sample $t$-statistic) and the test is the one-sample Student $t$-test about $\mu$.

6.14 One-sample tests about $\mu$ (cont.)

If $H_0$ is not true (under the alternative $H_1$), this situation leads to the non-central $t$-distribution with $\mathrm{df}$ degrees of freedom and non-centrality parameter $\lambda$, denoted $t_{\mathrm{df},\lambda}$, where
$$T_{W,\lambda} = \frac{Z_W + \lambda}{\sqrt{V/\mathrm{df}}} = \frac{\sqrt{n}(\bar{X}-\mu)/\sigma + \sqrt{n}(\mu-\mu_0)/\sigma}{S/\sigma} \overset{\mathcal{D}}{\sim} t_{\mathrm{df},\lambda}, \quad \mathrm{df} = n-1,$$
$Z_W = \frac{\bar{X}-\mu}{\sigma}\sqrt{n} \overset{\mathcal{D}}{\sim} N(0,1)$, the non-centrality parameter is $\lambda = (\delta/\sigma)\sqrt{n}$, where $\delta = \mu - \mu_0$ is the minimal detectable distance between $\mu$ and $\mu_0$, and $V = \frac{(n-1)S^2}{\sigma^2} \overset{\mathcal{D}}{\sim} \chi^2_{\mathrm{df}}$ is independent of $Z_W$. Let the cumulative distribution function of $T_{W,\lambda}$ be $G_{\mathrm{df},\lambda}(t) = \Pr(T_{W,\lambda} \le t)$. If $\lambda = 0$, the non-central $t$-distribution is equivalent to the central (Student) $t$-distribution.

6.15 One-sample tests about $\mu$ (cont.)

If $H_{02}$ is not true (under the alternative $H_{12}$), the power function is
$$1-\beta(\mu,\sigma) = \Pr_{\mu,\sigma}\Big(\frac{\bar{X}-\mu_0}{S}\sqrt{n} \ge t_{n-1}(\alpha)\Big) = 1 - G_{n-1,\lambda}\big(t_{n-1}(\alpha)\big),$$
where the subscript $(\mu,\sigma)$ means that the probability is calculated under $X \sim N(\mu,\sigma^2)$, $\mu > \mu_0$.
If $H_{03}$ is not true (under the alternative $H_{13}$), the power function is
$$1-\beta(\mu,\sigma) = \Pr_{\mu,\sigma}\Big(\frac{\bar{X}-\mu_0}{S}\sqrt{n} \le -t_{n-1}(\alpha)\Big) = G_{n-1,\lambda}\big(-t_{n-1}(\alpha)\big),$$
where the subscript $(\mu,\sigma)$ means that the probability is calculated under $X \sim N(\mu,\sigma^2)$, $\mu < \mu_0$.
If $H_{01}$ is not true (under the alternative $H_{11}$), the power function is
$$1-\beta(\mu,\sigma) = \Pr_{\mu,\sigma}\Big(\frac{\bar{X}-\mu_0}{S}\sqrt{n} \le -t_{n-1}(\alpha/2) \ \vee\ \frac{\bar{X}-\mu_0}{S}\sqrt{n} \ge t_{n-1}(\alpha/2)\Big) = 1 - G_{n-1,\lambda}\big(t_{n-1}(\alpha/2)\big) + G_{n-1,\lambda}\big(-t_{n-1}(\alpha/2)\big),$$
where the subscript $(\mu,\sigma)$ means that the probability is calculated under $X \sim N(\mu,\sigma^2)$, $\mu \neq \mu_0$.
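The three power functions can be evaluated directly from the non-central $t$ CDF $G_{\mathrm{df},\lambda}$; a sketch using scipy.stats.nct (the parameter values below are illustrative, not from the text):

```python
import numpy as np
from scipy.stats import nct, t as t_dist

def power_t_test(mu, mu0, sigma, n, alpha=0.05, alternative="two-sided"):
    # power of the one-sample t-test via G_{n-1,lambda}, lambda = sqrt(n)*(mu-mu0)/sigma
    df = n - 1
    lam = np.sqrt(n) * (mu - mu0) / sigma
    if alternative == "greater":            # H12: mu > mu0
        return 1 - nct.cdf(t_dist.ppf(1 - alpha, df), df, lam)
    if alternative == "less":               # H13: mu < mu0
        return nct.cdf(-t_dist.ppf(1 - alpha, df), df, lam)
    tq = t_dist.ppf(1 - alpha / 2, df)      # H11: mu != mu0
    return 1 - nct.cdf(tq, df, lam) + nct.cdf(-tq, df, lam)

p0 = power_t_test(0.0, 0.0, 2.0, 20)        # at mu = mu0 the power equals alpha
p1 = power_t_test(1.0, 0.0, 2.0, 20)
p2 = power_t_test(1.0, 0.0, 2.0, 50)        # power grows with n
```

At $\mu = \mu_0$ ($\lambda = 0$) the two-sided power reduces to the significance level $\alpha$, and for fixed $\delta = \mu - \mu_0$ it increases with $n$.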
6.16 One-sample tests about $\mu$ (cont.)

Critical regions and power functions related to $T_W$:

$H_0: \mu = \mu_0$ vs $H_1: \mu \neq \mu_0$: $\mathcal{W}_1 = \{T_W : |T_W| \ge t_{n-1}(\alpha/2)\}$, $1-\beta(\mu) = \Pr(F_{1,n-1,\lambda^2} \ge F_{1,n-1}(\alpha))$.
$H_0: \mu \le \mu_0$ vs $H_1: \mu > \mu_0$: $\mathcal{W}_2 = \{T_W : T_W \ge t_{n-1}(\alpha)\}$, $1-\beta(\mu) = \Pr(t_{n-1,\lambda} \ge t_{n-1}(\alpha))$.
$H_0: \mu \ge \mu_0$ vs $H_1: \mu < \mu_0$: $\mathcal{W}_3 = \{T_W : T_W \le -t_{n-1}(\alpha)\}$, $1-\beta(\mu) = \Pr(t_{n-1,\lambda} \le -t_{n-1}(\alpha))$.

Note: $t^2_{n-1}(\alpha/2) \approx F_{1,n-1}(\alpha)$ and $t^2_{n-1,\lambda}(\alpha/2) \approx F_{1,n-1,\lambda^2}(\alpha)$.
Note: The non-centrality parameter is $\lambda = (\delta/\sigma)\sqrt{n}$, $\delta = \mu - \mu_0$.

$$\text{p-value} = \begin{cases} 2\Pr(T_W \ge |t_W| \,|\, H_{01}), & \text{if } H_{11}: \mu \neq \mu_0,\\ \Pr(T_W \ge t_W \,|\, H_{02}), & \text{if } H_{12}: \mu > \mu_0,\\ \Pr(T_W \le t_W \,|\, H_{03}), & \text{if } H_{13}: \mu < \mu_0. \end{cases}$$

Confidence intervals $(\hat{\mu}_L, \hat{\mu}_U)$:
$H_0: \mu = \mu_0$ vs $H_1: \mu \neq \mu_0$: $\mathcal{CS}_{1-\alpha} = \{\mu_0 : \mu_0 \in (\bar{x} - t_{n-1}(\alpha/2)\frac{s}{\sqrt{n}},\ \bar{x} + t_{n-1}(\alpha/2)\frac{s}{\sqrt{n}})\}$.
$H_0: \mu \le \mu_0$ vs $H_1: \mu > \mu_0$: $\mathcal{CS}_{1-\alpha} = \{\mu_0 : \mu_0 \in (\bar{x} - t_{n-1}(\alpha)\frac{s}{\sqrt{n}},\ \infty)\}$.
$H_0: \mu \ge \mu_0$ vs $H_1: \mu < \mu_0$: $\mathcal{CS}_{1-\alpha} = \{\mu_0 : \mu_0 \in (-\infty,\ \bar{x} + t_{n-1}(\alpha)\frac{s}{\sqrt{n}})\}$.

6.17 One-sample tests about $\mu$ (cont.)

Figure: power functions of the one-sample $t$-test plotted against $\mu - \mu_0$ for $n = 10, 20, 30, 40, 50$; panels for $H_{11}: \mu - \mu_0 \neq 0$, $H_{12}: \mu - \mu_0 > 0$ and $H_{13}: \mu - \mu_0 < 0$, with $\mu_0 = 0$, $\sigma^2 = 4$, $\alpha = 0.05$.

6.18 One-sample Student $t$-test about mean $\mu$

Example (independence of $\bar{X}$ and $S$, coverage probability). Let $X \sim N(\mu, \sigma^2)$, where $\mu = 20$ and $\sigma^2 = 100$. Calculate the Pearson correlation coefficient $r_{\bar{X},S}$ based on a simulation study. Draw the points $(\bar{x}_m, s_m)$ as a scatter-plot (grey), where $m = 1, 2, \ldots, M$, $M = 100000$. Add the points with $t_{W,m} = \big|\frac{\bar{x}_m-\mu}{s_m}\sqrt{n}\big| < t_{n-1}(\alpha/2)$ (black), and a boundary defining the points $(\bar{x}_m, s_m)$ where $t_{W,m} = t_{n-1}(\alpha/2)$. Calculate the coverage probability of the 95% CI for $\mu$ as the ratio $\sum_m I\big(t_{W,m} < t_{n-1}(\alpha/2)\big)/M$. Use (a) $n = 5$, (b) $n = 50$ and (c) $n = 100$.
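The simulation in the example can be sketched as follows; to keep the run fast, $M$ is reduced here from the example's $M = 100000$ to $M = 20000$, which changes nothing qualitatively:

```python
import numpy as np
from scipy.stats import t as t_dist

def coverage_and_corr(n, mu=20.0, sigma=10.0, M=20000, alpha=0.05, seed=0):
    # M simulated samples of size n from N(mu, sigma^2), as in the example
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma, size=(M, n))
    xbar = x.mean(axis=1)
    s = x.std(axis=1, ddof=1)
    t_w = np.abs((xbar - mu) / s * np.sqrt(n))
    # coverage of the 95% t-interval: fraction of samples with |t_{W,m}| below the quantile
    cover = np.mean(t_w < t_dist.ppf(1 - alpha / 2, df=n - 1))
    r = np.corrcoef(xbar, s)[0, 1]          # Pearson correlation of x-bar_m and s_m
    return cover, r

cov5, r5 = coverage_and_corr(5)
cov50, r50 = coverage_and_corr(50)
```

The coverage stays close to the nominal $1-\alpha = 0.95$ for every $n$, and $r_{\bar{X},S}$ is near zero, reflecting the independence of $\bar{X}$ and $S$ in normal samples.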
6.19 One-sample Student $t$-test about mean $\mu$ (cont.)

Figure: scatter-plots of $\bar{x}_m$ and $s_m$, $m = 1, 2, \ldots, M$, $M = 100000$, for $n = 5$ ($r = 0.001942$, left), $n = 50$ ($r = -0.001793$, middle) and $n = 100$ ($r = -0.001838$, right). Lines indicate the boundary $t_{W,m} = \big|\frac{\bar{x}_m-\mu}{s_m}\sqrt{n}\big| = t_{n-1}(\alpha/2)$; the coverage probability is approximately equal to the nominal level $1-\alpha = 0.95$.

6.20 One-sample Student $t$-test about mean $\mu$ (cont.)

Example (non-central $t$-distribution). Draw the densities of one central and four non-central $t$-distributions $t_{n-1,\lambda}$ ($\delta = \mu - \mu_0$ and $\lambda = \delta/(\sigma/\sqrt{n})$) in one figure and distinguish them by colour or line type. Use $\mu_0 = 0$; $\delta = 0, 0.5, 0.8, 1$ and $1.2$; $\sigma = 1.4$ and $n = 26$.

Figure: densities of the non-central $t$-distribution with different values of the non-centrality parameter $\lambda$ ($\delta = 0, 0.5, 0.8, 1, 1.2$).
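The densities in the example can be evaluated with scipy.stats.nct using the example's $\mu_0 = 0$, $\sigma = 1.4$, $n = 26$; a sketch computing the density values on a grid (the plotting itself is omitted):

```python
import numpy as np
from scipy.stats import nct, t as t_dist

n, sigma = 26, 1.4
deltas = [0.0, 0.5, 0.8, 1.0, 1.2]           # delta = mu - mu0, with mu0 = 0
xs = np.linspace(-4, 10, 281)                # grid matching the figure's x-range

# one density per non-centrality parameter lambda = delta / (sigma / sqrt(n))
dens = {d: nct.pdf(xs, df=n - 1, nc=d / (sigma / np.sqrt(n))) for d in deltas}
```

For $\delta = 0$ the non-central density coincides with the central $t_{n-1}$ density, and increasing $\delta$ shifts the density to the right, exactly as in the figure.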