FYUG Even Semester Exam, 2025 STATISTICS: Statistical Inference (STADSC-251)

Subject: Statistics

Paper Code: TADSC-257 / STADSC-251

Semester: 4th Semester (FYUG)

Time: 3 Hours | Full Marks: 50

UNIT-I

Question 1 (a) [2 Marks]

Define estimate and estimator.

An estimator is a rule or formula (a function of sample observations) used to estimate an unknown population parameter. An estimate is the specific numerical value obtained by applying the estimator to a particular set of sample data.

Question 1 (b) [2 Marks]

Define consistency.

An estimator T_n is said to be a consistent estimator of a parameter theta if it converges in probability to theta as the sample size n approaches infinity. Formally:

P(|T_n - theta| < epsilon) approaches 1 as n approaches infinity, for every epsilon > 0.
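A small simulation sketch of this definition (the Uniform(0, 1) population, epsilon = 0.05, and the seed are illustrative assumptions, not from the question): the probability that the sample mean falls within epsilon of theta = 0.5 rises toward 1 as n grows.

```python
import random

# Sketch: empirical check that the sample mean of Uniform(0, 1) draws
# is consistent for theta = 0.5 (population and epsilon are assumptions).
random.seed(0)

theta, epsilon = 0.5, 0.05

def coverage(n, trials=1000):
    """Fraction of trials with |T_n - theta| < epsilon."""
    hits = 0
    for _ in range(trials):
        t_n = sum(random.random() for _ in range(n)) / n
        if abs(t_n - theta) < epsilon:
            hits += 1
    return hits / trials

for n in (10, 100, 1000):
    print(n, coverage(n))
```

The printed fractions increase with n, which is exactly the convergence-in-probability statement above.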

Question 1 (c) [2 Marks]

What is the factorization theorem of sufficiency?

The Neyman-Fisher factorization theorem states that a statistic T(x) is sufficient for a parameter theta if and only if the joint probability density (or mass) function, i.e., the likelihood L, can be factored into two non-negative functions:

L(x; theta) = g(T(x); theta) * h(x)

where g depends on the data only through T and h is independent of theta.
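A minimal numerical illustration (the Bernoulli model and the two sample vectors are assumptions for the sketch): two samples with the same value of T(x) = sum x_i have the same Bernoulli likelihood for every theta, because L depends on the data only through T.

```python
# Sketch: the Bernoulli(theta) likelihood depends on the data only
# through T(x) = sum x_i, as the factorization theorem guarantees.
def bern_likelihood(xs, theta):
    out = 1.0
    for x in xs:
        out *= theta if x == 1 else (1 - theta)
    return out

a = [1, 1, 0, 0, 0]   # T(a) = 2
b = [0, 1, 0, 1, 0]   # T(b) = 2, different ordering
for theta in (0.2, 0.5, 0.9):
    # equal up to floating-point rounding
    print(theta, abs(bern_likelihood(a, theta) - bern_likelihood(b, theta)) < 1e-12)
```

Here g(T; theta) = theta^T (1 - theta)^(n - T) and h(x) = 1, so the factorization is immediate.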

Question 2 (a)(i) [4 Marks]

Define unbiasedness. Show that [sum x_i (sum x_i - 1)] / [n(n-1)] is an unbiased estimator of theta^2 for a Bernoulli sample.

Definition: An estimator T is unbiased for parameter theta if its expected value is equal to the parameter, i.e., E(T) = theta.

Proof: Let x_1, ..., x_n be i.i.d. Bernoulli(theta), so that Y = sum x_i follows Binomial(n, theta).

The second factorial moment of a Binomial(n, theta) variable is E[Y(Y - 1)] = n(n-1)theta^2.

Therefore E{ [sum x_i (sum x_i - 1)] / [n(n-1)] } = E[Y(Y - 1)] / [n(n-1)] = n(n-1)theta^2 / [n(n-1)] = theta^2.

Thus, the estimator is unbiased for theta^2.
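A Monte Carlo check of this result (theta = 0.3, n = 20, and the seed are illustrative assumptions, not from the question): the average of the estimator over many samples should be close to theta^2 = 0.09.

```python
import random

# Sketch (assumed example: theta = 0.3, n = 20): check that
# T = Y(Y - 1) / (n(n - 1)), with Y = sum of Bernoulli(theta) draws,
# has expectation close to theta^2 = 0.09.
random.seed(1)

theta, n, trials = 0.3, 20, 50000

def estimate_once():
    y = sum(1 for _ in range(n) if random.random() < theta)
    return y * (y - 1) / (n * (n - 1))

mean_t = sum(estimate_once() for _ in range(trials)) / trials
print(mean_t)  # close to 0.09
```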

Question 2 (a)(ii) [2 Marks]

If T_n is consistent for theta, show Psi(T_n) is consistent for Psi(theta).

By the invariance property of consistent estimators, if T_n converges in probability to theta and Psi is a continuous function, then Psi(T_n) converges in probability to Psi(theta). This is a direct consequence of the Continuous Mapping Theorem (often stated alongside Slutsky's Theorem).

Question 2 (a)(iv) [4 Marks]

Find lambda for T_3 to be unbiased. Are T_1 and T_2 unbiased? Best among them?

Given T_1 = (x1+x2+x3+x4+x5)/5, T_2 = (x1+x2)/2 + x3, T_3 = (2x1 + x2 + lambda*x3)/3.

For T_3 to be unbiased: E(T_3) = mu.
[2mu + mu + lambda*mu]/3 = mu => 3 + lambda = 3 => lambda = 0.

Checking T_1 and T_2:
E(T_1) = 5mu/5 = mu, so T_1 is unbiased.
E(T_2) = (mu + mu)/2 + mu = 2mu, so T_2 is biased.

Best Estimator: Among the unbiased estimators T_1 and T_3 (with lambda = 0), T_1 is the best: taking Var(x_i) = sigma^2, Var(T_1) = sigma^2/5 while Var(T_3) = (4 + 1)sigma^2/9 = 5sigma^2/9, so T_1 has the minimum variance (it uses all the sample information equally).
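The variance comparison behind this conclusion can be sketched numerically (assuming i.i.d. observations with a common variance, taken as sigma^2 = 1 for illustration):

```python
# Sketch: the variance of a linear combination sum(c_i * x_i) of i.i.d.
# observations is sigma^2 * sum(c_i^2).
def lin_var(coeffs, sigma2=1.0):
    return sigma2 * sum(c * c for c in coeffs)

var_t1 = lin_var([1/5] * 5)          # T_1 = (x1 + ... + x5)/5
var_t2 = lin_var([1/2, 1/2, 1])      # T_2 = (x1 + x2)/2 + x3
var_t3 = lin_var([2/3, 1/3])         # T_3 = (2*x1 + x2)/3, lambda = 0

print(var_t1, var_t2, var_t3)  # 0.2, 1.5, 5/9
```

Since 0.2 < 5/9, T_1 beats T_3 among the unbiased candidates (T_2 is biased regardless of its variance).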

UNIT-II

Question 3 (a) [2 Marks]

Define MVUE.

Minimum Variance Unbiased Estimator (MVUE): An unbiased estimator that has the smallest variance among all unbiased estimators of the parameter, for every possible value of the parameter.

Question 4 (a)(i) [2 Marks]

Write regularity conditions of Cramer-Rao inequality.

The standard regularity conditions are:

  1. The parameter space is an open interval of the real line.
  2. The support {x : f(x; theta) > 0} does not depend on theta.
  3. The derivative (d/dtheta) log f(x; theta) exists for all x and theta.
  4. Differentiation with respect to theta and integration with respect to x can be interchanged, both for Integral L dx = 1 and for Integral T * L dx.
  5. The Fisher information I(theta) = E[(d/dtheta log L)^2] is finite and positive.

Question 4 (a)(ii) [5 Marks]

Prove the Cramer-Rao Inequality: V(t) >= [gamma'(theta)]^2 / I(theta).

Consider an estimator T that is unbiased for gamma(theta), i.e., E(T) = gamma(theta). Since Integral T * L dx = gamma(theta):

  1. Differentiate both sides w.r.t. theta: Integral T * (dL/dtheta) dx = gamma'(theta).
  2. Using dL/dtheta = L * (d/dtheta log L), and noting that E[d/dtheta log L] = 0 (differentiate Integral L dx = 1), this gives Cov(T, d/dtheta log L) = gamma'(theta).
  3. By the Cauchy-Schwarz inequality, [Cov(T, d/dtheta log L)]^2 <= Var(T) * Var(d/dtheta log L).
  4. Hence Var(T) * Var(d/dtheta log L) >= [gamma'(theta)]^2.
  5. Since E[d/dtheta log L] = 0, Var(d/dtheta log L) = E[(d/dtheta log L)^2] = I(theta), and therefore Var(T) >= [gamma'(theta)]^2 / I(theta).
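As a sanity check of the bound (a Bernoulli example assumed for illustration, not part of the question): for estimating gamma(theta) = theta from n Bernoulli(theta) trials, gamma'(theta) = 1 and I(theta) = n / [theta(1 - theta)], and the sample proportion attains the bound exactly.

```python
# Sketch: Cramer-Rao bound vs. the exact variance of the sample
# proportion for Bernoulli(theta) data (assumed example).
def cr_bound(theta, n):
    fisher_info = n / (theta * (1 - theta))  # I(theta) for n Bernoulli trials
    return 1.0 / fisher_info                 # [gamma'(theta)]^2 = 1

def var_sample_proportion(theta, n):
    return theta * (1 - theta) / n           # exact variance of the sample mean

for theta in (0.2, 0.5, 0.8):
    print(theta, cr_bound(theta, 10), var_sample_proportion(theta, 10))
```

The two columns agree, which is why the sample proportion is the MVUE here.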

UNIT-III

Question 5 (a) [2 Marks]

Define critical region and level of significance.

Critical Region (w): The set of values of the test statistic for which the null hypothesis is rejected.

Level of Significance (alpha): The probability of committing a Type-I error (rejecting H0 when it is true).
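As an illustration of the two definitions (the one-sided Normal test and the cutoff 1.645 are assumptions for the sketch): for a standard normal test statistic Z, the critical region {z > 1.645} has level of significance alpha = P(Z > 1.645 | H0), approximately 0.05.

```python
import math

# Sketch: level of significance of the critical region {z > 1.645}
# for a standard normal test statistic under H0 (assumed example).
def std_normal_sf(z):
    """Upper-tail probability P(Z > z) for Z ~ N(0, 1)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

alpha = std_normal_sf(1.645)
print(round(alpha, 4))  # approximately 0.05
```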

Question 6 (a)(i) [4 Marks]

Find MLE for Poisson distribution.

Likelihood L = Product [exp(-lambda) * lambda^x_i / x_i!]

log L = -n*lambda + (sum x_i)log(lambda) - sum log(x_i!)

d/dlambda(log L) = -n + (sum x_i)/lambda = 0

lambda_hat = (sum x_i) / n = sample mean. (The second derivative, -(sum x_i)/lambda^2 < 0, confirms this is a maximum.)
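A numerical confirmation of this MLE (the true rate lambda = 3, the sample size 200, and the seed are illustrative assumptions): the Poisson log-likelihood evaluated at the sample mean beats nearby values of lambda.

```python
import math
import random

# Sketch: numerical check that the sample mean maximizes the Poisson
# log-likelihood (data and seed are illustrative assumptions).
random.seed(2)

def poisson_sample(lam, n):
    """Draw n Poisson(lam) variates by inversion of the CDF."""
    out = []
    for _ in range(n):
        u = random.random()
        k, p = 0, math.exp(-lam)
        cdf = p
        while u > cdf:
            k += 1
            p *= lam / k
            cdf += p
        out.append(k)
    return out

def log_lik(lam, xs):
    return sum(-lam + x * math.log(lam) - math.lgamma(x + 1) for x in xs)

xs = poisson_sample(3.0, 200)
mle = sum(xs) / len(xs)
print(mle, log_lik(mle, xs) > log_lik(mle + 0.1, xs),
      log_lik(mle, xs) > log_lik(mle - 0.1, xs))
```

Because log L is strictly concave in lambda, the sample mean is the unique maximizer, which the two comparisons confirm.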

UNIT-IV

Question 7 (a) [2 Marks]

Define MP and UMP test.

A Most Powerful (MP) test is a test that has the maximum power against a specific alternative hypothesis for a fixed alpha. A Uniformly Most Powerful (UMP) test is an MP test that remains most powerful for all values of the parameter in the alternative hypothesis space.

Question 8 (a)(i) [5 Marks]

State and prove Neyman-Pearson Lemma.

Statement: For testing the simple hypotheses H0: theta = theta0 vs H1: theta = theta1 at level alpha, the most powerful critical region is

w = { x : L(x; theta1) / L(x; theta0) > k },

where the constant k > 0 is chosen so that P(x in w | H0) = alpha.

Proof Strategy: Let w be the likelihood-ratio region above and let w1 be any other critical region of the same size alpha. Inside w, L1 > k*L0; outside w, L1 <= k*L0. Write the difference in powers as

Integral_{w - w1} L1 dx - Integral_{w1 - w} L1 dx.

Bound the first integral below by k * Integral_{w - w1} L0 dx and the second above by k * Integral_{w1 - w} L0 dx. Because both regions have the same size alpha, these two L0-integrals are equal, so the difference in powers is >= 0. Hence w is the most powerful region.

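A numerical sketch of the construction (assumptions for illustration: Bernoulli data with theta0 = 0.3, theta1 = 0.6, n = 20, alpha = 0.05): the likelihood ratio is monotone increasing in y = sum x_i here, so the region {ratio > k} reduces to {y > c} for some cutoff c, found directly from the Binomial tail.

```python
from math import comb

# Sketch: Neyman-Pearson most powerful test for Bernoulli data,
# H0: theta = 0.3 vs H1: theta = 0.6 with n = 20 (assumed example).
def binom_pmf(y, n, p):
    return comb(n, y) * p**y * (1 - p)**(n - y)

n, th0, th1 = 20, 0.3, 0.6
# smallest cutoff c whose region {y > c} has size at most alpha = 0.05
for c in range(n + 1):
    size = sum(binom_pmf(y, n, th0) for y in range(c + 1, n + 1))
    if size <= 0.05:
        break
power = sum(binom_pmf(y, n, th1) for y in range(c + 1, n + 1))
print(c, round(size, 4), round(power, 4))
```

The size is the Type-I error probability of the region, and the power is its probability under H1; by the lemma, no other size-alpha region has higher power.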
UNIT-V

Question 9 (a) [2 Marks]

Define confidence interval and confidence limit.

A Confidence Interval is a range of values, computed from sample statistics, that contains the unknown population parameter with a specified probability (the confidence coefficient, e.g., 0.95). The end points of this interval are called the Confidence Limits.
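A coverage simulation matching this definition (the Normal(5, 2) population, known sigma, n = 30, and the seed are illustrative assumptions): repeated 95% intervals x_bar +/- 1.96 * sigma / sqrt(n) should contain mu in roughly 95% of samples.

```python
import math
import random

# Sketch: coverage of the known-sigma 95% confidence interval for a
# Normal mean (population parameters are assumed for illustration).
random.seed(4)

mu, sigma, n, trials = 5.0, 2.0, 30, 2000
half_width = 1.96 * sigma / math.sqrt(n)

covered = 0
for _ in range(trials):
    x_bar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    lower, upper = x_bar - half_width, x_bar + half_width  # confidence limits
    if lower <= mu <= upper:
        covered += 1
print(covered / trials)  # close to 0.95
```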