University of California, Los Angeles
Department of Statistics

Statistics 100C
Instructor: Nicolas Christou

Exam 1
21 April 2021
This is a 2-hour exam, due by 8:15 pm on Thursday, 22 April.

Name:

Problem 1 (25 points)
Answer the following questions:

a. Consider the single index model for stock $i$, $R_i = \alpha_i + \beta_i R_m + \epsilon_i$. Similarly for stock $j$, $R_j = \alpha_j + \beta_j R_m + \epsilon_j$. It is given that $R_m$ is random with variance $\sigma_m^2$ and that the $\epsilon$'s are independent with variances $\sigma_{\epsilon_i}^2$ and $\sigma_{\epsilon_j}^2$, respectively. Also, the $\epsilon$'s and $R_m$ are uncorrelated. Find $\mathrm{cov}(R_i, R_j)$.

b. Consider the simple regression model $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$, $i = 1, \ldots, n$. The Gauss-Markov conditions hold and $\epsilon_i \sim N(0, \sigma)$. Let $cS_e$ be an unbiased estimator of $\sigma$. Find $c$.

c. Consider a data set with $n = 155$. The error sum of squares of the model $y_i = \beta_0 + \epsilon_i$ is equal to 1912. The $F$ statistic for testing the hypothesis $H_0: \beta_1 = 0$ against $H_a: \beta_1 \neq 0$ in the model $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$ is 800. The distribution of the error term in both models is $N(0, \sigma)$. It is also given that $s_x^2 = 134743$. Find $\hat{\beta}_1$.

d. Consider the simple regression model $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$, $i = 1, \ldots, n$. The Gauss-Markov conditions hold and $\epsilon_i \sim N(0, \sigma)$. Suppose we want to predict the mean of $m$ new observations of $Y$ (denoted by $\bar{Y}_0$) for a given level of the predictor $x = x_0$. Show the entire derivation of a prediction interval for $\bar{Y}_0$.

e. Consider the simple regression model $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$, $i = 1, \ldots, n$. The Gauss-Markov conditions hold. Show that $\hat{\sigma}^2 = \frac{\sum_{i=1}^n e_i^2}{n}$ has smaller mean squared error (MSE) than $S_e^2 = \frac{\sum_{i=1}^n e_i^2}{n-2}$. Note: Let $\hat{\theta}$ be an estimator of a parameter $\theta$. The mean squared error is defined as $\mathrm{MSE}(\hat{\theta}) = \mathrm{var}(\hat{\theta}) + B^2$, where $B = E(\hat{\theta}) - \theta$.
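[Editor's note: as a check on part (c), here is one way to organize the arithmetic. This is a minimal sketch, not the required derivation, and it assumes $s_x^2$ denotes the sample variance of the $x$'s, so that $S_{xx} = (n-1)s_x^2$:]

\[
F = \frac{SSR}{SSE/(n-2)}, \qquad SST = SSR + SSE = 1912, \qquad n - 2 = 153,
\]
\[
800 = \frac{SSR}{(1912 - SSR)/153} \;\Rightarrow\; SSR = \frac{800 \times 1912}{953} \approx 1605.04,
\]
\[
SSR = \hat{\beta}_1^2 S_{xx}, \quad S_{xx} = 154 \times 134743 = 20750422 \;\Rightarrow\; |\hat{\beta}_1| = \sqrt{\frac{1605.04}{20750422}} \approx 0.0088.
\]

Note that the $F$ statistic alone does not identify the sign of $\hat{\beta}_1$.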
Problem 2 (25 points)
Answer the following questions:

a. Consider the simple regression model $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$, $i = 1, \ldots, n$. The Gauss-Markov conditions hold. Find $\mathrm{var}(3\hat{\beta}_0 - 2\hat{\beta}_1)$. You must first express $3\hat{\beta}_0 - 2\hat{\beta}_1$ as a linear combination of $Y_1, \ldots, Y_n$.

b. Consider the simple regression model $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$, $i = 1, \ldots, n$. The Gauss-Markov conditions hold and also $\epsilon_i \sim N(0, \sigma)$, $i = 1, \ldots, n$. Use the likelihood ratio test to test $H_0: \beta_0 + \beta_1 = 3$ against the alternative $H_a: \beta_0 + \beta_1 \neq 3$, and show that it is equivalent to the $F$ statistic.

c. Refer to question (b). Test the hypothesis using the $t$ statistic.

d. Refer to questions (b) and (c). Explain how to compute the power of the test using the $t$ and $F$ distributions. Please include all graphs needed for this question.

e. Consider the simple regression model $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$. The Gauss-Markov conditions hold and also $\epsilon_i \sim N(0, \sigma)$. Suppose we are testing the following hypothesis:
$H_0: \beta_0 = 2$ and $\beta_1 = 3$
$H_a: H_0$ is not true.
Use the extra sum of squares principle to test this hypothesis.

Problem 3 (30 points)
Answer the following questions:

a. Consider the simple regression model $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$. The Gauss-Markov conditions hold. Use the general result $\mathrm{cov}\left(\sum_{i=1}^n a_i Y_i, \sum_{j=1}^n b_j Y_j\right) = \sum_{i=1}^n \sum_{j=1}^n a_i b_j \, \mathrm{cov}(Y_i, Y_j)$ to find $\mathrm{cov}(e_i, e_j)$.

b. Consider the simple regression model $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$. The Gauss-Markov conditions hold and also $\epsilon_i \sim N(0, \sigma)$. In class we showed that the MLE of $\sigma^2$ is $\hat{\sigma}^2 = \frac{\sum_{i=1}^n e_i^2}{n}$, and using the variance of $e_i$ we showed that $E[\hat{\sigma}^2] = \frac{n-2}{n}\sigma^2$. Explain the details we need in order to obtain this result if we instead use $E[\hat{\sigma}^2] = \frac{1}{n}\sum_{i=1}^n E[e_i^2]$ and directly expand $E[e_i^2]$.

c. Consider the simple regression model $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$. The Gauss-Markov conditions hold. Find $\sum_{j=1}^n \mathrm{cov}^2(\hat{Y}_i, \hat{Y}_j)$. Is it related to $\mathrm{var}(\hat{Y}_i)$?

d. Consider the simple regression model $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$, $i = 1, \ldots, n$. The Gauss-Markov conditions hold. Let $\theta = c\beta_1 - \beta_0$, where $c$ is a constant. Find an unbiased estimator $\hat{\theta}$ of $\theta$. For what value of $c$ will the variance of $\hat{\theta}$ be smallest?

e. Consider the simple regression model $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$, $i = 1, \ldots, n$. The Gauss-Markov conditions hold and also $\epsilon_i \sim N(0, \sigma)$. Let $a_0$, $a_1$, and $a^*$ be specified constants, and suppose we want to test the following hypotheses:
$H_0: a_0\beta_0 + a_1\beta_1 = a^*$
$H_a: a_0\beta_0 + a_1\beta_1 \neq a^*$
Derive the $t$ test for testing this hypothesis.

Problem 4 (20 points)
Answer the following questions:

1. What are the normal equations for the simple regression model with an intercept?
2. What are the Gauss-Markov conditions?
3. Explain the Gauss-Markov theorem.
4. Explain what "Best Linear Unbiased Predictor (BLUP)" means.
5. Explain why $\hat{\beta}_1$ follows a normal distribution.
6. What is one result in the centered model that is useful in showing $\frac{(n-2)S_e^2}{\sigma^2} \sim \chi^2_{n-2}$?
7. What is the distribution of $S_e^2$?
8. What general approach for predicting a new $Y$ gives the same predictor obtained using the least squares estimates?
9. Explain the extra sum of squares principle for testing a hypothesis in the simple regression model.
10. Are $\sum_{i=1}^n e_i \hat{y}_i$ and $\sum_{i=1}^n e_i x_i$ both equal to zero for the models with and without an intercept? How about $\sum_{i=1}^n e_i$? (A numeric sketch follows below.)
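[Editor's note: a minimal numeric sketch for item 10, with made-up data; it illustrates, but does not prove, which residual identities hold in each model.]

import numpy as np

# Minimal numeric check for Problem 4, item 10 (made-up data).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=20)
y = 2 + 3 * x + rng.normal(0, 1, size=20)

# Model with intercept: the two normal equations force
# sum(e), sum(e*x), and hence sum(e*yhat) to be zero.
X = np.column_stack([np.ones_like(x), x])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]
yhat = b0 + b1 * x
e = y - yhat
print(e.sum(), (e * x).sum(), (e * yhat).sum())   # all approximately 0

# Model without intercept: the single normal equation forces only
# sum(e*x) = 0, hence sum(e*yhat) = b * sum(e*x) = 0 as well,
# but sum(e) is generally nonzero.
b = (x @ y) / (x @ x)
e0 = y - b * x
print(e0.sum(), (e0 * x).sum(), (e0 * b * x).sum())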