Math 408 Final Exam Fall 2019

Name: ID#

Instructions: (IMPORTANT: READ THIS BEFORE BEGINNING THE EXAM)
This final exam is a take-home exam. You may work on it from the time it is posted on Blackboard (Friday evening, December 13th) until it is due at 1 p.m. on Monday, December 16th, in hard copy at the Math Department office, 108 KAP. You may use your course textbook (Rice) and your class notes, but nothing else: no other books, not the internet, and certainly no other humans. You must place a box around your final answer to each part of every problem, with the box clearly labeled with the number and part of the problem; if there is no box around the answer and it is not clearly labeled, we will not grade it. You must also include detailed calculations showing how you arrived at your answer. These calculations must be completely consistent with the answer you entered on the answer sheet. They must be clear, well organized, and easy to follow, and there cannot be any cross-outs. First work them out on scrap paper and then carefully and clearly (with NO CROSS-OUTS!) copy them onto the sheets that you will then turn in. If there are any cross-outs, or the calculations are inconsistent with the answer on the answer sheet, or if I in any way cannot follow your reasoning, you will receive zero points for that part of the problem. The sheets must be clearly marked as to which part of which problem they pertain to, and they must be stapled in the correct order. All notation regarding distributions is consistent with the notation used in the Table of Common Distributions attached to the exam.
1. (25)
2. (25)
3. (25)
4. (25)

Total (100)

1. (a) Let $X$ be a geometric random variable with parameter $p$. Assume you have been provided with $n$ independent observations, $X_i$, $i = 1, 2, \ldots, n$. Find the posterior distribution for the Bayesian estimator for $p$, if you assume a prior distribution for $p$ that is uniform on the interval $[0, 1]$. (Hint: The Beta distribution may be of some help to you in this endeavor.)
(b) Use the posterior distribution found in part (a) to compute the Bayesian estimator for the variance of $X$.
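One way to organize parts (a) and (b) (a sketch of mine, not part of the exam, assuming the attached table's geometric pmf $p(1-p)^{x-1}$, $x = 1, 2, \ldots$): since the uniform prior is constant,

\[
\pi(p \mid x_1, \ldots, x_n) \propto \prod_{i=1}^n p(1-p)^{x_i - 1} \cdot 1 = p^{n}(1-p)^{\sum_i x_i - n},
\]

which is the kernel of a $\mathrm{Beta}\!\left(n+1,\ \sum_i x_i - n + 1\right)$ distribution; part (b) then reduces to evaluating $E\!\left[\tfrac{1-p}{p^2} \mid x_1, \ldots, x_n\right]$ under this Beta posterior.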
(c) Again let $X$ be a geometric random variable with parameter $p$ and assume that you have been provided with $n$ independent observations, $X_i$, $i = 1, 2, \ldots, n$, but suppose that this time as an estimator for $p$ I decide to use $\hat{p} = \mathbf{1}_{\{X_1 = 1\}}$. Show how the Rao-Blackwell Theorem yields a better option for an estimator for $p$, $\tilde{p} = E\!\left[\hat{p} \mid \sum_{i=1}^n X_i\right]$. (Hint: It is helpful to note that the Rao-Blackwell theorem will yield the same result if I choose $\hat{p} = \mathbf{1}_{\{X_j = 1\}}$ for any $j = 1, 2, \ldots, n$.)
2. Let $X$ be a discrete random variable with probability mass function given by

\[
f(x; \theta) = \begin{cases} \theta & x = 1 \\ 1 - \theta & x = 2 \end{cases}
\]

where $\theta$ is an unknown parameter that is to be estimated. Suppose also that you have been given $n$ independent observations, $X_i$, $i = 1, 2, \ldots, n$.

(a) What is the likelihood function, $L(x_1, x_2, \ldots, x_n; \theta)$? (Hint: The functions

\[
2 - x = \begin{cases} 1 & x = 1 \\ 0 & x = 2 \end{cases} \qquad \text{and} \qquad x - 1 = \begin{cases} 0 & x = 1 \\ 1 & x = 2 \end{cases}
\]

may be of some help to you.)

(b) Show that $T = \sum_{i=1}^n X_i$ is a sufficient statistic for $\theta$.

(c) Find the method of moments estimator, $\hat{\theta}_{MOM}$, for $\theta$.

(d) Is the estimator found in part (c) biased or unbiased? Justify your answer.

(e) Find the maximum likelihood estimator, $\hat{\theta}_{MLE}$, for $\theta$.

(f) Is the estimator found in part (e) biased or unbiased? Justify your answer.

(g) Find the posterior distribution for $\theta$, and the Bayesian estimator for $\theta$, $\hat{\theta}_{B}$, if you assume a prior distribution for $\theta$ that is uniform on the interval $[0, 1]$.

(h) Show that the estimator found in part (g) is a weighted average of the MLE found in part (e) and the mean of the uniform distribution on $[0, 1]$, the prior distribution assumed for $\theta$.

(i) Is the Bayesian estimator, $\hat{\theta}_{B}$, biased or unbiased? Justify your answer.

(j) Suppose you are given the independent data $x_1 = 1$, $x_2 = 2$, and $x_3 = 2$ (i.e., in this case $n = 3$). What are the values of $T$, $\hat{\theta}_{MOM}$, $\hat{\theta}_{MLE}$, and $\hat{\theta}_{B}$ in this case?
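A hedged check (my own sketch, not the exam's) of the part (j) values under the reconstruction above, where the likelihood is $\theta^{\sum_i (2 - x_i)}(1-\theta)^{\sum_i (x_i - 1)}$ and the uniform prior makes the posterior a Beta distribution:

    import numpy as np

    x = np.array([1, 2, 2])      # the data given in part (j)
    n = len(x)

    ones = np.sum(2 - x)         # count of observations equal to 1
    twos = np.sum(x - 1)         # count of observations equal to 2

    T = x.sum()                              # sufficient statistic
    theta_mom = 2 - x.mean()                 # from E[X] = 2 - theta
    theta_mle = ones / n                     # maximizes theta^ones * (1-theta)^twos
    theta_bayes = (ones + 1) / (n + 2)       # mean of the Beta(ones+1, twos+1) posterior

    print(T, theta_mom, theta_mle, theta_bayes)   # 6, 1/3, 1/3, 2/5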

3. Consider the following sets of 10 independent measurements made by 7 different laboratories:

           Y_1    Y_2    Y_3    Y_4    Y_5    Y_6    Y_7
    1      4.13   3.86   4.00   3.88   4.02   4.02   4
    2      4.07   3.85   4.02   3.88   3.95   3.86   4.
    3      4.04   4.08   4.01   3.91   4.02   3.96   4.
    4      4.07   4.11   4.01   3.95   3.89   3.97   4.
    5      4.05   4.08   4.04   3.92   3.91   4.00   4.
    6      4.04   4.01   3.99   3.97   4.01   3.82   3.
    7      4.02   4.02   4.03   3.92   3.89   3.98   3.
    8      4.06   4.04   3.97   3.90   3.89   3.99   3.
    9      4.10   3.97   3.98   3.97   3.99   4.02   4.
    10     4.04   3.95   3.98   3.90   4.00   3.93   4.
    mean   4.062  3.997  4.003  3.920  3.957  3.955  3.

In the statistical analyses you are asked to do below, wherever required, you may assume normality. Also, here are some related quantities that may be of some use in solving the problems that follow.

Squared deviations $(Y_{ij} - \bar{Y}_j)^2$, $j = 1, \ldots, 7$:

           j=1     j=2     j=3     j=4     j=5     j=6     j=7
    1      0.0046  0.0188  0       0.0016  0.004   0.0042  0
    2      0.0001  0.0216  0.0003  0.0016  0       0.009   0.
    3      0.0005  0.0069  0       0.0001  0.004   0       0.
    4      0.0001  0.0128  0       0.0009  0.0045  0.0002  0.
    5      0.0001  0.0069  0.0014  0       0.0022  0.002   0.
    6      0.0005  0.0002  0.0002  0.0025  0.0028  0.0182  0.
    7      0.0018  0.0005  0.0007  0       0.0045  0.0006  0.
    8      0       0.0018  0.0011  0.0004  0.0045  0.0012  0.
    9      0.0014  0.0007  0.0005  0.0025  0.0011  0.0042  0.
    10     0.0005  0.0022  0.0005  0.0004  0.0018  0.0006  0.
    Total  0.0096  0.0724  0.0048  0.01    0.0294  0.0404  0.
(a) Find a 90% confidence interval for the difference between the two means $\mu_1$ and $\mu_4$.

(b) Test whether or not $\mu_1$ and $\mu_4$ are the same at the 5% level of significance and provide the best estimate for the p-value you can using the tables provided.

(c) Now use ANOVA techniques to test whether or not $\mu_1$ and $\mu_4$ are the same at the 5% level of significance and provide the best estimate for the p-value you can using the tables provided.
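A hedged numerical sketch of the computations behind parts (a) through (c) (mine, using scipy in place of the attached tables), applied to the lab 1 and lab 4 columns of the data table:

    import numpy as np
    from scipy import stats

    y1 = np.array([4.13, 4.07, 4.04, 4.07, 4.05, 4.04, 4.02, 4.06, 4.10, 4.04])
    y4 = np.array([3.88, 3.88, 3.91, 3.95, 3.92, 3.97, 3.92, 3.90, 3.97, 3.90])

    # (a) 90% CI for mu_1 - mu_4 using the pooled two-sample t with 18 df.
    n = len(y1)
    sp2 = ((n - 1) * y1.var(ddof=1) + (n - 1) * y4.var(ddof=1)) / (2 * n - 2)
    se = np.sqrt(sp2 * (1 / n + 1 / n))
    diff = y1.mean() - y4.mean()
    tcrit = stats.t.ppf(0.95, df=2 * n - 2)
    print(diff - tcrit * se, diff + tcrit * se)

    # (b) Pooled two-sample t test of H0: mu_1 = mu_4.
    print(stats.ttest_ind(y1, y4))     # t statistic and two-sided p-value

    # (c) One-way ANOVA on the same two groups; the F statistic equals t^2.
    print(stats.f_oneway(y1, y4))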
(d) Next let $Y_i = Y_{i,1}$ and $Y_{10+i} = Y_{i,4}$, $i = 1, 2, \ldots, 10$, and let $x_i = 0$ and $x_{10+i} = 1$, $i = 1, 2, \ldots, 10$, and use linear regression and least squares estimation to fit a linear model of the form $Y = \beta_0 + \beta_1 x + \varepsilon$ to the 20 data points $\{(x_i, Y_i)\}_{i=1}^{20}$ defined above, and show that $\hat{\beta}_0 = \bar{Y}_1$ and that $\hat{\beta}_1 = \bar{Y}_4 - \bar{Y}_1$.

(e) Assume normality and test whether or not $\beta_1$ is significant at the 5% level, and provide a 95% confidence interval for the value of $\beta_1$.
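A hedged sketch (again mine, with scipy as a stand-in for hand computation) of the dummy-variable regression in parts (d) and (e); y1 and y4 are the same arrays as in the previous snippet:

    import numpy as np
    from scipy import stats

    y1 = np.array([4.13, 4.07, 4.04, 4.07, 4.05, 4.04, 4.02, 4.06, 4.10, 4.04])
    y4 = np.array([3.88, 3.88, 3.91, 3.95, 3.92, 3.97, 3.92, 3.90, 3.97, 3.90])

    # Stack the two labs into 20 points with a 0/1 dummy covariate.
    y = np.concatenate([y1, y4])
    x = np.concatenate([np.zeros(10), np.ones(10)])

    res = stats.linregress(x, y)
    # The intercept recovers the lab-1 mean; the slope recovers the difference in means.
    print(res.intercept, y1.mean())            # beta0_hat = Ybar_1
    print(res.slope, y4.mean() - y1.mean())    # beta1_hat = Ybar_4 - Ybar_1
    print(res.pvalue)                          # test of H0: beta_1 = 0

    # 95% CI for beta_1 from the slope standard error with n - 2 = 18 df.
    tcrit = stats.t.ppf(0.975, df=len(y) - 2)
    print(res.slope - tcrit * res.stderr, res.slope + tcrit * res.stderr)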

(f) Recalling that in this case $\bar{Y} = \frac{\bar{Y}_1 + \bar{Y}_4}{2}$, establish the identity

\[
(\bar{Y}_4 - \bar{Y}_1)^2 = 2\left((\bar{Y}_1 - \bar{Y})^2 + (\bar{Y}_4 - \bar{Y})^2\right). \tag{1}
\]

(g) Show that the tests performed in parts (b) and (c) are in fact exactly the same. Hint: Consider the computation

\[
2P\left(t_{18} > \frac{|\bar{Y}_4 - \bar{Y}_1|}{s_p\sqrt{\frac{1}{10} + \frac{1}{10}}}\right) = P\left(t_{18}^2 > \left(\frac{|\bar{Y}_4 - \bar{Y}_1|}{s_p\sqrt{\frac{1}{10} + \frac{1}{10}}}\right)^2\right) = \cdots
\]

Also, the identity established in part (f) may be helpful.
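A hedged numerical illustration (not required by the exam) of the fact behind part (g): the square of a $t_{18}$ random variable is $F_{1,18}$, so the two-sided t test and the ANOVA F test produce identical p-values:

    from scipy import stats

    # Hypothetical value of the two-sample t statistic, just to exhibit the identity.
    t_obs = 2.5
    p_t = 2 * stats.t.sf(t_obs, df=18)            # two-sided pooled t test p-value
    p_f = stats.f.sf(t_obs ** 2, dfn=1, dfd=18)   # one-way ANOVA p-value at F = t^2
    print(p_t, p_f)                               # the two agree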
4. Let $\{X_i\}_{i=1}^n$ be an i.i.d. random sample from the exponential distribution with $E[X_i] = \theta$.

(a) Find the Cramer-Rao lower bound for the variance of an estimator of $\theta$. Is this bound attained?

(b) Show that $\frac{n}{n+1}\bar{X}^2$ is an unbiased estimator for $\theta^2$. Is this the MLE for $\theta^2$? Why or why not?

(c) Find the Cramer-Rao lower bound for the variance of an estimator of $\theta^2$.
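A hedged Monte Carlo sanity check (mine) of the claim in part (b), using the mean-$\theta$ parameterization $f(x) = \frac{1}{\theta}e^{-x/\theta}$: since $E[\bar{X}^2] = \theta^2/n + \theta^2 = \frac{n+1}{n}\theta^2$, the estimator $\frac{n}{n+1}\bar{X}^2$ should average to $\theta^2$:

    import numpy as np

    rng = np.random.default_rng(1)
    theta, n, reps = 2.0, 5, 200000

    # reps samples of size n from the exponential distribution with mean theta.
    X = rng.exponential(theta, size=(reps, n))
    xbar = X.mean(axis=1)

    est = (n / (n + 1)) * xbar ** 2
    print(est.mean(), theta ** 2)   # both close to 4.0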

Table of Common Distributions
(taken from Statistical Inference by Casella and Berger)

Discrete Distributions

Bernoulli($p$)
  pmf: $P(X=x) = p^x(1-p)^{1-x}$, $x = 0, 1$; $p \in (0,1)$
  mean: $p$    variance: $p(1-p)$    mgf: $(1-p) + pe^t$

Beta-binomial($n, \alpha, \beta$)
  pmf: $P(X=x) = \binom{n}{x}\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\frac{\Gamma(x+\alpha)\Gamma(n-x+\beta)}{\Gamma(\alpha+\beta+n)}$
  mean: $\frac{n\alpha}{\alpha+\beta}$    variance: $\frac{n\alpha\beta(\alpha+\beta+n)}{(\alpha+\beta)^2(\alpha+\beta+1)}$
  Notes: If $X \mid P$ is binomial($n, P$) and $P$ is beta($\alpha, \beta$), then $X$ is beta-binomial($n, \alpha, \beta$).

Binomial($n, p$)
  pmf: $P(X=x) = \binom{n}{x}p^x(1-p)^{n-x}$, $x = 0, 1, \ldots, n$
  mean: $np$    variance: $np(1-p)$    mgf: $[(1-p) + pe^t]^n$

Discrete Uniform($N$)
  pmf: $P(X=x) = \frac{1}{N}$, $x = 1, 2, \ldots, N$
  mean: $\frac{N+1}{2}$    variance: $\frac{(N+1)(N-1)}{12}$    mgf: $\frac{1}{N}\sum_{i=1}^N e^{it}$

Geometric($p$)
  pmf: $P(X=x) = p(1-p)^{x-1}$, $x = 1, 2, \ldots$; $p \in (0,1)$
  mean: $\frac{1}{p}$    variance: $\frac{1-p}{p^2}$    mgf: $\frac{pe^t}{1-(1-p)e^t}$
  Notes: $Y = X - 1$ is negative binomial($1, p$). The distribution is memoryless: $P(X > s \mid X > t) = P(X > s - t)$.

Hypergeometric($N, M, K$)
  pmf: $P(X=x) = \frac{\binom{M}{x}\binom{N-M}{K-x}}{\binom{N}{K}}$, $x = 0, 1, \ldots, K$; $M - (N - K) \le x \le M$; $N, M, K > 0$
  mean: $\frac{KM}{N}$    variance: $\frac{KM}{N}\frac{(N-M)(N-K)}{N(N-1)}$

Negative Binomial($r, p$)
  pmf: $P(X=x) = \binom{r+x-1}{x}p^r(1-p)^x$, $x = 0, 1, \ldots$; $p \in (0,1)$
  mean: $\frac{r(1-p)}{p}$    variance: $\frac{r(1-p)}{p^2}$    mgf: $\left(\frac{p}{1-(1-p)e^t}\right)^r$
  Notes: An alternative form of the pmf, in terms of $Y = X + r$, is $P(Y=y) = \binom{y-1}{r-1}p^r(1-p)^{y-r}$, $y = r, r+1, \ldots$

Poisson($\lambda$)
  pmf: $P(X=x) = \frac{e^{-\lambda}\lambda^x}{x!}$, $x = 0, 1, \ldots$; $\lambda \ge 0$
  mean: $\lambda$    variance: $\lambda$    mgf: $e^{\lambda(e^t - 1)}$
  Notes: If $Y$ is gamma($\alpha, \beta$), $X$ is Poisson($\frac{x}{\beta}$), and $\alpha$ is an integer, then $P(X \ge \alpha) = P(Y \le x)$.

Continuous Distributions

Beta($\alpha, \beta$)
  pdf: $f(x) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}x^{\alpha-1}(1-x)^{\beta-1}$, $0 \le x \le 1$; $\alpha, \beta > 0$
  mean: $\frac{\alpha}{\alpha+\beta}$    variance: $\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$
  mgf: $1 + \sum_{k=1}^{\infty}\left(\prod_{r=0}^{k-1}\frac{\alpha+r}{\alpha+\beta+r}\right)\frac{t^k}{k!}$

Cauchy($\theta, \sigma$)
  pdf: $f(x) = \frac{1}{\pi\sigma}\frac{1}{1+\left(\frac{x-\theta}{\sigma}\right)^2}$; $\sigma > 0$
  mean, variance, mgf: do not exist
  Notes: Special case of Student's t with 1 degree of freedom. Also, if $X, Y$ are iid n($0, 1$), then $X/Y$ is Cauchy.

Chi-squared($p$)
  pdf: $f(x) = \frac{1}{\Gamma(p/2)2^{p/2}}x^{(p/2)-1}e^{-x/2}$, $x > 0$
  mean: $p$    variance: $2p$    mgf: $\left(\frac{1}{1-2t}\right)^{p/2}$, $t < \frac{1}{2}$
  Notes: Gamma($\frac{p}{2}, 2$).

Double Exponential($\mu, \sigma$)
  pdf: $f(x) = \frac{1}{2\sigma}e^{-|x-\mu|/\sigma}$; $\sigma > 0$
  mean: $\mu$    variance: $2\sigma^2$    mgf: $\frac{e^{\mu t}}{1-(\sigma t)^2}$, $|t| < \frac{1}{\sigma}$

Exponential($\beta$)
  pdf: $f(x) = \frac{1}{\beta}e^{-x/\beta}$, $x \ge 0$; $\beta > 0$
  mean: $\beta$    variance: $\beta^2$    mgf: $\frac{1}{1-\beta t}$, $t < \frac{1}{\beta}$
  Notes: Gamma($1, \beta$). Memoryless. $Y = X^{1/\gamma}$ is Weibull. $Y = \sqrt{2X/\beta}$ is Rayleigh. $Y = \alpha - \gamma\log(X/\beta)$ is Gumbel.

F($\nu_1, \nu_2$)
  pdf: $f(x) = \frac{\Gamma\left(\frac{\nu_1+\nu_2}{2}\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\left(\frac{\nu_1}{\nu_2}\right)^{\nu_1/2}\frac{x^{(\nu_1/2)-1}}{\left(1+\frac{\nu_1}{\nu_2}x\right)^{(\nu_1+\nu_2)/2}}$, $x > 0$
  mean: $\frac{\nu_2}{\nu_2-2}$, $\nu_2 > 2$    variance: $2\left(\frac{\nu_2}{\nu_2-2}\right)^2\frac{\nu_1+\nu_2-2}{\nu_1(\nu_2-4)}$, $\nu_2 > 4$
  moments: $EX^n = \frac{\Gamma\left(\frac{\nu_1+2n}{2}\right)\Gamma\left(\frac{\nu_2-2n}{2}\right)}{\Gamma\left(\frac{\nu_1}{2}\right)\Gamma\left(\frac{\nu_2}{2}\right)}\left(\frac{\nu_2}{\nu_1}\right)^n$, $n < \frac{\nu_2}{2}$
  Notes: $F_{\nu_1,\nu_2} = \frac{\chi^2_{\nu_1}/\nu_1}{\chi^2_{\nu_2}/\nu_2}$, where the $\chi^2$s are independent. $F_{1,\nu} = t_\nu^2$.

Gamma($\alpha, \beta$)
  pdf: $f(x) = \frac{1}{\Gamma(\alpha)\beta^\alpha}x^{\alpha-1}e^{-x/\beta}$, $x > 0$; $\alpha, \beta > 0$
  mean: $\alpha\beta$    variance: $\alpha\beta^2$    mgf: $\left(\frac{1}{1-\beta t}\right)^\alpha$, $t < \frac{1}{\beta}$
  Notes: Some special cases are exponential ($\alpha = 1$) and chi-squared ($\alpha = \frac{p}{2}$, $\beta = 2$). If $\alpha = \frac{3}{2}$, $Y = \sqrt{X/\beta}$ is Maxwell. $Y = \frac{1}{X}$ is inverted gamma.

Logistic($\mu, \beta$)
  pdf: $f(x) = \frac{1}{\beta}\frac{e^{-(x-\mu)/\beta}}{\left[1+e^{-(x-\mu)/\beta}\right]^2}$; $\beta > 0$
  mean: $\mu$    variance: $\frac{\pi^2\beta^2}{3}$    mgf: $e^{\mu t}\Gamma(1+\beta t)\Gamma(1-\beta t)$, $|t| < \frac{1}{\beta}$
  Notes: The cdf is $F(x \mid \mu, \beta) = \frac{1}{1+e^{-(x-\mu)/\beta}}$.

Lognormal($\mu, \sigma^2$)
  pdf: $f(x) = \frac{1}{\sqrt{2\pi}\,\sigma x}e^{-(\log x - \mu)^2/(2\sigma^2)}$, $x > 0$; $\sigma > 0$
  mean: $e^{\mu+\sigma^2/2}$    variance: $e^{2(\mu+\sigma^2)} - e^{2\mu+\sigma^2}$    moments: $EX^n = e^{n\mu + n^2\sigma^2/2}$

Normal($\mu, \sigma^2$)
  pdf: $f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}e^{-(x-\mu)^2/(2\sigma^2)}$; $\sigma > 0$
  mean: $\mu$    variance: $\sigma^2$    mgf: $e^{\mu t + \sigma^2 t^2/2}$

Pareto($\alpha, \beta$)
  pdf: $f(x) = \frac{\beta\alpha^\beta}{x^{\beta+1}}$, $x > \alpha$; $\alpha, \beta > 0$
  mean: $\frac{\beta\alpha}{\beta-1}$, $\beta > 1$    variance: $\frac{\beta\alpha^2}{(\beta-1)^2(\beta-2)}$, $\beta > 2$    mgf: does not exist

t($\nu$)
  pdf: $f(x) = \frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)}\frac{1}{\sqrt{\nu\pi}}\frac{1}{\left(1+\frac{x^2}{\nu}\right)^{(\nu+1)/2}}$
  mean: $0$, $\nu > 1$    variance: $\frac{\nu}{\nu-2}$, $\nu > 2$
  moments: $EX^n = \frac{\Gamma\left(\frac{n+1}{2}\right)\Gamma\left(\frac{\nu-n}{2}\right)}{\sqrt{\pi}\,\Gamma\left(\frac{\nu}{2}\right)}\nu^{n/2}$, $n < \nu$, $n$ even
  Notes: $t_\nu^2 = F_{1,\nu}$.

Uniform($a, b$)
  pdf: $f(x) = \frac{1}{b-a}$, $a \le x \le b$
  mean: $\frac{b+a}{2}$    variance: $\frac{(b-a)^2}{12}$    mgf: $\frac{e^{bt}-e^{at}}{(b-a)t}$
  Notes: If $a = 0$, $b = 1$, this is a special case of the beta ($\alpha = \beta = 1$).

Weibull($\gamma, \beta$)
  pdf: $f(x) = \frac{\gamma}{\beta}x^{\gamma-1}e^{-x^\gamma/\beta}$, $x > 0$; $\gamma, \beta > 0$
  mean: $\beta^{1/\gamma}\Gamma\left(1+\frac{1}{\gamma}\right)$    variance: $\beta^{2/\gamma}\left[\Gamma\left(1+\frac{2}{\gamma}\right) - \Gamma\left(1+\frac{1}{\gamma}\right)^2\right]$
  moments: $EX^n = \beta^{n/\gamma}\Gamma\left(1+\frac{n}{\gamma}\right)$
  Notes: The mgf exists only for $\gamma \ge 1$.