#### Title

Contributions to the Study of Bayes Estimates: The Maximum Likelihood Estimate and Rao's Test

#### Date of Submission

6-22-1982

#### Date of Award

6-22-1983

#### Institute Name (Publisher)

Indian Statistical Institute

#### Document Type

Doctoral Thesis

#### Degree Name

Doctor of Philosophy

#### Subject Name

Computer Science

#### Department

Theoretical Statistics and Mathematics Unit (TSMU-Kolkata)

#### Supervisor

Ghosh, Jayanta Kumar (TSMU-Kolkata; ISI)

#### Abstract (Summary of the Work)

This thesis consists of two parts. In Part I we have investigated problems concerning Bayes estimates, especially the expansion of the integrated risk of the Bayes estimate (also referred to as the Bayes risk or the integrated Bayes risk), the approximation of the Bayes estimate, and the expansion of the posterior distribution. In Part II we have introduced a new optimum property for estimates and have concluded that the maximum likelihood estimate (m.l.e.) enjoys this property; in this part we have also investigated what is known as Rao's conjecture, which says that the test based on the score function is locally more powerful than the likelihood ratio test and Wald's test. Our conclusion is that Rao's conjecture is true when the sizes of the above tests are small.

Consider a sequence X₁, X₂, … of independent and identically distributed (i.i.d.) random variables (r.v.s), X₁ having distribution function (d.f.) F(x, θ), parametrized by θ ∈ Θ, an open subset of ℝ. Let f(x, θ) be the density of F(x, θ) with respect to some sigma-finite measure. The problem of the expansion of the posterior distribution function for a fixed value of the parameter was considered by Johnson ([1967], [1970]). He proved, under certain regularity conditions, that with probability one under θ the suitably centred and scaled posterior distribution possesses an asymptotic expansion in powers of n^(−1/2) (n being the sample size), with the standard normal as the leading term. The remainder after (K + 1) terms is n^(−(K+1)/2) R_{nK}, where R_{nK} is bounded by some constant M, 0 < M < ∞. Choose a constant r > 0. Then, proceeding as in Johnson [1970], under certain regularity conditions which are stronger than Johnson's and depend on r, one can get the following uniform version of the above-mentioned result of Johnson: P(|R_{nK}| < M) = 1 − o(n^(−r)) uniformly in θ ∈ Θ, for some 0 < M < ∞. One can also obtain an expansion for the posterior risk under the squared error loss function using Johnson's other results.
Similar problems were also considered by Gusev ([1975], [1976]). Basically, he was interested in obtaining asymptotic expansions for the Bayes and certain other estimates, which hold good with large probability (vide Gusev [1976]).
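For orientation, the expansion described above has, in Johnson's [1970] formulation, roughly the following shape. This is a hedged sketch only: the exact centring and scaling quantities and the polynomial coefficients γ_j depend on the regularity conditions assumed and are developed in the thesis itself.

```latex
% Sketch of the asymptotic expansion of the posterior d.f. (Johnson-type).
% \hat\theta_n denotes a suitable centring (e.g. the m.l.e.) and
% \hat\sigma_n a suitable scaling; \Phi and \phi are the standard normal
% d.f. and density; the \gamma_j(t) are polynomials in t with
% data-dependent coefficients.
P\!\left(\sqrt{n}\,\hat\sigma_n\,(\theta - \hat\theta_n) \le t
  \,\middle|\, X_1,\dots,X_n\right)
  = \Phi(t)
  + \sum_{j=1}^{K} n^{-j/2}\,\gamma_j(t)\,\phi(t)
  + n^{-(K+1)/2} R_{nK},
```

where, in the uniform version stated in the abstract, |R_{nK}| < M holds with probability 1 − o(n^(−r)) uniformly in θ ∈ Θ for some 0 < M < ∞.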

#### Control Number

ISILib-TH54

#### Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

#### DOI

http://dspace.isical.ac.in:8080/jspui/handle/10263/2146

#### Recommended Citation

Joshi, S. N. Dr., "Contributions to the Study of Bayes Estimates: The Maximum Likelihood Estimate and Rao's Test." (1983). *Doctoral Theses*. 248.

https://digitalcommons.isical.ac.in/doctoral-theses/248

## Comments

ProQuest Collection ID: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:28843026