Date of Submission


Date of Award


Institute Name (Publisher)

Indian Statistical Institute

Document Type

Doctoral Thesis

Degree Name

Doctor of Philosophy

Subject Name



Theoretical Statistics and Mathematics Unit (TSMU-Kolkata)


Ghosh, Jayanta Kumar (TSMU-Kolkata; ISI)

Abstract (Summary of the Work)

This thesis consists of two parts dealing with maximum likelihood procedures in two different frameworks. In the first part (Chapters 1, 2 and 3) we consider inference about a parameter which is discrete, or separated in the sense that no f(x, θ) can be obtained as a "limit" of {f(x, θ_i)}, θ_i ≠ θ. (A precise definition of what is meant by a "limit" is given in Chapter 1.) In the second part (Chapters 4 and 5) we consider the usual estimation problem of what may be called, in contrast to Part I, a continuous parameter. We assume we have an exponential family with density f(x, θ) = c(θ) exp{Σ_{j=1}^k π_j(θ) P_j(x)} with respect to some σ-finite measure, whose natural parameters depend on the parameter of interest θ in a smooth way; such families have been called curved exponential families by Efron (Annals of Stat., 1975). Our interest in this part is to establish the "second order efficiency" of the maximum likelihood estimate (m.l.e.). A more detailed explanation of the above problems, as well as a chapterwise summary of the results obtained, is given below.

In Chapter 1 we define a separated parametric space, confining our attention to families which are homogeneous in the sense that the sample space does not depend upon the parameter. This chapter is mainly devoted to the study of asymptotic properties of the m.l.e. in the case of one-parameter separated families. Assuming the loss function to be of the form W(a, θ) = 0 if a = θ and positive otherwise, a theorem regarding the asymptotic approximation of the risk R_n(θ) of the m.l.e. is established, which asserts that under suitable conditions lim (1/n) log R_n(θ) exists and equals log ρ(θ), where 1 − ρ(θ) is the divergence function introduced by Chernoff (Annals of Math. Stat., 1952). The m.l.e. turns out to be asymptotically minimax under appropriate conditions; the results to this effect are contained in two theorems. The minimaxity provides a new proof for the main result of Kraft and Puri (Sankhya, A, 1974).
In general there are maximum weighted likelihood estimates (m.w.l.e.) which are asymptotically better than the m.l.e. θ̂_n, so that θ̂_n is not in general asymptotically admissible. In the case when θ is assumed to be an integer and the loss to be squared error, analogues of the Bhattacharyya–Barankin and Cramér–Rao lower bounds for the variance of an estimate are developed and shown to be equivalent under very mild conditions. Also, for any asymptotically unbiased estimate which attains the Cramér–Rao lower bound asymptotically at some θ, it is shown that its risk tends to infinity at some other point. Thus there is no estimate attaining the Cramér–Rao lower bound asymptotically at all θ. This provides an answer to a question raised by Hammersley (J.R.S.S., Series B, 1950). The chapter is concluded with two examples, namely the normal and the Poisson families with integral means. Most of the results and methods of this chapter have appeared in Ghosh and Subramanyam (Sankhya, Series A, 1975).

The case of two parameters, one discrete and the other continuous, is taken up in Chapter 2. The main result of Chapter 1, namely the expression for the asymptotic risk of the m.l.e., is extended here to this more general set-up. The extension is applied to an example of Cox (1962) about deciding between the Poisson and geometric distributions.
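The flavour of such integer-parameter variance bounds can be sketched with the Hammersley–Chapman–Robbins bound, a finite-difference relative of the Cramér–Rao bound in the spirit of Hammersley's 1950 question; this is an illustration, not the Bhattacharyya–Barankin development of the thesis. For n i.i.d. N(θ, 1) observations with θ restricted to the integers, the likelihood-ratio moment E_θ[(f_{θ+h}/f_θ)^2] equals exp(h^2) per observation, giving the bound below.

```python
import math

def hcr_bound(n, max_shift=5):
    """Hammersley-Chapman-Robbins lower bound on the variance of an
    unbiased estimate of an integer mean theta from n iid N(theta, 1)
    observations:
        Var >= sup over nonzero integer h of  h^2 / (E[L_h^2] - 1),
    where L_h = f(.; theta+h) / f(.; theta).  For N(theta, 1),
    E[L_h^2] = exp(h^2) per observation, hence exp(n * h^2) for n."""
    return max(h * h / (math.exp(n * h * h) - 1.0)
               for h in range(1, max_shift + 1))

for n in (1, 2, 5, 10):
    print(n, hcr_bound(n))
```

Note that the bound decays exponentially in n, consistent with the chapter's theme that for a separated (here, integer) parameter the attainable risk itself goes to zero at an exponential rate rather than the usual 1/n rate.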


ProQuest Collection ID:

Control Number


Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.


Included in

Mathematics Commons