Date of Submission
2-22-1999
Date of Award
2-22-2000
Institute Name (Publisher)
Indian Statistical Institute
Document Type
Doctoral Thesis
Degree Name
Doctor of Philosophy
Subject Name
Mathematics
Department
Theoretical Statistics and Mathematics Unit (TSMU-Kolkata)
Supervisor
Bose, Arup (TSMU-Kolkata; ISI)
Abstract (Summary of the Work)
A typical problem in statistics is as follows: there is some observable data Xn = (X1, ..., Xn), and a parameter of interest θ which is related to the distribution of Xn in such a way that meaningful conclusions about θ can be drawn from Xn. Sometimes the data Xn is observed with the objective parameter θ in mind; at other times the parameter appears while trying to model the observed data. Once the data is observed and the parameter fixed, the questions that have to be addressed are as follows:

(I) How to estimate θ from the data Xn?

(II) Given an estimator Tn = T(Xn) of θ, how good is the estimator, by some standard of goodness?

(III) Given an estimator Tn = T(Xn) of θ, how to obtain confidence sets, test hypotheses and settle other such inferential questions about θ?

Question (I), on estimating θ from Xn, is fundamental to statistics, and even now new ideas about estimation come up as new problems come across the statistical horizon. But the other questions, (II) and (III), are equally important and interesting. A traditional way to answer (II) is to report the variance or mean squared error of Tn, which itself often has to be estimated from the data. The last question may be answered once an estimate of the probability distribution of Tn is obtained. Most often, computing the variance of Tn or its entire distribution is intractable. The classical approach is to use the theory of weak convergence and approximate the distribution of Tn by its weak limit as n → ∞. A major step is to obtain a result of the form an(Tn − θ) → N(0, τ²) in distribution, for some τ² > 0 and some sequence an → ∞. Often τ² is unknown, and one has to use some estimate of it based on Xn.

Example 1.1.1 Suppose the Xi's are independently and identically distributed (hereafter i.i.d.) according to some distribution F on the real line with mean θ and unknown variance σ², and the parameter of interest is the mean θ. Suppose the estimator Tn = ΣXi/n is used. Using the independence and identical distribution of the Xi's, one obtains σ²/n as the variance of Tn, and n^{1/2}(Tn − θ) → N(0, σ²) in distribution. An estimator of σ² is given by Σ(Xi − Tn)²/n.

Example 1.1.2 Suppose (xi, yi), i = 1, ..., n, are observed pairs of real values, and the model is yi = βxi + ei, i = 1, ..., n. Here the parameter of interest is β, and it is assumed that the ei's are i.i.d. with mean 0 and variance σ². The least squares estimator of β is Tn = (Σxi²)⁻¹Σxiyi, and the variance of Tn is (Σxi²)⁻¹σ². Under standard assumptions, (Σxi²)^{1/2}(Tn − β) → N(0, σ²) in distribution. Defining ri = yi − Tnxi, an estimate of σ² is given by Σri²/n − (Σri/n)².

In the above examples the variance or the limiting distribution was easily estimated because the form of the statistic Tn was simple.
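Example 1.1.1 can be illustrated numerically. The following is a minimal sketch, not part of the thesis; the normal distribution, the true mean θ = 5, and the sample size are illustrative assumptions made here only to show that Tn = ΣXi/n and Σ(Xi − Tn)²/n recover the mean and variance.

```python
# Sketch of Example 1.1.1 (illustrative data; not from the thesis).
# Tn = sum(Xi)/n estimates the mean theta; sum((Xi - Tn)^2)/n estimates sigma^2.
import random

random.seed(0)
n = 10_000
theta, sigma = 5.0, 2.0                    # assumed true mean and std. deviation
x = [random.gauss(theta, sigma) for _ in range(n)]

t_n = sum(x) / n                           # estimator Tn of theta
var_hat = sum((xi - t_n) ** 2 for xi in x) / n  # estimator of sigma^2

# With n = 10,000 both estimates land close to theta = 5 and sigma^2 = 4.
```

By the central limit theorem, n^{1/2}(Tn − θ) is approximately N(0, σ²) here, so Tn deviates from θ by roughly σ/n^{1/2} = 0.02.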
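Example 1.1.2 can be sketched the same way. Again this is illustrative only: the true β = 1.5, the error scale, and the design points xi are assumptions made here, while the formulas Tn = (Σxi²)⁻¹Σxiyi, ri = yi − Tnxi, and Σri²/n − (Σri/n)² follow the example.

```python
# Sketch of Example 1.1.2 (synthetic data; not from the thesis).
# Least squares through the origin: Tn = (sum xi^2)^{-1} sum xi*yi.
import random

random.seed(1)
n = 10_000
beta, sigma = 1.5, 0.5                     # assumed true slope and error std. deviation
x = [random.uniform(1.0, 2.0) for _ in range(n)]
y = [beta * xi + random.gauss(0.0, sigma) for xi in x]

t_n = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi ** 2 for xi in x)
r = [yi - t_n * xi for xi, yi in zip(x, y)]          # residuals ri = yi - Tn*xi
var_hat = sum(ri ** 2 for ri in r) / n - (sum(r) / n) ** 2  # estimate of sigma^2
```

The standard error of Tn is (Σxi²)^{−1/2}σ, which is only a few thousandths here, so Tn sits very close to β = 1.5 and var_hat close to σ² = 0.25.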
Control Number
ISILib-TH207
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
DOI
http://dspace.isical.ac.in:8080/jspui/handle/10263/2146
Recommended Citation
Chatterjee, Singdhansu Bhusan Dr., "Generalised Bootstrap Techniques." (2000). Doctoral Theses. 162.
https://digitalcommons.isical.ac.in/doctoral-theses/162
Comments
ProQuest Collection ID: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:28842938