Date of Submission

9-22-2014

Date of Award

9-22-2015

Institute Name (Publisher)

Indian Statistical Institute

Document Type

Doctoral Thesis

Degree Name

Doctor of Philosophy

Subject Name

Mathematics

Department

Theoretical Statistics and Mathematics Unit (TSMU-Kolkata)

Supervisor

Basak, Gopal Krishna (TSMU-Kolkata; ISI)

Abstract (Summary of the Work)

This thesis deals with recursive systems used in theoretical and applied probability. Recursive systems are stochastic processes {Xn}n≥1 in which Xn depends on the earlier value Xn−1 and also on an increment process that is uncorrelated with the past of the process. The simplest example of a recursive system is the random walk, whose properties have been extensively studied. Mathematically, a recursive system takes the form Xn = f(Xn−1, εn), where {εn} is the increment/innovation process and f(·, ·) is a function on the product space of the state and the innovation.

We first consider a recursive system called the Self-Normalized Sum (SNS) corresponding to a sequence of random variables {Xn} (assumed to be symmetric about zero). Here the sum of the Xi is normalized by an estimate of the p-th absolute moment constructed from the Xi's. The SNS is the most conservative among all normalized sums in the sense that all moments of the SNS exist even if the Xi do not possess any finite moments. We look at the functional version of the SNS, called the Self-Normalized Process (SNP), where the Xi's come from a very general family called the domain of attraction of the stable distribution with stability index α, denoted DA(α), for α ∈ (0, 2] (for the definition see Section 2.2). We show that for any choice of α and p other than 2, the limiting distributions of the SNP are either trivial or do not exist.

We next consider another recursive system, the Adaptive Markov Chain Monte Carlo (AMCMC), which is used extensively in statistical simulation. The motivation behind this method is to construct a Markov Chain (MC) whose stationary distribution (if it exists) is the distribution of interest, also called the target distribution, henceforth denoted ψ(·). One chooses a proposal distribution, which is a conditional probability distribution, say p(·|x), and then, given the present value xn of the chain, generates a new value y ∼ p(·|xn).
The new value y is accepted with a certain probability, called the acceptance probability, which depends on the target distribution. It can be verified that the MC constructed in this way has ψ(·) as its stationary distribution. The usual choice of proposal given the value xn is a distribution symmetric about xn, for example xn plus a Normal increment with mean 0 and variance σ². One then has Xn+1 = Xn + εn I(U < αn), where εn ∼ N(0, σ²), U is a Uniform(0, 1) random variable, and αn = min{1, ψ(Xn + εn)/ψ(Xn)}. The problem with this choice is that, even though in the long run the process Xn may converge to ψ(·), the convergence may be slow for bad choices of σ². In practice, the choice of the unknown parameters that determine the speed of convergence is made to depend on the present and/or past values of the chain, in addition to some other quantities. This is called Adaptive MCMC (AMCMC) in the statistical literature.
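The symmetric random-walk Metropolis update described above can be sketched in a few lines. This is a minimal illustration, not code from the thesis: the target ψ is taken here, purely as an assumption for the example, to be an (unnormalized) standard normal density, and the function name `rw_metropolis` is hypothetical.

```python
import math
import random

def rw_metropolis(psi, x0, sigma, n_steps, seed=0):
    """Random-walk Metropolis chain targeting the (unnormalized) density psi.

    Each step proposes y = x + eps with eps ~ N(0, sigma^2) and accepts it
    with probability alpha = min(1, psi(y) / psi(x)), i.e. the update
    X_{n+1} = X_n + eps * I(U < alpha) with U ~ Uniform(0, 1).
    """
    rng = random.Random(seed)
    chain = [x0]
    x = x0
    for _ in range(n_steps):
        eps = rng.gauss(0.0, sigma)        # symmetric proposal increment
        y = x + eps
        alpha = min(1.0, psi(y) / psi(x))  # acceptance probability
        if rng.random() < alpha:           # U < alpha: accept the move
            x = y
        chain.append(x)
    return chain

# Illustrative target (an assumption for this sketch): standard normal
# density up to a normalizing constant.
psi = lambda x: math.exp(-0.5 * x * x)
chain = rw_metropolis(psi, x0=0.0, sigma=1.0, n_steps=5000)
mean = sum(chain) / len(chain)  # should settle near 0 once the chain mixes
```

The quality of the output depends on σ², exactly as the abstract notes: a very small or very large σ² makes the chain mix slowly, which is the motivation for tuning it adaptively.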

Comments

ProQuest Collection ID: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:28843302

Control Number

ISILib-TH438

Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.

DOI

http://dspace.isical.ac.in:8080/jspui/handle/10263/2146

Included in

Mathematics Commons
