## Doctoral Theses

### On the Analysis of Some Recursive Equations in Probability.

9-22-2014

9-22-2015

#### Institute Name (Publisher)

Indian Statistical Institute

Doctoral Thesis

#### Degree Name

Doctor of Philosophy

Mathematics

#### Department

Theoretical Statistics and Mathematics Unit (TSMU-Kolkata)

#### Supervisor

Basak, Gopal Krishna (TSMU-Kolkata; ISI)

#### Abstract (Summary of the Work)

This thesis deals with recursive systems used in theoretical and applied probability. Recursive systems are stochastic processes {X_n}_{n≥1} where X_n depends on the earlier X_{n−1} and also on an increment process which is independent of the past of the chain. The simplest example of a recursive system is the random walk, whose properties have been extensively studied. Mathematically a recursive system takes the form X_n = f(X_{n−1}, ε_n), where {ε_n} is the increment/innovation process and f(·, ·) is a function on the product of the state spaces of X_n and ε_n.

We first consider a recursive system called Self-Normalized Sums (SNS) corresponding to a sequence of random variables {X_n} (assumed to be symmetric about zero). Here the sum of the X_i is normalized by an estimate of the p-th absolute moment constructed from the X_i's. The SNS are the most conservative among all normalized sums in the sense that all moments of the SNS exist even if the X_i do not possess any finite moments. We look at the functional version of the SNS, called the Self-Normalized Process (SNP), where the X_i's come from a very general family called the domain of attraction of the stable distribution with stability index α, denoted DA(α), for α ∈ (0, 2] (for the definition see Section 2.2). We show that for any choice of α and p other than 2, the limiting distributions of the SNP are either trivial or do not exist.

We then consider another recursive system, the Adaptive Markov Chain Monte Carlo (AMCMC), which is used extensively in statistical simulation. The motivation behind this method is to construct a Markov Chain (MC) whose stationary distribution (if it exists) is the distribution of interest, also called the target distribution, henceforth denoted ψ(·). One chooses a proposal distribution, i.e. a conditional probability distribution p(·|x), and then, given the present value x_n of the chain, generates a new value y ∼ p(·|x_n).
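The self-normalization described above can be sketched numerically. The sketch below assumes the standard form S_n = (Σ X_i) / (Σ |X_i|^p)^{1/p}; the function name and the choice of Cauchy increments (which lie in DA(1) and have no finite moments) are illustrative, not taken from the thesis:

```python
import numpy as np

def self_normalized_sum(x, p):
    """Sum of the X_i divided by an empirical p-th absolute
    moment estimate built from the same X_i."""
    x = np.asarray(x, dtype=float)
    return x.sum() / (np.abs(x) ** p).sum() ** (1.0 / p)

rng = np.random.default_rng(0)
# Cauchy increments (stability index alpha = 1): no finite moments,
# yet for p = 1 the SNS is bounded by 1 via the triangle inequality.
x = rng.standard_cauchy(10_000)
s = self_normalized_sum(x, p=1)
```

The boundedness for p = 1 is what "most conservative" means in practice: the normalized sum is controlled even when the raw increments are arbitrarily heavy-tailed.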
The new value y is accepted with a certain probability, called the acceptance probability, which depends on the target distribution. It can be verified that the MC constructed in this way has ψ(·) as its stationary distribution. The usual choice of proposal given the value x_n is a distribution symmetric about x_n: for example, one adds an increment ε ∼ N(0, σ²). Therefore one has X_{n+1} = X_n + ε·I(U < α_n), where ε ∼ N(0, σ²), U is a Uniform(0, 1) random variable, and α_n = min{1, ψ(X_n + ε)/ψ(X_n)}. The problem with this choice is that, even though in the long run the process X_n may converge to ψ(·), the convergence may be slow for bad choices of σ². In practice the unknown parameters that determine the speed of convergence are chosen to depend on the present and/or past values of the chain, in addition to some other quantities. This is called Adaptive MCMC (AMCMC) in the statistical literature.
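The random-walk Metropolis recursion with a tuned step size can be sketched as follows. This is a minimal illustration, not the thesis's algorithm: the Robbins–Monro-style adaptation of σ toward a fixed target acceptance rate, the decay exponent 0.6, and the standard-normal target are all assumptions made for the example.

```python
import numpy as np

def adaptive_rw_metropolis(log_psi, x0, n_steps, sigma0=1.0,
                           target_acc=0.44, seed=0):
    """Random-walk Metropolis: X_{n+1} = X_n + eps * I(U < alpha_n),
    eps ~ N(0, sigma^2), alpha_n = min(1, psi(X_n + eps) / psi(X_n)).
    log(sigma) is nudged toward a target acceptance rate (adaptive step)."""
    rng = np.random.default_rng(seed)
    x, log_sigma = x0, np.log(sigma0)
    chain = np.empty(n_steps)
    for n in range(n_steps):
        eps = np.exp(log_sigma) * rng.standard_normal()
        log_alpha = min(0.0, log_psi(x + eps) - log_psi(x))
        if np.log(rng.uniform()) < log_alpha:  # accept with prob alpha_n
            x = x + eps
        # widen sigma when accepting too often, shrink it otherwise;
        # the diminishing step (n+1)^-0.6 freezes the adaptation over time
        log_sigma += (np.exp(log_alpha) - target_acc) / (n + 1) ** 0.6
        chain[n] = x
    return chain

# Example: target psi = standard normal (log density up to a constant)
chain = adaptive_rw_metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20_000)
```

Working with log densities avoids overflow, and diminishing adaptation is one common way to keep the adapted chain ergodic; with a fixed σ the same loop reduces to ordinary random-walk Metropolis.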

ProQuest Collection ID: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:28843302

ISILib-TH438