Date of Submission


Date of Award


Institute Name (Publisher)

Indian Statistical Institute

Document Type

Doctoral Thesis

Degree Name

Doctor of Philosophy

Subject Name

Computer Science


Economic Research Unit (ERU-Kolkata)


Maitra, Ashok (ERU-Kolkata; ISI)

Abstract (Summary of the Work)

The body of methods known as Dynamic Programming (d.p.) was developed by Bellman and was successfully applied by him to solve practical problems in diverse fields (see [2] and the papers cited there). These methods revolve around an intuitively appealing principle which Bellman called the principle of optimality. A general formulation of the d.p. problem was given by Blackwell ([3, 5]); it is narrower in scope than Bellman's but includes many important applications ([2]) and offers a proper framework for asking the many interesting and mathematically sophisticated questions that arise in d.p. Strauch, Brown, Maitra, Furukawa, Dantas, Hinderer and others have investigated some of these questions in detail. Using slightly different terminology from [3], a few other authors like Derman, Veinott, Ross and Fisher have also worked on similar problems.

A general theory of gambling has been developed by Dubins and Savage in their book 'How to gamble if you must' ([9]). Some problems arising from their work have been investigated by Strauch, Sudderth, Freedman, Ornstein and others. Their theory is developed within finitely additive probability theory, and this, in itself, has led to much interest in the study of finitely additive measures.

In Chapter I of this thesis we use a slight modification of the gambling terminology to discuss simultaneously both d.p. and gambling problems over discrete time. After introducing the necessary definitions (section 1), we state the general problem precisely (section 2), give examples (section 3), and prove certain general results regarding the optimal reward functions and optimality equations (sections 4 and 5). In section 6 we collect some facts from measure theory which are used in the next three sections, where we study measurable gambling systems. In many of the sections we frequently make digressions into related questions of interest. Consequently these sections contain results which we do not use later.

In Chapter II we consider problems in which actions are taken at discrete points of time and states vary continuously over time.
Our methods are essentially those of [3]. In sections 3 and 5 we allow both states and actions to vary continuously over time, assuming that the states x(t) (in R^n) satisfy a stochastic differential equation specified by known diffusion coefficients. The resulting problem is essentially a problem of stochastic optimal control. We prove results mainly relating to measurability.

In both chapters, and especially in Chapter II, there are quite a few questions that remain unanswered, and we hope these, together with the large number of interesting open problems listed in the book of Dubins and Savage, will attract further contributions to these areas.
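As an illustrative sketch only (not drawn from the thesis itself), a controlled diffusion of the kind described, with assumed symbols b and sigma for the drift and diffusion coefficients and a(t) for the action, might be written as:

```latex
% Illustrative sketch; the symbols b, \sigma, a, r, T are assumptions,
% not notation taken from the thesis.
\[
  dx(t) = b\bigl(t, x(t), a(t)\bigr)\,dt
        + \sigma\bigl(t, x(t), a(t)\bigr)\,dW(t),
\]
where $W$ is a standard Brownian motion. The stochastic optimal control
problem is then to choose the actions $a(t)$ so as to maximise an expected
reward of the form
\[
  E\!\left[\int_0^T r\bigl(t, x(t), a(t)\bigr)\,dt\right].
\]
```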


ProQuest Collection ID:

Control Number


Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.


Included in

Mathematics Commons