Date of Submission


Date of Award


Institute Name (Publisher)

Indian Statistical Institute

Document Type

Doctoral Thesis

Degree Name

Doctor of Philosophy

Subject Name



Theoretical Statistics and Mathematics Unit (TSMU-Kolkata)


Ghosh, Anil Kumar (TSMU-Kolkata; ISI)

Abstract (Summary of the Work)

Measures of dependence among several random vectors and associated tests of independence play a major role in different statistical applications. Blind source separation or independent component analysis (see, e.g., Hyvärinen et al., 2001; Shen et al., 2009), feature selection and feature extraction (see, e.g., Li et al., 2012), detection of serial correlation in time series (see, e.g., Ghoudi et al., 2001) and finding causal relationships among variables (see, e.g., Chakraborty and Zhang, 2019) are some examples of their widespread applications. Tests of independence have vast applications in other areas of science as well. For instance, to characterize the genetic mechanisms of a complex disease, a biologist or a medical scientist often needs to carry out tests of independence to investigate the causal relationship among multiple quantitative traits and to test for their association with disease genes (see, e.g., Hsieh et al., 2014). A proper understanding of the structure of dependence among several groups of variables often helps a psychologist or social scientist to construct a meaningful structural equation model (see, e.g., De Jonge et al., 2001) for data analysis. In order to develop a micro-economic model for health care and health insurance, an economist needs to study the dependence (or independence) between several measures of health-care utilization and the insurance status of the household for a variety of socio-economic and health-status variables (see, e.g., Cameron et al., 1988). In this thesis, we deal with this problem of testing independence among several random vectors. This is a well-known problem in the statistics and machine learning literature, and several methods are available for it. However, most of these existing methods deal with two random vectors (or random variables) only. Moreover, instead of testing for independence, many of them only test for uncorrelatedness between two vectors.
Nowadays, we often deal with data sets whose dimension is larger than the sample size. Many existing tests cannot be used in such situations. Keeping all this in mind, in this thesis we propose and investigate some methods that can be used for testing independence among several random vectors of arbitrary dimensions. Later we shall see that these proposed tests can also be used for testing independence among several random functions or random elements taking values in infinite-dimensional Banach or Hilbert spaces. Consider a d-dimensional random vector X = (X^(1), X^(2), ..., X^(p)) with sub-vectors X^(1), X^(2), ..., X^(p) of dimensions d_1, d_2, ..., d_p, respectively (d_1 + d_2 + ... + d_p = d). Suppose that we have n independent observations x_1, x_2, ..., x_n on X, and based on these observations, we need to construct a test of independence among the sub-vectors X^(1), X^(2), ..., X^(p). This is a well-studied problem in statistics, especially for p = 2 and d_1 = d_2 = 1. Pearson's product-moment correlation coefficient is arguably the simplest and most popular measure of association between two random variables, and one can easily construct a test of independence based on this measure (see, e.g., Anderson, 2003).
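As an illustration of the classical baseline described above (and not of the tests proposed in the thesis), a minimal sketch of an independence test based on Pearson's correlation, calibrated by permutation rather than by the usual t-approximation, might look as follows; the function name and the simulated data are hypothetical.

```python
import numpy as np

def perm_indep_test(x, y, n_perm=500, seed=0):
    """Permutation test of independence between two univariate samples,
    using |Pearson correlation| as the test statistic.

    Permuting y breaks any dependence on x, so the permuted statistics
    approximate the null distribution of the statistic under independence.
    """
    rng = np.random.default_rng(seed)
    r_obs = abs(np.corrcoef(x, y)[0, 1])
    exceed = sum(
        abs(np.corrcoef(x, rng.permutation(y))[0, 1]) >= r_obs
        for _ in range(n_perm)
    )
    # Add-one correction keeps the p-value strictly positive.
    return r_obs, (exceed + 1) / (n_perm + 1)

# Hypothetical simulated data: y depends on x by construction.
rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = x + 0.5 * rng.normal(size=n)
r_obs, p_value = perm_indep_test(x, y)
```

Note that, as the abstract points out, such a correlation-based statistic detects only linear (un)correlatedness between two univariate variables; it is exactly this limitation that motivates tests of full independence among several random vectors.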


ProQuest Collection ID

Control Number


Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.


Included in

Mathematics Commons