Many testing, estimation and confidence interval procedures discussed in the multivariate statistical literature are based on the assumption that the observation vectors are independent and normally distributed. The main reason for this is that sets of multivariate observations are often, at least approximately, normally distributed. Multivariate normal distributions with a structured mean and a structured covariance matrix arise in many applied areas, such as biology, medicine, sociology, economics and engineering. In many examples the maximum likelihood estimators cannot be obtained explicitly, and one must rely on numerical optimization algorithms. We are therefore interested in other explicit estimators as alternatives to the maximum likelihood estimators.
The mean is often given by some linear or bilinear structure, e.g., in the general Linear Model, the Mixed Linear Model or the Growth Curve Model. Another area of special interest related to these models is Small Area Estimation, where small area statistics are in demand for both cross-sectional and repeated measures data. Small area estimates for repeated measures data may be used by public policy makers for different purposes, such as allocating funds or designing new educational or health programs, and in some cases the interest lies in a particular subgroup of the population.
Patterned covariance matrices
Multivariate data can often be expressed as a vector, a matrix or, more generally, as a higher-order tensor, for example in spatio-temporal models. For such data the covariance matrix is naturally given as a Kronecker product of matrices, each expressing the dependence structure in one mode. Hence, the Kronecker structure can, for example, be used to model dependent multilevel observations. The covariance matrices in some modes can also be assumed to follow some linearly structured covariance matrix, e.g., banded, Toeplitz, circular Toeplitz, a special structure with zeros, or some mixture thereof, depending on the application. Furthermore, the estimator of the covariance matrix is especially important, since inference on the mean parameters strongly depends on the estimated covariance matrix, and the dispersion matrix of the estimator of the mean is a function of it.
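As a minimal numerical sketch of the separable structure described above, the following assembles a Kronecker-product covariance from a Toeplitz temporal factor and a compound-symmetry spatial factor. The specific factor matrices and parameter values are illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy.linalg import toeplitz

# Temporal factor: a 4x4 Toeplitz covariance (AR(1)-type, illustrative rho).
rho = 0.5
Psi = toeplitz(rho ** np.arange(4))

# Spatial factor: a 3x3 compound-symmetry covariance (illustrative values).
Sigma = 0.3 * np.ones((3, 3)) + 0.7 * np.eye(3)

# Separable covariance of the vectorised 3x4 observation matrix X:
# Cov(vec(X)) = Psi (x) Sigma, a 12x12 matrix determined by far fewer
# free parameters than an unstructured 12x12 covariance (78 parameters).
Omega = np.kron(Psi, Sigma)

# Both factors are positive definite, hence so is their Kronecker product.
assert np.all(np.linalg.eigvalsh(Omega) > 0)
print(Omega.shape)  # (12, 12)
```

The parameter saving is the practical point: the patterned factors keep the model estimable even when the full unstructured covariance would have too many parameters for the available sample.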
High-dimensional statistical analysis
Nowadays, when data are more easily collected and stored, high-dimensional analysis is of great interest in the above models. In high-dimensional analysis the sample size is smaller than the dimension of the parameter space. In these cases classical methods fail and new theory must be developed. One part of the high-dimensional research area is Random Matrix Theory, which is a useful tool in, for example, financial mathematics, theoretical physics and wireless communication, as well as in other disciplines. In random matrix theory one analyses the spectral distribution, i.e., the distribution of the eigenvalues of a random matrix.
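The behaviour of the spectral distribution in the high-dimensional regime can be illustrated with a small simulation, assuming standard normal data and an identity population covariance (an illustrative setup, not one prescribed by the text). When the dimension p grows proportionally to the sample size n, the sample eigenvalues spread out over the Marchenko–Pastur support instead of concentrating at the population value 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# High-dimensional regime: dimension p comparable to sample size n,
# with concentration ratio c = p / n = 0.5 (illustrative choice).
n, p = 2000, 1000
X = rng.standard_normal((n, p))

# Eigenvalues of the sample covariance matrix S = X'X / n.
S = X.T @ X / n
eigs = np.linalg.eigvalsh(S)

# For an identity population covariance, the Marchenko-Pastur law says
# the spectrum spreads over [(1 - sqrt(c))^2, (1 + sqrt(c))^2]
# rather than concentrating at 1, as classical fixed-p theory predicts.
c = p / n
lower, upper = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
print(eigs.min(), eigs.max(), (lower, upper))
```

The simulation makes the failure of classical methods concrete: even with twice as many observations as dimensions, the sample eigenvalues range roughly from 0.09 to 2.9 although every population eigenvalue equals 1.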