DTP 97-59, SPhT/98-001

Correlation functions of eigenvalues of multi-matrix models,

and the limit of a time dependent matrix

B. Eynard (e-mail: Bertrand.E)

Department of Mathematical Sciences

University of Durham, Science Laboratories

South Road, DURHAM DH1 3HP, U.K.

Abstract:

The universality of correlation functions of eigenvalues of large random matrices has been observed in various physical systems, and proved in some particular cases, such as the hermitian one-matrix model with a polynomial potential. Here, we consider the more difficult case of a one-dimensional chain of matrices with nearest-neighbour couplings and polynomial potentials.

An asymptotic expression for the orthogonal polynomials allows one to find new results for the correlations of eigenvalues of different matrices of the chain.

Eventually, we consider the limit of an infinite chain of matrices, which can be interpreted as a time-dependent one-matrix model, and give the correlation functions of eigenvalues at different times.

PACS: 05.40.+j ; 05.45.+b

Keywords: Random matrices, Multi-matrix model, Time dependent correlations, Universal correlations, Orthogonal polynomials.

11/97, for Journal of Physics A

It has long been observed, experimentally and numerically [1,2,3], that the distribution of energy levels of disordered systems is universal in some regime. For instance, the connected correlation function of two levels separated by a small number of other levels does not depend on the details of the system, only on its symmetries, while the density of levels depends strongly on the specific details of the system. It was thus conjectured that the correlation functions can be obtained from a gaussian random matrix model (the matrix might be the hamiltonian, the scattering matrix or the transfer matrix). Such a conjecture would be a kind of central-limit theorem for large random matrices.

This conjecture has been proved in the special case of the hermitian one-matrix model with a polynomial potential [4,5] and for a two-matrix model [6]. It has also been noted that the connected correlation functions of more than two eigenvalues should exhibit a stronger universality than the density itself.

The analysis of [4,6] was based on the method of orthogonal polynomials. The correlation functions were expressed in terms of kernels, depending on two variables, which are given as sums of polynomials. Those results are exact and have been known for a long time. The problem was to find an asymptotic expansion in the large N limit (N is the size of the matrices); indeed, those kernels involve sums of polynomials of degree running from 0 to N-1. In the one-matrix model case [4], the Darboux-Christoffel theorem allows one to rewrite the kernel with only two polynomials, of degree N and N-1. Asymptotic expressions of the orthogonal polynomials are then used to evaluate the universal correlation functions in the short-range regime.

In [6] it was claimed that this analysis could probably be extended to a more general case: the chain of random hermitian matrices M_1, ..., M_p, where each matrix M_l is coupled linearly to the following one, M_{l+1}.

In particular, when the number of matrices of the chain becomes infinite and the couplings are chosen appropriately, this model can be viewed as a time-dependent random matrix, the coupling between neighbouring matrices of the chain being then a kinetic term of the form tr (dM/dt)^2.

Here, we will generalize the analysis of [4,6] to the chain of matrices. The paper is organized as follows:

The first section concerns the discrete chain, and the second section the continuous-time limit. In the first section, we first present the matrix model and recall the orthogonal polynomial method; then we relate the correlation functions to the orthogonal polynomials via the kernels, and generalize the Darboux-Christoffel theorem in order to rewrite those kernels as a sum of a finite number of terms. A WKB approximation of the orthogonal polynomials allows one to find asymptotic expressions of the kernels, and thus to find the correlation functions in the large N limit. We then conclude by examining the universal properties of those correlations.

1. The Chain of matrices

Let us first present the model and introduce notations consistent with those of [6].

Consider a linear chain of p random hermitian N x N matrices M_1, ..., M_p, with the probability law:

where the V_l are polynomial potentials, c is the coupling constant between neighbouring matrices, and Z is the partition function. (In the next section, we will consider the continuum limit of this model: the index l will become a continuous variable, the time t, and with c scaled appropriately, the linear coupling term will become a kinetic term.)
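Explicitly, a chain weight of this type takes the standard form sketched below (our conventions; the scaling in N and the sign of the coupling term may differ from the original):

```latex
P(M_1,\dots,M_p)\;=\;\frac{1}{Z}\,
\exp\!\left(-N\,\mathrm{tr}\left[\sum_{l=1}^{p}V_l(M_l)
\;-\;c\sum_{l=1}^{p-1}M_l\,M_{l+1}\right]\right)
```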

The Itzykson-Zuber formula [7] allows one to integrate out the angular variables (the unitary group), and leaves us with the joint probability for the eigenvalues (let us denote by λ_i^(l) the i-th eigenvalue of the matrix M_l):

where the Δ are Vandermonde determinants:

We would now like to compute the marginal probabilities of some subset S of these eigenvalues. We thus have to integrate (1.2) over all the eigenvalues which do not belong to S. For instance, the density of the eigenvalues of M_1 is:

the correlation function of two eigenvalues of M_1 is:

and the correlation function of two eigenvalues of two different matrices M_l and M_k is:

As in the one-matrix case [6,4], all these densities and correlation functions can be calculated by the orthogonal polynomial method [8]; let us recall this method [9].

1.1. Orthogonal polynomials

Consider two families of polynomials P_n and Q_n, of degree n, with the same leading coefficient, and which obey the orthogonality relation:

we define the wave functions ψ_n and φ_n by:

(note that the normalizations differ from [6]). With the help of the orthogonality relation (1.3), we can define two families of Hilbert spaces, and orthogonal functions in each of them:

We shall denote them with the convenient Dirac notation:

In each space, we have the orthogonality relation:

In each of these spaces, we can define the usual operators (acting on the right hand side: the ket):

λ, the operator which multiplies by λ.

From now on, we will drop the space index for the bras and kets.
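As a concrete, minimal illustration of this construction (our own toy choice, not the paper's chain: a single Gaussian weight exp(-V(x)) with V(x) = x^2, for which the two families coincide), one can build the orthogonal polynomials by Gram-Schmidt on monomials:

```python
import numpy as np

# Integrals against the weight exp(-x^2) are done with Gauss-Hermite
# quadrature, exact for the polynomial degrees used here.
nodes, weights = np.polynomial.hermite.hermgauss(60)

def inner(p, q):
    """<p, q> = int p(x) q(x) exp(-x^2) dx, via quadrature (p, q highest-first)."""
    return float(np.sum(weights * np.polyval(p, nodes) * np.polyval(q, nodes)))

def orthogonal_polys(nmax):
    """Monic orthogonal polynomials P_0..P_nmax as highest-first coefficient arrays."""
    polys = []
    for n in range(nmax + 1):
        p = np.zeros(n + 1)
        p[0] = 1.0                      # monic x^n
        for q in polys:                 # subtract projections on lower degrees
            p = np.polysub(p, (inner(p, q) / inner(q, q)) * q)
        polys.append(p)
    return polys

P = orthogonal_polys(5)
# For this weight the monic family is the rescaled Hermite one, e.g. P_2(x) = x^2 - 1/2
```

The wave functions of the text are then these polynomials times exp(-V(x)/2), normalized.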

1.2. Equations of motion

From the above definitions we immediately obtain the equations of motion:

and with an integration by parts:

Let us now introduce more convenient notations. Since we began with polynomials, we know how multiplication by λ or derivation with respect to λ will act: multiplication by λ raises the degree by 1, and the result can be decomposed onto the basis of polynomials of degree at most n+1:

(where the first coefficient is the ratio of the leading coefficients, and the others are coefficients to be determined later).

Let us write this in operator notation. To this end we introduce the operator a which decreases the level (a kind of annihilation operator):

Although a is not invertible, we shall abusively write a^{-1}, for it will make no difference when we go to the large N limit, and it will considerably simplify the notations. (Indeed, a^{-1} is ill-defined only on the lowest state; one solution could be to define an extra state below it, provided that all the coefficients coupling to it vanish, which is true.)

We can then write:

Remember that a acts on the ket, i.e. on the polynomial, while its adjoint acts on the bra. Note also that the first term is the same for both families because we have chosen the polynomials with the same leading coefficient.

Similarly, noting that the operator

we can express the multiplication and derivation operators as power series in a:

We might as well write any of the operators with such notations:

but let us first go to the large N limit.
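The operator language above can be made concrete in the Gaussian one-matrix toy case (our assumption for illustration): for the orthonormal Hermite wave functions psi_n, multiplication by x is a finite band in the shift operator, x psi_n = sqrt((n+1)/2) psi_{n+1} + sqrt(n/2) psi_{n-1}, which a short numerical check confirms:

```python
import math
import numpy as np

def psi(n, x):
    """Orthonormal Hermite wave function psi_n(x) = H_n(x) exp(-x^2/2) / norm."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = math.sqrt(math.sqrt(math.pi) * 2.0**n * math.factorial(n))
    return np.polynomial.hermite.hermval(x, c) * np.exp(-x**2 / 2) / norm

# Three-term recurrence: multiplication by x acts as a band in the shift operator
n, x = 4, 0.7
lhs = x * psi(n, x)
rhs = math.sqrt((n + 1) / 2) * psi(n + 1, x) + math.sqrt(n / 2) * psi(n - 1, x)
# lhs and rhs agree to machine precision
```

In the general chain, the band is wider (it depends on the degrees of the potentials), but the principle is the same.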

1.3. The large N limit

In the classical limit N → ∞, all these operators become numbers. Indeed, the commutators are proportional to 1/N, which thus plays the role of ℏ ^{**}^{**}
actually, this is true only if the support of the density is connected, i.e. we assume we have a one-cut solution; for a multi-cut solution, we would need to consider the operators as matrices:
e.g. for a symmetric double well, one needs to distinguish between even and odd values of n, which introduces two sets of coefficients.
We then write:

The bounds on the indices are easily derived from the equations of motion and the boundary conditions. We also consider the limit where n is large and close to N, so that the coefficients no longer depend on n; they are just numbers.

In addition, there exists a remarkable relation (the proof from the canonical commutation relations is not difficult but of no interest for what follows):

Let us rewrite in the classical limit the equations of motion (1.9) and the boundary conditions (1.11), (1.12) previously written for operators. We have the following system of equations:

with the boundary conditions:

One can verify that we have exactly as many equations as unknowns. If we were able to solve this system of algebraic equations and determine all the coefficients, we could define functions of an auxiliary variable z. We will see below the important role they play.

1.4. WKB approximation

One can find asymptotic expressions of the wave functions in the limit where n is large and close to N (by a simple generalization of [6], i.e. by applying a kind of saddle point method for matrix integrals to the explicit expressions given in appendix B):

We shall not prove those asymptotic expressions, but just give some intuitive explanations.

- First, observe that at leading order, all of them have the form:

which is simply the solution of the differential equation

- The next term comes from the definition of the wave functions:

- Moreover, observe that the approximations along the chain can be derived from one another by steepest descent, and the expressions for one family from those of the other by a simple exchange of variables.

- Finally, the normalization constants are just what is needed to satisfy the normalization condition

Remark that all this is nothing but the WKB approximation.

Remember that, in quantum mechanics, the wave function of a particle outside a potential well decreases exponentially, while inside the well it is a stationary wave, i.e. a superposition of two counter-propagating waves. This is also what we have here:

- the sum means that you have to consider all the solutions of the saddle point equation which have this property. When λ belongs to the support of the density of eigenvalues of the matrix, the equation has no real solution; it has only pairs of complex conjugate solutions, which give the stationary wave. The sum of the two complex solutions gives rise to a real expression for the wave function, involving cosine and sine functions instead of exponentials (cf. [4,6]). When λ is outside the support, you have to keep only the solution which decreases exponentially at infinity.

From now on, we will consider only the first case, i.e. λ inside the support.
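This qualitative picture is easy to verify numerically in the Hermite (Gaussian one-matrix) toy case, which we assume here purely for illustration: psi_N oscillates inside the support of the eigenvalue density, here [-sqrt(2N), sqrt(2N)], and is exponentially small outside it.

```python
import math
import numpy as np

def psi(n, x):
    """Orthonormal Hermite wave function psi_n(x) = H_n(x) exp(-x^2/2) / norm."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = math.sqrt(math.sqrt(math.pi) * 2.0**n * math.factorial(n))
    return np.polynomial.hermite.hermval(x, c) * np.exp(-x**2 / 2) / norm

N = 20
edge = math.sqrt(2 * N)                    # edge of the support
xs = np.linspace(-0.8 * edge, 0.8 * edge, 400)
inside = psi(N, xs)                        # stationary wave: many sign changes
outside = psi(N, 1.5 * edge)               # exponential decay: tiny amplitude
sign_changes = int(np.count_nonzero(np.diff(np.sign(inside))))
```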

1.5. Kernels

Remember that we have introduced the orthogonal polynomials in order to integrate the joint density (1.2) over a subset of the variables. To this end, let us as usual [9] rewrite the Vandermonde determinants:

Since linear combinations of columns preserve the determinant, we can rewrite:

The prefactor is a normalization which comes from the fact that the polynomials are not monic. Any partial integration of (1.2) can thus be written as an integral over the wave functions. Since they are orthogonal, the integration is easily performed, and the final result can be written in terms of kernels defined by:

and

In the case discussed in [6] there were only four kernels; indeed, the remaining factors, which were just numbers, were absorbed into the normalizations. But in the general case, the kernels contain integrations and cannot be absorbed. Note that these kernels are the propagators from one matrix of the chain to another:

We thus have the following projection relations:

1.6. Correlation functions

In terms of these kernels, the joint density (1.2) of all the eigenvalues of all the matrices can be rewritten:

To obtain the densities and correlation functions of some set of eigenvalues, we have to integrate partially with respect to the other eigenvalues, and this can be done [8] with the help of the projection rules. The general result is given in appendix A. Here, we will only consider the one- and two-point functions.

The density of eigenvalues (the one-point function) of the matrix M_l is:

and the two-point connected correlation function of one eigenvalue of the matrix M_l and one eigenvalue of the matrix M_k is:

We now have to evaluate the kernels in the large N limit. The first step will be a generalization of the Darboux-Christoffel theorem, which allows one to rewrite the kernel as a sum of a small number of terms, instead of the sum of N terms as in (1.19). The second step will be to use the WKB approximations for the wave functions. The kernels will be evaluated by steepest descent.

1.7. Generalization of the Darboux-Christoffel theorem for the kernels

As in [6] the Darboux-Christoffel theorem can be generalized. Formally, we write that

and we sum up the geometric series in (1.19):

(where the second copy of the operator acts on the second variable). Multiplying both sides of (1.25) would give on the left-hand side a differential polynomial acting on the kernel (indeed, the operator can be rewritten as a polynomial in λ and d/dλ with the help of eqs. (1.7), (1.9)), and on the right-hand side a polynomial in the two variables, i.e. a small number of wave functions of degree close to N (an explicit example is given in appendix C). However, we shall not do this, but instead use (1.25) directly in the large N limit, where the operators become numbers.
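In the classical one-matrix (Hermite) case, which we assume here as a toy illustration of the identity being generalized, the Darboux-Christoffel theorem collapses the N-term sum to two wave functions of degrees N and N-1:

```python
import math
import numpy as np

def psi(n, x):
    """Orthonormal Hermite wave function psi_n(x) = H_n(x) exp(-x^2/2) / norm."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = math.sqrt(math.sqrt(math.pi) * 2.0**n * math.factorial(n))
    return np.polynomial.hermite.hermval(x, c) * np.exp(-x**2 / 2) / norm

# Darboux-Christoffel:  sum_{n<N} psi_n(x) psi_n(y)
#   = sqrt(N/2) [psi_N(x) psi_{N-1}(y) - psi_{N-1}(x) psi_N(y)] / (x - y)
N, x, y = 8, 0.3, -1.1
kernel_sum = sum(psi(n, x) * psi(n, y) for n in range(N))
kernel_cd = math.sqrt(N / 2) * (psi(N, x) * psi(N - 1, y)
                                - psi(N - 1, x) * psi(N, y)) / (x - y)
```

The constant sqrt(N/2) is the recurrence coefficient of the orthonormal family; in the chain it is replaced by the operator coefficients of section 1.2.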

The kernels can thus be approximated by:

and using the WKB asymptotic expressions of the wave functions:

where, as usual, the sums run over the complex solutions of the saddle point equations.

One can also find an asymptotic expression for the kernel by steepest descent:

where

where the intermediate variables are determined by the saddle point equation:

and the prefactor involves the determinant of the matrix of second derivatives of the exponent with respect to the intermediate variables:

In the particular case we have:

Substituting (1.26) and (1.27) into (1.22) and (1.23), we can now evaluate the correlation functions.

1.8. Correlation functions in the short distance limit

The case k = l

Setting k = l in (1.26) gives:

where the sums run over the complex solutions of the saddle point equations. When λ is close to μ, at leading order we keep only the pairs of solutions for which the exponent is small; the kernel then reduces to:

In particular, when λ = μ we obtain the density:
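In the Hermite toy case (our assumption, not the chain itself), this diagonal kernel is easy to check: sum_{n<N} psi_n(x)^2 approaches the semicircle density sqrt(2N - x^2)/pi in the bulk, up to small oscillatory corrections.

```python
import math
import numpy as np

def psi(n, x):
    """Orthonormal Hermite wave function psi_n(x) = H_n(x) exp(-x^2/2) / norm."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = math.sqrt(math.sqrt(math.pi) * 2.0**n * math.factorial(n))
    return np.polynomial.hermite.hermval(x, c) * np.exp(-x**2 / 2) / norm

# One-point function: diagonal of the kernel vs. the semicircle law
N, x = 50, 1.0
density = sum(psi(n, x)**2 for n in range(N))
semicircle = math.sqrt(2 * N - x**2) / math.pi
# density and semicircle agree up to oscillatory finite-N corrections
```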

When λ is close to μ but different, we can compute the two-point connected correlation function:

i.e.

We recover the universal two-point correlation function in the short-distance regime.
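The universal short-distance behaviour can again be illustrated numerically in the Hermite toy case (an assumption for illustration only): for separations of the order of the mean level spacing 1/rho, the kernel approaches the sine kernel sin(pi rho (x - y)) / (pi (x - y)), with rho the local density.

```python
import math
import numpy as np

def psi(n, x):
    """Orthonormal Hermite wave function psi_n(x) = H_n(x) exp(-x^2/2) / norm."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = math.sqrt(math.sqrt(math.pi) * 2.0**n * math.factorial(n))
    return np.polynomial.hermite.hermval(x, c) * np.exp(-x**2 / 2) / norm

N, x0 = 50, 0.1
rho = math.sqrt(2 * N - x0**2) / math.pi       # local (semicircle) density
d = 0.5 / rho                                   # half a mean level spacing
kernel = sum(psi(n, x0 + d) * psi(n, x0) for n in range(N))
sine_kernel = math.sin(math.pi * rho * d) / (math.pi * d)
```

The connected two-point function is (minus) the square of this kernel, which is the universal result quoted in the text.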

The case k ≠ l

It is now meaningless to consider the limit where λ is close to μ, since they are eigenvalues of different matrices. Generically, the kernel is of order 1/N, which means that the connected correlation function is of order 1/N^2, and we can say that in the large N limit the two eigenvalues are uncorrelated.

The only limit in which the correlation may become larger than 1/N^2 is the case where the exponent becomes small. The saddle point equation defines a function of λ. The problem is that this function takes complex values in the interesting domain - for example, we see from eq. (1.28) that the saddle points take complex values (this fact has already been debated in [6]) - and we have not found any physical interpretation, except in the case of the continuous model described in the next section.

However, let us assume that this exponent is small (together with a further technical assumption). We introduce the scaling variable:

In this limit, the Taylor expansion of the term appearing in the exponential in (1.26) gives:

Therefore, we have: