Matrix Functions

Matrix functions take one or more matrices (or attributes) as arguments. Unlike most other LINGO functions, which return scalar (single-value) results, matrix functions can return multiple results, some of which may be matrices or vectors.  The matrix functions supported are:

Cholesky factorization,
Matrix determinants,
Eigenvalues and eigenvectors,
Matrix inversion,
Multiple linear regression,
Matrix multiplication,
Matrix transpose, and
QR factorization
Note: All matrix functions are only supported in calc sections of models.  At present, matrix functions are not valid inside model (optimizable) sections.
Note: All matrix functions work entirely with dense matrices.  Matrices defined on sparse sets are not supported.

 

L[, err] = @CHOLESKY( A)

The @CHOLESKY function performs a Cholesky factorization of the symmetric, positive-definite matrix A, where LL' = A, and L is a lower-triangular matrix with real and positive diagonal elements.  One can think of L as the "square root" of matrix A.  Both A and L must be square and of the same dimension.  The upper triangle of L will be filled with zeros.

The optional argument, err, is an error flag that will be 0 if the factorization was successful, otherwise it will be 1.  If err is present, any error will be ignored by LINGO and will be left for your model to handle. If err is not present, then, on any error, LINGO will display an error message and terminate the run.
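As a minimal sketch of handling the error flag yourself inside a calc section (the message text is illustrative, and L and A are assumed to be defined on the same square, dense set):

  ! Trap a factorization failure rather than halting the run;
  L, ERR = @CHOLESKY( A);
  @IFC( ERR #NE# 0:
     @STOP( ' A is not positive definite -- no Cholesky factor exists!');
  );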

In the example below, we take a small 3x3 matrix, A, compute its Cholesky factor, L, and then show that LL' equals the original matrix A.

MODEL:
! Compute the Cholesky factorization of matrix A.
! Back check by taking the Cholesky factor, L, and
! multiplying it with its transpose, L', to get
! the original matrix A;
SETS:
 S1;
 S2(S1,S1): A, A2, L, LT;
ENDSETS

DATA:
  S1 = 1..3;
  A = 10 11 12
      11 54 51
      12 51 86;
ENDDATA

CALC:
  ! Get the Cholesky factor;
  L = @CHOLESKY( A);

  ! L transpose;
  LT = @TRANSPOSE( L);

  ! Compute L * L';
  @FOR( S2( I, J):
      A2( I, J) = @SUM( S1( J2): L( I, J2) * LT( J2, J));
  );
ENDCALC

CALC:
  @SET( 'TERSEO', 2); ! Just display errors;
  @WRITE( "Original A matrix:", @NEWLINE( 1));
  @TABLE( A);
  @WRITE( @NEWLINE( 1), "Cholesky factor L:", @NEWLINE( 1));
  @TABLE( L);
  @WRITE( @NEWLINE( 1), "L transpose:", @NEWLINE( 1));
  @TABLE( LT);
  @WRITE( @NEWLINE( 1), "L * L' = A:", @NEWLINE( 1));
  @TABLE( A2);
ENDCALC
END

Model: CHOLESKY

The output from this model is reproduced below, where we can see that, as the definition requires, LL' = A:

Original A matrix:
            1         2         3
  1  10.00000  11.00000  12.00000
  2  11.00000  54.00000  51.00000
  3  12.00000  51.00000  86.00000

Cholesky factor L:
            1         2         3
  1  3.162278  0.000000  0.000000
  2  3.478505  6.473021  0.000000
  3  3.794733  5.839623  6.123627

L transpose:
            1         2         3
  1  3.162278  3.478505  3.794733
  2  0.000000  6.473021  5.839623
  3  0.000000  0.000000  6.123627

L * L' = A:
            1         2         3
  1  10.00000  11.00000  12.00000
  2  11.00000  54.00000  51.00000
  3  12.00000  51.00000  86.00000

 

A = @MTXMUL( B, C)

The @MTXMUL function multiplies matrix B by matrix C and places the result in matrix A.  The matrices must all be defined on dense sets.  The dimensions of the matrices must also agree, for example, if B is an m x n matrix, then C must be an n x p, and A must be an m x p matrix.  Furthermore, m, n and p must be greater than 1, i.e., inner and outer products are not currently supported.
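As a sketch of these dimension rules, the small model below (with illustrative set and attribute names) multiplies a 2x3 matrix B by a 3x2 matrix C to get a 2x2 result A:

MODEL:
SETS:
  R2;
  R3;
  BMAT( R2, R3): B;  ! B is 2x3;
  CMAT( R3, R2): C;  ! C is 3x2;
  AMAT( R2, R2): A;  ! A = B*C is 2x2;
ENDSETS

DATA:
  R2 = 1..2;
  R3 = 1..3;
  B = 1 2 3
      4 5 6;
  C = 1 0
      0 1
      1 1;
ENDDATA

CALC:
  ! A should work out to rows ( 4, 5) and ( 10, 11);
  A = @MTXMUL( B, C);
  @TABLE( A);
ENDCALC
END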

In the Cholesky example above, you will recall that we computed L x L', i.e., the product of matrix L with its transpose, L'.  We explicitly wrote out the formula for matrix multiplication with the following code:

  ! Compute L * L';
  @FOR( S2( I, J):
      A2( I, J) = @SUM( S1( J2): L( I, J2) * LT( J2, J));
  );

A simpler and more efficient way to have done this would have been to use the @MTXMUL function as follows:

  ! Compute L * L';
  A2 = @MTXMUL( L, LT);

 

@DETERMINANT( A)

The @DETERMINANT function returns the determinant value for matrix A.

In the example below, we make use of Cramer's Rule for finding the solution to a system of linear equations of the form Ax=b.  According to Cramer's Rule, the solution for x( i) may be calculated by computing the determinant of the A matrix with column i replaced by the right-hand side values b, and then dividing that determinant by the determinant of the original A matrix; in other words, x( i) = det( A_i) / det( A), where A_i is A with column i replaced by b.

MODEL:

! Use Cramer's Rule to find the solution
 to a system of linear equations, Ax=b;

SETS:
  DIM: X, B, B2;
  DXD( DIM, DIM): A, A2;
ENDSETS

DATA:
  DIM = 1..3;
  B = 12 0 -3;
  A =
     1  -2   5
     4   1  -2
    -2   1  -1
  ;
ENDDATA

CALC:
  ! Determinant of the original matrix;
  D = @DETERMINANT( A);

  ! Determine if a unique solution exists;
  @IFC( D #EQ# 0:
     @STOP( '  No unique solution exists!!!');
  @ELSE
     @WRITE( '  A unique solution exists...');
  );
  @WRITE( @NEWLINE( 1));

  ! Copy matrix;
  @FOR( DXD( I, J): A2( I, J) = A( I, J));

  @FOR( DIM( J):
     ! Replace column J with the B values;
     @FOR( DIM( I):
        A2( I, J) = B( I);
     );

     ! Calculate solution for X( J);
     X( J) = @DETERMINANT( A2) / D;

     ! Restore column J in the original matrix;
     @FOR( DIM( I):
        A2( I, J) = A( I, J);
     );
  );

  ! Back check the solution;
  @FOR( DIM( I):
     B2( I) = @SUM( DIM( J): A( I, J) * X( J));
     @IFC( @ABS( B2( I) - B( I)) #GT# 1.E-6:
        @STOP( 'Solution fails!!!!');
     );
  );
  @WRITE( '  Solution verified...', @NEWLINE( 1),
   '  Solution = ');

  @FOR( DIM( I): @WRITE( X( I), '   '));

  @WRITE( @NEWLINE( 2));
ENDCALC

END

Model: CRAMER

We first checked to see if a unique solution to the system exists by computing the determinant of the original matrix and checking that it was not 0.  We then used Cramer's Rule to compute the solution vector, x. Finally, we took the x values and verified that they solve the equations.  The output from the model is:

A unique solution exists...
Solution verified...
Solution = 1   2   3
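
Cramer's Rule is easy to verify by hand on this small system: det( A) = 15, while replacing columns 1, 2 and 3 of A with the right-hand side b gives determinants of 15, 30 and 45, so x = ( 15/15, 30/15, 45/15) = ( 1, 2, 3), matching the output above.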

 
LAMBDAR, VR, LAMBDAI, VI, ERR = @EIGEN( A)

The @EIGEN function returns eigenvectors and eigenvalues for the matrix A, where:

A                an nxn matrix,
LAMBDAR        a vector of length n for returning the real parts of the eigenvalues,
VR                an nxn matrix for returning the real parts of the eigenvectors (one vector per column),
LAMBDAI        a vector of length n for returning the imaginary parts of the eigenvalues,
VI                an nxn matrix for returning the imaginary parts of the eigenvectors, and
err                an error flag that will be 1 if a numeric problem occurs, otherwise 0.

As long as at least one left-hand side argument is present, all other arguments may be omitted.  If err is present, then LINGO will not halt the run if a numeric error occurs; it will be up to your model to handle the error.  If err is not present and a numeric error occurs, then LINGO will halt model execution.
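For example, if only the real eigenvalues were of interest, or only the real eigenvalues plus the error flag, the calls could presumably be written as in the sketch below, leaving the unused output positions empty (this follows the same positional convention used in the EIGENEX1 model that follows):

  ! Real eigenvalues only;
  LAMBDAR = @EIGEN( A);

  ! Real eigenvalues and the error flag, skipping VR, LAMBDAI and VI;
  LAMBDAR,,,, ERR = @EIGEN( A);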

In the next example, we compute the eigenvalues and eigenvectors for a 3x3 symmetric matrix.

MODEL:

SETS:
  DIM: LAMBDA;
  DXD( DIM, DIM): A, V, AV, LV;
  DXDL( DXD) | &1 #GT# &2: PROD;
ENDSETS

DATA:
  TOL = 1.E-6;
  DIM = 1..3;
  A =  3 4 1
       4 0 9
       1 9 5;
ENDDATA

CALC:
  ! Compute the eigenvalues and eigen vectors;
  ! Note: The matrix is symmetric, so the vectors
  ! and values will not have imaginary parts, so
  ! we can omit the imaginary arguments.;
  LAMBDA, V,,, ERR = @EIGEN( A);

  ! Numeric error?;
  @IFC( ERR #NE# 0: @STOP( 'Numeric error'));

  ! Display them with the @TABLE function.;
  @WRITE( @NEWLINE( 1), ' Eigenvalues:', @NEWLINE( 2));
  @TABLE( LAMBDA);

  @WRITE( @NEWLINE( 1), ' Eigenvectors:', @NEWLINE( 1));
  @TABLE( V);

  ! Check that A * v = Lambda * v holds.;
  NZ = 0;
  @FOR( DXD( I, J):
     LV( I, J) = LAMBDA( J) * V( I, J);
     AV( I, J) = @SUM( DIM( K): A( I, K) * V( K, J));
     @IFC( @ABS( LV( I, J) - AV( I, J)) #GT# TOL: NZ = NZ + 1);
  );
  @WRITE( @NEWLINE( 1));
  @IFC( NZ #GT# 0:
     @WRITE( ' Lambda * v = A * v is not verified.');
  @ELSE
     @WRITE( ' Lambda * v = A * v is verified.');
  );

  ! Symmetric matrices have orthogonal eigenvectors.
  ! Verify this by computing the inner products.;
  NZ = 0;
  @FOR( DXDL( I, J):
     PROD( I, J) = @sum( DIM( K): V( K, I) * V( K, J));
     @IFC( @ABS( PROD( I, J)) #GT# TOL: NZ = NZ + 1);
  );
  @WRITE( @NEWLINE( 1));
  @IFC( NZ #GT# 0:
     @WRITE( ' Eigenvectors are NOT orthogonal.', @NEWLINE( 2));
  @ELSE
     @WRITE( ' Eigenvectors are orthogonal.', @NEWLINE( 2));
  );
ENDCALC
END

Model: EIGENEX1

Symmetric matrices have the property that their eigenvalues and eigenvectors are all real valued, allowing us to ignore the imaginary components.  Looking at the call to @EIGEN:

LAMBDA, V,,, ERR = @EIGEN( A);

we see that two left-hand side arguments are omitted, corresponding to the imaginary parts.  Note that we could have passed these two arguments; however, they would simply have been filled with zeros.

By the definition of eigenvalues and eigenvectors, it's true that Av = Lambda * v, where A is the original nxn matrix, v is an nx1 eigenvector of A, and Lambda is the eigenvalue corresponding to v. The model verifies this relationship and displays the message:

Lambda * v = A * v is verified.

Another nice property of symmetric matrices is that the eigenvectors are all orthogonal.  The model verifies this property by calculating all the inner products between the eigenvectors and comparing these products to 0.

The output from the model follows:

Eigenvalues:

 1   12.92039
 2   2.587500
 3  -7.507889

Eigenvectors:
                 1               2               3
 1      -0.3179176      -0.9145159       0.2501781
 2      -0.6062189  -0.6813624E-02      -0.7952686
 3      -0.7289904       0.4044926       0.5522307

Lambda * v = A * v is verified.
Eigenvectors are orthogonal.

 

AINV[, err] = @INVERSE( A)

The @INVERSE function returns the inverse of matrix A.  Both AINV and A must be square, nxn matrices.

The optional argument, err, is an error flag that will be 0 if the invert operation was successful, otherwise it will be 1.  A situation where err would be nonzero is when A is a singular matrix.  If err is present, any error will be ignored by LINGO and will be left for your model to handle. If err is not present, then, on any error, LINGO will display an error message and terminate the run.
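A minimal sketch of trapping a singular matrix yourself might look like the following (the message text is illustrative, and AINV and A are assumed to be defined on the same square, dense set in a calc section):

  AINV, ERR = @INVERSE( A);
  @IFC( ERR #NE# 0:
     @STOP( ' A is singular -- no inverse exists!');
  );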

The example below may be familiar from the @DETERMINANT example above, where we solve a 3x3 system of linear equations. In this example, however, we will not use Cramer's Rule to solve the system.  Instead, we will compute the inverse of the A matrix, and then multiply the inverse by the right-hand side vector b to get the solution vector x. We also verify that a unique solution exists by computing the determinant of A, as well as verify that our solution does indeed solve the original set of equations.

MODEL:

SETS:
  DIM: X, B, B2;
  DXD( DIM, DIM): A, AINV;
ENDSETS

DATA:
  DIM = 1..3;
  B = 12 0 -3;
  A =
     1  -2   5
     4   1  -2
    -2   1  -1
  ;
ENDDATA

CALC:
  ! Determinant of the original matrix;
  D = @DETERMINANT( A);

  ! Determine if a unique solution exists;
  @IFC( D #EQ# 0:
     @STOP( '  No unique solution exists!!!');
  @ELSE
     @WRITE( '  A unique solution exists...');
  );
  @WRITE( @NEWLINE( 1));

  ! Get the matrix inverse;
  AINV = @INVERSE( A);

  ! Use the inverse to compute the solution vector: x = AINV * b;
  @FOR( DIM( I): X( I) = @SUM( DIM( J): AINV( I, J) * B( J)));

  ! Back check the solution;
  @FOR( DIM( I):
     B2( I) = @SUM( DIM( J): A( I, J) * X( J));
     @IFC( @ABS( B2( I) - B( I)) #GT# 1.E-6:
        @STOP( 'Solution fails!!!!');
     );
  );
  @WRITE( '  Solution verified...', @NEWLINE( 1),
   '  Solution = ');

  @FOR( DIM( I): @WRITE( @FORMAT( X( I), '10.5g')));

  @WRITE( @NEWLINE( 2));
ENDCALC

END

Model: INVERSE

The output from the model follows:

A unique solution exists...
Solution verified...
Solution =          1         2         3

Again, as with the @DETERMINANT example, the solution is [1,2,3].

 

B, b0, RES, rsq, f, p, var = @REGRESS( Y, X)

The @REGRESS function is used to perform multiple linear regression (MLR), a technique that models the linear relationship, Y = b0 + B*X, between one dependent variable and one or more independent variables.  MLR is frequently used in forecasting, e.g., by economists in macroeconomics when modeling the economy.

Using the syntax above, we have the following definitions:

B        The vector of coefficient terms for the regression.  If there are p independent variables, then B is a vector of length p.
b0        The constant term of the regression.  If a constant term is not desired, then you can omit this argument and LINGO will force the constant term to 0.
RES        The residuals, or error terms. These are the differences between the predicted and actual observed values for the dependent variable.
rsq        The R-squared statistic, a measure of the strength of the relationship between the model and the dependent variable; it is the fraction of the original variance explained by the forecast formula versus using just b0 alone.
f        The F-value, a measure of the overall significance of the model.
p        The p-value for the regression, a measure of the significance of the F-test.
var        Estimate of variance of the residual terms.
Y        The observed values for the dependent variable.  If there are n observations, then Y should be a vector of length n.
X        The independent variable values.  If there are n observations and p independent variables, then X should be an nxp matrix.

Note that any of the left-hand side arguments may be omitted if they are not of interest.  The only requirement is that at least one left-hand side argument must be present.
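For example, if only the coefficients were of interest, the call might be sketched as follows (assuming Y and X have already been populated as described below):

  ! Coefficients and the constant term only;
  B, B0 = @REGRESS( Y, X);

  ! Coefficients only, which also forces the constant term to 0;
  B = @REGRESS( Y, X);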

The goal of MLR is to fit a linear function to the observed data, X, for predicting the dependent observations, Y. The form of the function, for each observation j, is:

 Y( j) = b0 + ∑i B( i) * X( j, i) + RES( j)

The fit is determined by the set of coefficients that minimizes the sum of the squared residual values.  For more information, see: https://en.wikipedia.org/wiki/Line.

In the LINGO sample models folder, there is a regression example called REGRLONG, shown below. This model makes use of the Longley dataset:

http://www.itl.nist.gov/div898/strd/lls/data/LINKS/DATA/Longley.dat

This is a series of seven macroeconomic statistics, with one being total employment, the dependent variable.  The Longley dataset is commonly used for testing statistical software, and the link above contains the certified results for regressions based on the data.

MODEL:

! Multiple linear regression;
! The output is the set of regression coefficients, B,
for the model:
   Y(i) = B(0) + B(1)*X(i,1) + B(2)*X(i,2)+... + ERROR(i)

We use the Longley macroeconomic data set for:
http://www.itl.nist.gov/div898/strd/lls/data/LINKS/DATA/Longley.dat

Certified Regression Statistics for Longley data:
                                        Standard Deviation
 Parameter         Estimate                      of Estimate
  B(0)          -3482258.63459582          890420.383607373
  B(1)           15.0618722713733          84.9149257747669
  B(2)          -0.358191792925910E-01     0.334910077722432E-01
  B(3)          -2.02022980381683          0.488399681651699
  B(4)          -1.03322686717359          0.214274163161675
  B(5)          -0.511041056535807E-01     0.226073200069370
  B(6)           1829.15146461355          455.478499142212

Residual
Standard Deviation   304.854073561965

R-Squared            0.995479004577296

Certified Analysis of Variance Table:

Source of   Degrees of       Sums of              Mean
Variation    Freedom          Squares             Squares
Regression      6      184172401.944494    30695400.3240823
Residual        9      836424.055505915    92936.0061673238

  F Statistic
330.285339234588
;

SETS:
 OBS : Y, RES;
 VARS;
 DEPVAR( VARS);    ! The dependent variable;
 EXPVAR( VARS): B; ! The explanatory variables;
 OXV( OBS, VARS): DATATAB; ! The data table;
 OXE( OBS, EXPVAR): X;
ENDSETS

DATA:

! Names of the variables;
 VARS = EMPLD PRICE GNP__ JOBLS MILIT POPLN  YEAR_;

! Dependent variable. Must be exactly 1;
 DEPVAR = EMPLD;

! Explanatory variables. Should not include
  dependent variable;
 EXPVAR = PRICE GNP__ JOBLS MILIT POPLN YEAR_;

! The dataset;
 DATATAB =
   60323   83  234289  2356  1590  107608  1947
   61122  88.5 259426  2325  1456  108632  1948
   60171  88.2 258054  3682  1616  109773  1949
   61187  89.5 284599  3351  1650  110929  1950
   63221  96.2 328975  2099  3099  112075  1951
   63639  98.1 346999  1932  3594  113270  1952
   64989  99   365385  1870  3547  115094  1953
   63761 100   363112  3578  3350  116219  1954
   66019 101.2 397469  2904  3048  117388  1955
   67857 104.6 419180  2822  2857  118734  1956
   68169 108.4 442769  2936  2798  120445  1957
   66513 110.8 444546  4681  2637  121950  1958
   68655 112.6 482704  3813  2552  123366  1959
   69564 114.2 502601  3931  2514  125368  1960
   69331 115.7 518173  4806  2572  127852  1961
   70551 116.9 554894  4007  2827  130081  1962;

ENDDATA

CALC:
   !Suppress some of Lingo's output;
   @SET( 'TERSEO', 2);

   !Set up data for the @REGRESS function
    by moving the dependent variable into
    Y and the independents into X;
   @FOR( DEPVAR( K):
      @FOR( OBS( I):
         Y( I) = DATATAB( I, K);
         @FOR( OXE( I, J): X( I, J) = DATATAB( I, J))
      );
   );

   ! Do the regression;
   B, B0, RES, RSQUARE, F, P, VAR = @REGRESS( Y, X);

   ! Print a report;
   @WRITE( '   Explained error/R-Square: ',
    @FORMAT( RSQUARE, '16.8g'), @NEWLINE(1));
   @WRITE( '                F statistic: ',
    @FORMAT( F, '16.8g'), @NEWLINE(1));
   @WRITE( '                    p value: ',
    @FORMAT( P, '16.8g'), @NEWLINE(1));
   @WRITE( '          Residual variance: ',
    @FORMAT( VAR, '16.8g'), @NEWLINE(1));
   @WRITE( '         Residual std error: ',
    @FORMAT( VAR ^ .5, '16.8g'), @NEWLINE(2));
   @WRITE( '   B( 0):     ', @FORMAT( B0, '16.8g'),
    @NEWLINE( 1));
   @FOR( EXPVAR( I):
    @WRITE( '   B( ', EXPVAR( I), '): ',
    @FORMAT( B( I), '16.8g'), @NEWLINE( 1));
   );
ENDCALC
END

Model: REGRLONG

The interesting parts of this model are the following two expressions:

   !Set up data for the @REGRESS function
    by moving the dependent variable into
    Y and the independents into X;
   @FOR( DEPVAR( K):
      @FOR( OBS( I):
         Y( I) = DATATAB( I, K);
         @FOR( OXE( I, J): X( I, J) = DATATAB( I, J))
      );
   );

 

   ! Do the regression;
   B, B0, RES, RSQUARE, F, P, VAR = @REGRESS( Y, X);

Note that the data is provided as a single table, containing both the dependent and independent observations.  The first expression partitions the raw data into a vector Y of dependent employment observations and a 16x6 X matrix of independent observations.  At this point, the data is in the correct format to submit to @REGRESS, and we perform the regression with the statement:

    ! Do the regression;
    B, B0, RES, RSQUARE, F, P, VAR = @REGRESS( Y, X);

Results from the regression are then displayed:

  Explained error/R-Square:         0.995479
               F statistic:        330.28534
                   p value:    4.984031e-010
         Residual variance:        92936.006
        Residual std error:        304.85407

  B( 0):           -3482258.6
  B( PRICE):        15.061872
  B( GNP__):     -0.035819179
  B( JOBLS):       -2.0202298
  B( MILIT):       -1.0332269
  B( POPLN):     -0.051104106
  B( YEAR_):        1829.1515

 

T = @TRANSPOSE( A)

The @TRANSPOSE function returns the transpose of matrix A, where T(i,j) = A(j,i) for every (i,j).  Matrix A does not have to be square, but if A is an m x n matrix, then T must be of dimension nxm.  For an example of @TRANSPOSE, refer to the section above on @CHOLESKY, where we use @TRANSPOSE to get the transpose of the Cholesky factor.  This transpose is then multiplied by the original Cholesky factor to get the original matrix back.
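A minimal sketch with a non-square matrix (the set and attribute names are illustrative):

MODEL:
SETS:
  ROWS;
  COLS;
  RXC( ROWS, COLS): A;  ! A is 2x3;
  CXR( COLS, ROWS): T;  ! T must then be 3x2;
ENDSETS

DATA:
  ROWS = 1..2;
  COLS = 1..3;
  A = 1 2 3
      4 5 6;
ENDDATA

CALC:
  ! T( i, j) = A( j, i);
  T = @TRANSPOSE( A);
  @TABLE( T);
ENDCALC
END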

Q,R[, err] = @QRFACTOR( A)

The @QRFACTOR function returns a QR factorization of matrix A, where A = QR, with R being upper triangular and Q being orthogonal (i.e., Q'Q is the identity matrix).  The matrices must all be defined on dense sets, with A and Q being m x n matrices and R an n x n matrix.

The optional argument, err, is an error flag that will be 0 if the factorization operation was successful, otherwise it will be 1.  If err is present, any error will be ignored by LINGO and will be left for your model to handle. If err is not present, then, on any error, LINGO will display an error message and terminate the run.

The sample model QRFACTOR contained in the LINGO Samples folder illustrates the use of the @QRFACTOR function.
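That sample model is not reproduced here, but a minimal sketch along the lines of the earlier examples might look like the following (the 4x3 data matrix and all set and attribute names are illustrative); it factors A and then back checks that Q'Q is the identity matrix and that Q*R reproduces A:

MODEL:
SETS:
  M;
  N;
  MXN( M, N): A, A2, Q;
  NXN( N, N): R, QTQ;
  NXM( N, M): QT;
ENDSETS

DATA:
  M = 1..4;
  N = 1..3;
  A =  1  2  3
       4  5  6
       7  8 10
       1  0  1;
ENDDATA

CALC:
  ! Factor A into Q (4x3) and R (3x3, upper triangular);
  Q, R, ERR = @QRFACTOR( A);
  @IFC( ERR #NE# 0: @STOP( ' Factorization failed!'));

  ! Back check: Q'Q should be the identity matrix;
  QT = @TRANSPOSE( Q);
  QTQ = @MTXMUL( QT, Q);
  @TABLE( QTQ);

  ! Back check: Q * R should reproduce A;
  A2 = @MTXMUL( Q, R);
  @TABLE( A2);
ENDCALC
END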