Order of eigenvalues when using different methods

Data Science Asked by sonarclick on March 14, 2021

I’m doing PCA on a covariance matrix in which each row and column represents a tenor of the yield curve. I have coded the Jacobi rotation method, and I also have a QR algorithm based on numpy.linalg.qr, so that I can calculate the eigenvalues and eigenvectors both ways. When I sort the eigenvalues in descending order they match; before sorting they don’t. I know that the order of the eigenvalues usually does not matter, but since in my case they correspond to tenors, I would like to know which ones contribute most to the variance.
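(For reference, a common way to impose a consistent order is to sort the eigenvalues in descending order with np.argsort and permute the eigenvector columns with the same index array, so each column stays paired with its eigenvalue. A minimal sketch, using np.linalg.eigh on the example matrix that appears later in this question; the variable names are just illustrative:)

```python
import numpy as np

# Example symmetric matrix (the same one used further down).
A = np.array([[2.0, 3.0, 1.0], [3.0, 1.0, 5.0], [1.0, 5.0, 3.0]])

# np.linalg.eigh returns eigenvalues in ascending order.
w, v = np.linalg.eigh(A)

# Sort eigenvalues in descending order, and permute the eigenvector
# columns with the same index array so that column v_sorted[:, i]
# still corresponds to eigenvalue w_sorted[i].
order = np.argsort(w)[::-1]
w_sorted = w[order]
v_sorted = v[:, order]
```

Applying the same permutation to both outputs keeps the eigenpairs aligned regardless of which decomposition produced them.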

Here’s the Python code for the Jacobi method:

import numpy as np

def eigen_decomp_Jacobi(A, tol=1.0e-9):
        
        a = A.copy()
        
        def maxElem(a):
            '''
                Find largest off-diag. element a[k,l]
            '''
            n = len(a)
            aMax = 0.0
            for i in range(n - 1):
                for j in range(i + 1, n):
                    if abs(a[i, j]) >= aMax:
                        aMax = abs(a[i, j])
                        k = i
                        l = j
            return aMax, k, l
    
        def rotate(a, p, k, l):
            '''
                Rotate to make a[k,l] = 0
            '''
            n = len(a)
            aDiff = a[l, l] - a[k, k]
            
            if abs(a[k, l]) < abs(aDiff) * 1.0e-36:
                t = a[k, l] / aDiff
            else:
                phi = aDiff / (2.0 * a[k, l])
                t = 1.0 / (abs(phi) + np.sqrt(phi ** 2 + 1.0))
                if phi < 0.0:
                    t = -t
            
            c = 1.0 / np.sqrt(t ** 2 + 1.0)
            s = t * c
            tau = s / (1.0 + c)
            temp = a[k, l]
            a[k, l] = 0.0
            a[k, k] = a[k, k] - t * temp
            a[l, l] = a[l, l] + t * temp
            
            for i in range(k): # Case of i < k
                temp = a[i, k]
                a[i, k] = temp - s * (a[i, l] + tau * temp)
                a[i, l] = a[i, l] + s * (temp - tau * a[i, l])
            
            for i in range(k + 1, l): # Case of k < i < l
                temp = a[k, i]
                a[k, i] = temp - s * (a[i, l] + tau * a[k, i])
                a[i, l] = a[i, l] + s * (temp - tau * a[i, l])
            
            for i in range(l + 1, n): # Case of i > l
                temp = a[k, i]
                a[k, i] = temp - s * (a[l, i] + tau * temp)
                a[l, i] = a[l, i] + s * (temp - tau * a[l, i])
            
            for i in range(n): # Update transformation matrix
                temp = p[i, k]
                p[i, k] = temp - s * (p[i, l] + tau * p[i, k])
                p[i, l] = p[i, l] + s * (temp - tau * p[i, l])
    
        n = len(a)
        maxRot = 5 * (n ** 2) # Set rotation number limit
        p = np.identity(n) * 1.0 # Initialize transformation matrix
        
        for i in range(maxRot): # Jacobi rotation loop
            aMax, k, l = maxElem(a)
            
            if aMax < tol:
                print("Number of rotations: " + str(i)) # i rotations done before this check
                return np.diagonal(a), p
            
            rotate(a, p, k, l)
            
        raise RuntimeError('Jacobi method did not converge')

and here’s the QR algorithm:

def convergence(arr, tolerance):
    for i in range(len(arr)):
        for j in range(len(arr[i])):
            if i == j:
                continue
            else:
                if abs(arr[i][j]) > tolerance:
                    return False
    return True
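
(As an aside, the nested-loop check can be vectorized with NumPy; convergence_vec below is a hypothetical drop-in replacement with the same behaviour:)

```python
import numpy as np

def convergence_vec(arr, tolerance):
    # Zero out the diagonal, then test the largest remaining
    # off-diagonal magnitude against the tolerance.
    off_diag = np.asarray(arr) - np.diag(np.diag(arr))
    return bool(np.abs(off_diag).max() <= tolerance)
```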

def eigen_decomp_np(A, tolerance=1e-7, max_iter=10000):
    A_k = A.copy()
    n = A.shape[0]
    Q_k = np.eye(n)

    for i in range(1, max_iter + 1): # cap iterations to avoid an infinite loop
        Q, R = np.linalg.qr(A_k)
        Q_k = Q_k @ Q # accumulate the orthogonal transformations
        A_k = R @ Q
        if convergence(A_k, tolerance):
            print("Number of iterations: " + str(i))
            break
    else:
        print('QR algorithm did not converge')

    eigenvalues = np.diag(A_k)
    eigenvectors = Q_k
    return eigenvalues, eigenvectors

Using the following symmetric square matrix (this is just an example and not my real matrix) and code:

A = np.array([[2.0,3.0,1.0],[3.0,1.0,5.0],[1.0,5.0,3.0]])
w1, v1 = eigen_decomp_np(A)
w, v = eigen_decomp_Jacobi(A)
w1, v1

I get the following results for the eigenvalues:

QR: [ 8.28493698, -3.66728043, 1.38234345]

Jacobi: [ 1.38234345, -3.66728043, 8.28493698]

The eigenvalues are the same but in a different order. The Jacobi method has the correct order, the one that matches the tenors. So if the three columns here corresponded to three tenors, say 1 year, 5 years and 10 years, then most of the variance would be explained by the 10-year yield according to the Jacobi implementation, but by the 1-year tenor according to the QR algorithm.
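For what it's worth, the QR order seems to be a property of the unshifted QR iteration, whose diagonal tends to converge with the eigenvalues sorted by decreasing absolute value. A minimal self-contained sketch on the same example matrix (the fixed count of 200 iterations is an arbitrary choice):

```python
import numpy as np

A = np.array([[2.0, 3.0, 1.0], [3.0, 1.0, 5.0], [1.0, 5.0, 3.0]])

# Plain (unshifted) QR iteration: for a symmetric matrix, A_k
# converges to a diagonal matrix, and the diagonal entries tend
# to appear in order of decreasing absolute value.
A_k = A.copy()
for _ in range(200):  # fixed iteration count, arbitrary choice
    Q, R = np.linalg.qr(A_k)
    A_k = R @ Q

eig_qr = np.diag(A_k)  # approx. [8.2849, -3.6673, 1.3823]
```

In this example, sorting the Jacobi eigenvalues by descending absolute value reproduces the QR order.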

Is there a way to make the QR algorithm produce the same order as the Jacobi implementation, i.e. an order that matches the tenors?
