8 Most Essential SciPy Functions for Linear Algebra

A brief guide for Python developers

A guitar against a light reflection giving off linear lines (photo by Dima Dimax from Pexels)

Leaving .NET development is not something I have ever regretted, because C++ is not getting any easier.

I tiptoed into Python in December 2012 while developing .NET frameworks in C# and C++. As my linear algebra tasks grew more challenging, I went hunting for any Python software or open source library that could handle them. Enter SciPy.

If you want to implement linear algebra operations in Python, consider starting with SciPy (pronounced "sigh pie"), short for Scientific Python.

It is a Python-based ecosystem of open source software for mathematics, science, and engineering [1], built on top of NumPy. While it is well suited to linear algebra tasks, that is not all it is capable of.

I have written extensively about linear algebra in the context of artificial intelligence across natural language processing, machine learning, and deep learning on Medium.

In this article, I will focus on the functions I consider essential, illustrating how to use the ones I believe are most often applied to linear algebra tasks with SciPy.

1. scipy.linalg.inv()

I apply this function when I need to analyze a system of linear equations in multiple variables and compute the inverse of its coefficient matrix.

From this point forward, I will include the output of the code in each gist, noted as OUTPUT, and I will import numpy as np only once.

Code by the author
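The author's gist is not reproduced here; a minimal sketch of inverting a matrix with scipy.linalg.inv(), using illustrative values, might look like this:

```python
import numpy as np
from scipy import linalg

# A small invertible matrix (illustrative values)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# check_finite=True (the default) validates that A contains
# only finite numbers before inverting
A_inv = linalg.inv(A, check_finite=True)

print(A_inv)      # OUTPUT: [[-2.   1. ]  [ 1.5 -0.5]]
print(A @ A_inv)  # OUTPUT: approximately the 2x2 identity matrix
```

Multiplying A by its inverse recovers the identity matrix (up to floating-point error), which is a quick sanity check on the result.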

I have observed crashes in linalg.inv() when the check_finite parameter is disabled. When enabled (the default), check_finite verifies that the input contains only finite numbers.

2. scipy.linalg.solve_toeplitz()

Apply this to solve a linear system involving a Toeplitz matrix using the Levinson recursion procedure. I rely on this specific type of matrix when I am interested in solving linear equations with constant coefficients.

Code by the author
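The original gist is not included here. Assuming the functions in play are scipy.linalg.toeplitz() (to build the matrix) and scipy.linalg.solve_toeplitz() (to solve a system via Levinson recursion), a sketch with illustrative values might look like:

```python
import numpy as np
from scipy import linalg

# The first column and first row fully determine a Toeplitz matrix
c = np.array([1.0, 3.0, 6.0, 10.0])    # first column (illustrative)
r = np.array([1.0, -1.0, -2.0, -3.0])  # first row (illustrative)

T = linalg.toeplitz(c, r)
print(T)  # OUTPUT: the 4x4 Toeplitz matrix with constant diagonals

# Solve T x = b by Levinson recursion, without forming T explicitly
b = np.array([1.0, 2.0, 2.0, 5.0])
x = linalg.solve_toeplitz((c, r), b)
print(x)  # OUTPUT: the solution vector x
```

Because a Toeplitz matrix is constant along each diagonal, the column/row pair is all the structure the solver needs.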

To create a Toeplitz matrix, notice above that I only specified the first column and the first row, since that is all SciPy requires: the remaining entries follow from the matrix's constant diagonals.

3. scipy.linalg.eig()

Use this when you have a square matrix and need to compute its eigenvalues and eigenvectors.

Code by the author
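The author's gist is not shown; a minimal sketch of scipy.linalg.eig() on an illustrative diagonal matrix (chosen so the eigenvalues are easy to read off) might look like:

```python
import numpy as np
from scipy import linalg

# A diagonal matrix, whose eigenvalues are its diagonal entries
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

eigenvalues, eigenvectors = linalg.eig(A)

print(eigenvalues)   # OUTPUT: [2.+0.j 3.+0.j]
print(eigenvectors)  # OUTPUT: columns are the unit eigenvectors
```

Note that eig() always returns the eigenvalues as complex numbers; for a real symmetric matrix their imaginary parts are zero.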

The goal here is to analyze how a system changes over time: the eigenvalues measure how quickly it evolves. I have typically used it for evaluating monetary risk, a recurring financial engineering task, while conducting data analysis with deep learning.

4. scipy.linalg.schur()

I decompose a matrix into an upper triangular matrix and a unitary matrix to arrive at the Schur decomposition, which assists with finding the eigenvalues and eigenvectors.

Code by the author
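The gist is omitted here; a minimal sketch of scipy.linalg.schur(), with illustrative values, might look like:

```python
import numpy as np
from scipy import linalg

A = np.array([[0.0, 2.0],
              [2.0, 3.0]])  # illustrative square matrix

# T is (quasi-)upper triangular, Z is unitary, and A = Z @ T @ Z.conj().T
T, Z = linalg.schur(A)

print(T)  # OUTPUT: upper triangular; eigenvalues of A on the diagonal
print(Z)  # OUTPUT: the unitary (here orthogonal) factor
```

Reassembling Z @ T @ Z.conj().T recovers A, which is an easy way to verify the factorization.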

The function returns T and Z: T is the upper triangular matrix and Z is the unitary matrix, such that A = ZTZ*.

5. scipy.linalg.cholesky()

The Cholesky decomposition is a way of decomposing a Hermitian, positive-definite matrix to derive a lower triangular matrix [3].

Code by the author
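The original gist is not reproduced; a minimal sketch of scipy.linalg.cholesky() on an illustrative symmetric positive-definite matrix might look like:

```python
import numpy as np
from scipy import linalg

# A symmetric positive-definite matrix (illustrative values)
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# lower=True returns the lower triangular factor L with A = L @ L.T
# (SciPy returns the upper factor by default)
L = linalg.cholesky(A, lower=True)

print(L)  # OUTPUT: [[2. 0.]  [1. 1.41421356]]
```

The product L @ L.T (or L @ L.conj().T for complex matrices) reconstructs A exactly.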

This factorization takes advantage of the fact that if A is a Hermitian, positive-definite matrix [4], there exists a lower triangular matrix L such that:

A = LL*

where * denotes conjugate transpose.

6. scipy.linalg.polar()

A polar decomposition is a factorization of a matrix into two matrices: (1) a unitary matrix and (2) a Hermitian positive-definite matrix.

Code by the author
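The gist is not included here; a minimal sketch of scipy.linalg.polar(), with illustrative values, might look like:

```python
import numpy as np
from scipy import linalg

A = np.array([[1.0, -1.0],
              [2.0,  4.0]])  # illustrative matrix

# A = U @ P, with U unitary (a rotation) and
# P Hermitian positive semidefinite (a stretch)
U, P = linalg.polar(A)

print(U)  # OUTPUT: the unitary factor
print(P)  # OUTPUT: the symmetric positive semidefinite factor
```

By default polar() computes the right-sided decomposition A = UP; pass side='left' for A = PU.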

This approach is essential in linear algebra because it separates any matrix into a rotation and a stretch, and because the unitary factor is the closest unitary matrix to the original, it is also used to find approximations for certain types of matrices.

Daily, I employ it in artificial intelligence tasks like machine learning implementation pipelines and pattern recognition actions.

7. scipy.linalg.solve_sylvester()

We employ this to solve the Sylvester equation, AX + XB = Q. In artificial intelligence tasks (in my case), I use it to compute minimum-cost paths [6] between nodes in a network.

Code by the author
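The author's gist is omitted; a minimal sketch of scipy.linalg.solve_sylvester(), with illustrative matrices chosen so the solution is easy to verify, might look like:

```python
import numpy as np
from scipy import linalg

# Solve the Sylvester equation A X + X B = Q (illustrative values)
A = np.array([[3.0, 0.0],
              [0.0, 2.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])
Q = np.array([[4.0, 8.0],
              [9.0, 6.0]])

X = linalg.solve_sylvester(A, B, Q)

print(X)  # OUTPUT: [[1. 2.]  [3. 2.]]
```

Substituting X back in, A @ X + X @ B reproduces Q, confirming the solution.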

As an illustration, my use case for integrating this function is for detecting and correcting errors in data sets when I am building natural language processing implementation pipelines or training machine learning models (e.g., support vector machines and principal component analysis) for predictive analytics.

8. scipy.linalg.leslie()

A Leslie matrix captures the rates of change between different age groups in a population over time. It was originally developed to describe human populations.

Code by the author
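The gist is not reproduced here; a minimal sketch of scipy.linalg.leslie(), with illustrative fecundity and survival rates, might look like:

```python
import numpy as np
from scipy import linalg

# Fecundity of each age group and survival rates between groups
f = [0.1, 2.0, 1.0, 0.1]  # offspring per individual per age group
s = [0.2, 0.8, 0.7]       # fraction surviving into the next age group

L = linalg.leslie(f, s)
print(L)
# OUTPUT:
# [[0.1 2.  1.  0.1]
#  [0.2 0.  0.  0. ]
#  [0.  0.8 0.  0. ]
#  [0.  0.  0.7 0. ]]

# Project a population vector one time step forward
population = np.array([100.0, 60.0, 40.0, 20.0])
print(L @ population)  # OUTPUT: [172. 20. 48. 28.]
```

The fecundity rates form the first row and the survival rates the subdiagonal; repeatedly multiplying by L projects the age distribution forward in time.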

I use it to conduct predictive analysis of population change over time, and I use that information to develop data-informed insights about specific products or services for users, segmented (at a minimum) by age group.

Please share your thoughts if you have edits to recommend for this post, or suggestions for expanding this topic further.

Also, I regularly post about linear algebra on Medium. If you would like to read more, I recommend you review the following post, “Linear Algebra for AI: NLP and ML Use Cases Simply Explained”: [2].

Please consider subscribing to my newsletter.

1. SciPy. scipy/scipy: SciPy library main repository. GitHub.

2. Tilbe, A. (2022, July 24). Linear algebra for AI: NLP and ML use cases simply explained. Medium.

3. Weisstein, E. W. Conjugate transpose. Wolfram MathWorld.

4. Positive definite completions of partial Hermitian matrices. Linear Algebra and Its Applications, 58, 109–124.

5. Horn, R. A., & Johnson, C. R. (1985). Matrix analysis. Cambridge University Press.

6. A recurrent neural network for solving Sylvester equation with time-varying coefficients. IEEE Xplore.

7. Rachidi et al. On the Leslie matrices, Fibonacci sequences and population dynamics.
