BigO and Beyond: How to Compute Fibonacci Sequence Efficiently with Matrix Exponentiation

Rob Wilson
May 11, 2023


This article continues from my previous topic talking about dynamic programming. I encourage you to read that first to get the full context of what we are discussing today!

First of all, I can’t promise that this knowledge will make you see the world in vertical lines of green code; however, in this article we are going to cover BigO notation in more detail and then use it to drive us towards the most efficient way to find the nth number of the Fibonacci sequence: matrix exponentiation.

Matrix exponentiation is a powerful technique in mathematics and computer science for quickly computing large powers of a square matrix, with applications in fields such as physics, economics, cryptography, and machine learning. In this article, we will explore the technique and demonstrate how it can be used to calculate the nth Fibonacci number in O(log n) time, a significant improvement over the naive recursive approach we saw in the last article, which has a time complexity of O(2^n), and more performant still than the dynamic programming approach, which has a time and space complexity of O(n).

A primer on BigO

I briefly introduced BigO notation in my previous article but I want to go over it again here. BigO notation is a way for us to objectively describe the time or space complexity of an algorithm. In other words, it is a measure of how much time and resources it takes to solve a problem of a certain size.

The letter “O” stands for “order of” and is used to describe the upper bound, or worst-case scenario, of an algorithm’s time complexity. The expression inside the parentheses describes how the running time of the algorithm scales as the size of the input increases.

If we had a program or algorithm that always took 10 steps to produce its result, no matter the input, we would say it runs in constant time and write its BigO as O(10). If the algorithm always took 100 steps regardless of the input, we would similarly give it a BigO of O(100).

In BigO notation, we drop the constants and lower-order terms from the running time of the algorithm, as they become insignificant for large input sizes. This means that O(10) and O(100) both simplify to O(1): constant running time, regardless of the input size. This rule of generalisation applies consistently across BigO notation.
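As an illustration (this helper is purely for demonstration, not from any library), an algorithm that makes two passes over its input takes roughly 2n steps, yet we drop the constant and still call it O(n):

// Two passes over the array: ~2n steps, but constants are dropped, so O(n)
const sumAndMax = (nums: number[]): [number, number] => {
  let sum = 0
  for (const n of nums) sum += n // first pass: n steps

  let max = -Infinity
  for (const n of nums) max = Math.max(max, n) // second pass: n more steps

  return [sum, max]
}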

[Figure: a line chart of common BigO curves. Exponential curves are bad, linear is fair, and logarithmic and constant are good. Source: https://www.bigocheatsheet.com/]

For example, consider an algorithm that simply prints out the first element of an array. The running time of this algorithm is constant, or O(1), because it does not depend on the size of the array. Even if the array has a million elements, the algorithm will still take the same amount of time to locate the first element.
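To make that concrete (this helper is purely illustrative), grabbing the first element is a single step no matter the array’s size:

// O(1): one step whether the array has 4 elements or 4 million
const firstElement = (items: string[]): string | undefined => items[0]

firstElement(["Dog", "Cat", "Fish", "Rock"]) // "Dog"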

Next up is linear time. Let’s say we have an algorithm that performs a simple operation on each item in a list of size n. The running time of this algorithm would be proportional to the number of items in the list, or O(n).

Consider the following example:

const arrayOfPets = ["Dog", "Cat", "Fish", "Rock"]
const hasPet = (pet: string): boolean => arrayOfPets.includes(pet)

In this example, the includes method on the array is considered an algorithm with O(n) running time, as in the worst case it must examine all n elements of the array to determine whether the given input pet is contained within it. Now, you could get lucky: if the input happened to be “Dog” it would only take a single step. However, BigO is concerned with the worst-case scenario. As an aside, there is a notation for the best case, called Big Omega, and this method has a Ω(1) running time.

Finally, an algorithm with a BigO notation of O(log n) is considered very fast because its running time scales logarithmically with the input size, while an algorithm with a BigO notation of O(2^n) is considered slow for large inputs, as the number of steps grows exponentially as the input grows. There are other notations, but they are beyond the scope of this article. A classic example of an O(log n) algorithm is binary search, sketched below.
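A minimal sketch (this helper is my own illustration, not code from the previous article). Binary search halves the remaining search space on every comparison, so even a million-element sorted array needs at most around 20 steps:

const binarySearch = (sorted: number[], target: number): number => {
  let low = 0
  let high = sorted.length - 1

  while (low <= high) {
    const mid = Math.floor((low + high) / 2)
    if (sorted[mid] === target) return mid // found it
    if (sorted[mid] < target) low = mid + 1 // discard the lower half
    else high = mid - 1 // discard the upper half
  }
  return -1 // not present
}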

Optimising our Fibonacci Function

Let’s remind ourselves of our two implementations to get the nth number of the Fibonacci sequence and show their BigO notations:

// O(2^n) - Exponentially increases its running time as `num` increases
const fib = (num: number): number => {
  if (num === 0 || num === 1) return 1

  const res = fib(num - 1) + fib(num - 2)

  return res
}

// O(n) - Linearly increases its running time as `num` increases
const fibMemo = (num: number, memo: Record<number, number> = {}): number => {
  if (num === 0 || num === 1) return 1

  if (memo[num]) {
    return memo[num]
  }

  const res = fibMemo(num - 1, memo) + fibMemo(num - 2, memo)

  memo[num] = res
  return res
}

It turns out we can achieve the same result with an O(log n) algorithm by utilising matrix exponentiation. I ran a benchmark, and the graph clearly agrees with the theory, mapping neatly onto exponential, linear and logarithmic curves. As we increase the input, the matrix version eventually reaches a steady cost of ~20 steps no matter how much further we increase the number num.

I eventually went up to an input of 1000, but because that would take literally billions of years for the baseline algorithm to run, I only ran it on the memoized and matrix versions. The result, for those interested, was 7.0330367711422765e+208, so it’s a fairly large number to put it mildly. (At this magnitude JavaScript’s floating-point numbers are only approximate; you would need BigInt for an exact answer.)
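For the curious, a minimal sketch of what such a benchmark harness might look like (this is an illustrative simplification, not the exact harness behind the numbers above; fibLog is the matrix-based implementation shown in the next section):

// Times a Fibonacci implementation on a single input and logs the result.
// performance.now() is available globally in modern Node and browsers.
const benchmark = (label: string, fn: (n: number) => number, n: number): void => {
  const start = performance.now()
  const result = fn(n)
  const elapsed = performance.now() - start
  console.log(`${label}: fib(${n}) = ${result} in ${elapsed.toFixed(3)}ms`)
}

benchmark("memoized", fibMemo, 1000)
benchmark("matrix", fibLog, 1000)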

Let’s take a look at the code for this O(log n) implementation:

const fibLog = (n: number): number => {
  if (n === 0 || n === 1) return 1;

  const matrix = [
    [1, 1],
    [1, 0],
  ];

  // Raising the matrix to the nth power puts fib(n) in its top-left cell,
  // using the same fib(0) = fib(1) = 1 convention as the versions above.
  const result = matrixPower(matrix, n);
  return result[0][0];
};

const matrixPower = (matrix: number[][], power: number): number[][] => {
  if (power === 1) return matrix;

  if (power % 2 === 0) {
    // Even power: compute the half power once and square it.
    const half = matrixPower(matrix, power / 2);
    return multiplyMatrices(half, half);
  }

  // Odd power: square the floored half power, then multiply by the
  // original matrix once more.
  const half = matrixPower(matrix, Math.floor(power / 2));
  const full = multiplyMatrices(half, half);
  return multiplyMatrices(full, matrix);
};

const multiplyMatrices = (
  matrixA: number[][],
  matrixB: number[][]
): number[][] => {
  const rows = matrixA.length;
  const cols = matrixA[0].length;

  // The result is hardcoded to 2x2, since that is all we need here.
  const result = [
    [0, 0],
    [0, 0],
  ];

  for (let row = 0; row < rows; row++) {
    for (let col = 0; col < cols; col++) {
      // i walks the shared dimension, accumulating the dot product of
      // matrixA's row and matrixB's column.
      for (let i = 0; i < cols; i++) {
        result[row][col] += matrixA[row][i] * matrixB[i][col];
      }
    }
  }
  return result;
};

As we can see right off the bat, the implementation is far more complex and a lot more difficult to visualise and reason about in our mental model. There’s even a triple-nested for loop, so how is that performant!? Let’s break it down.

Making Sense of the Matrix

The idea here is to represent the Fibonacci sequence using a matrix and then use matrix exponentiation to compute the nth Fibonacci number. A matrix can be represented as a two-dimensional array, that is, an array of arrays. In this case the outer array contains 2 inner arrays that are each 2 elements in length, like so:

const matrix = [
  [1, 1],
  [1, 0],
];

Somewhat anti-climactic, I know. I was hoping for at least some new kung-fu knowledge, but alas, it’s just an array. When looking at this matrix as a grid, we can see it has 2 rows and 2 columns. This specific matrix represents the coefficients of the Fibonacci recurrence; in other words, it describes how to calculate the next number in the sequence based on the previous two numbers. When the matrix is raised to a power, the first row corresponds to f(n+1) and f(n), while the second row corresponds to f(n) and f(n-1), as the sketch below shows.
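We can convince ourselves of this by taking successive powers of the matrix with the multiplyMatrices function defined above:

const M = [
  [1, 1],
  [1, 0],
];

console.log(multiplyMatrices(M, M));
// [[2, 1], [1, 1]], i.e. [[fib(2), fib(1)], [fib(1), fib(0)]]

console.log(multiplyMatrices(multiplyMatrices(M, M), M));
// [[3, 2], [2, 1]], i.e. [[fib(3), fib(2)], [fib(2), fib(1)]]

Each extra power shifts the sequence 1, 1, 2, 3, 5, 8, … one place along the grid.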

In this implementation, we use the matrixPower function to compute the nth power of the Fibonacci matrix, and the multiplyMatrices function to multiply two matrices. Each multiplication of 2x2 matrices takes a constant number of operations, and matrixPower performs only O(log n) of them, so thanks to the rule of generalisation the overall time complexity of fibLog is O(log n).

Taking a closer look at the matrixPower function: it takes two inputs, a matrix and a power, and returns the matrix raised to that power. It works recursively by first checking our base case: if the power is 1, it just returns the matrix. If the power is even, it recursively computes the half power and multiplies that by itself. If the power is odd, it recursively computes the floored half power, multiplies that by itself, and then multiplies the result by the original matrix once more. For our Fibonacci matrix raised to the nth power, the resultant matrix takes the form:

const result = [
  [fib(n),     fib(n - 1)],
  [fib(n - 1), fib(n - 2)],
]

Therefore we can extract the nth Fibonacci number by taking result[0][0] (or, equivalently, result[1][0] + result[1][1]).
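To see the halving in action, here is how matrixPower unwinds for a power of 5:

// matrixPower(M, 5)            5 is odd
//   -> matrixPower(M, 2)       2 is even
//        -> matrixPower(M, 1)  base case, returns M
//        -> M^2 = M * M
//   -> M^4 = M^2 * M^2
//   -> M^5 = M^4 * M
// Three multiplications instead of four, and the saving grows with n.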

Diving deeper into multiplyMatrices, this function takes two matrices and returns their product. It works by iterating over each row of the first matrix and each column of the second matrix, multiplying the corresponding elements and summing the results to get each element of the resulting matrix. row and col represent the row and column of the resulting matrix, while i represents the shared dimension between the two matrices. When using matrix exponentiation, the multiplyMatrices function is called O(log n) times, where n is the index of the Fibonacci number we want.
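For a concrete trace of a single element, here is how result[0][0] is built up when squaring the Fibonacci matrix:

// result[0][0] = matrixA[0][0] * matrixB[0][0] + matrixA[0][1] * matrixB[1][0]
//              = 1 * 1 + 1 * 1
//              = 2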

Final Thoughts

We’ve explored the concept of BigO notation in more detail, which helps us objectively describe the time or space complexity of an algorithm. We’ve seen how matrix exponentiation can be a powerful technique for efficiently computing large powers of a square matrix, and specifically we’ve demonstrated how it can be used to calculate the nth Fibonacci number in O(log n) time, a significant improvement over the naive recursive approach and more performant still than the dynamic programming approach we discussed previously.

By optimising our Fibonacci function with matrix exponentiation, we can compute the nth Fibonacci number in just a handful of steps regardless of how large n is. This approach can serve as a template for designing and optimising algorithms in other areas. With the knowledge gained in this article, you can now explore and apply matrix exponentiation to problems that cross your path.
