algorithm to multiply matrices
Because matrix multiplication is such a central operation in many numerical algorithms, much work has been invested in making matrix multiplication algorithms efficient. Applications of matrix multiplication in computational problems are found … Wikipedia
🌐
Wikipedia
en.wikipedia.org › wiki › Matrix_multiplication_algorithm
Matrix multiplication algorithm - Wikipedia
3 days ago - Directly applying the mathematical definition of matrix multiplication gives an algorithm that takes time on the order of n³ field operations to multiply two n × n matrices over that field (Θ(n³) in big O notation). Better asymptotic bounds on the time required to multiply matrices have been known since Strassen's algorithm in the 1960s, but the optimal time (that is, the computational complexity ...
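For reference, the definition-based count behind that Θ(n³) figure, written out (a standard derivation, not quoted from the article): the product entries are

$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}, \qquad 1 \le i, j \le n,$$

so each of the n² output entries needs n multiplications and n − 1 additions, i.e. n³ multiplications and n²(n − 1) additions in total, which is Θ(n³) field operations.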
🌐
Wikipedia
en.wikipedia.org › wiki › Computational_complexity_of_matrix_multiplication
Computational complexity of matrix multiplication - Wikipedia
January 12, 2026 - The optimal number of field operations ... As of January 2024, the best bound on the asymptotic complexity of a matrix multiplication algorithm is $O(n^{2.371339})$....
Top answer
1 of 2
16

The space usage is at most $O(n^2)$ for all Strassen-like algorithms (i.e. those based on upper bounding the rank of matrix multiplication algebraically). See Space complexity of Coppersmith–Winograd algorithm

However, I realized in my previous answer that I did not explain why the space usage is $O(n^2)$, so here goes something hand-wavy. Consider what a Strassen-like algorithm does. It starts from a fixed algorithm for multiplying $n_0 \times n_0$ matrices that uses $K$ multiplications, for some constants $n_0$ and $K$. In particular, this algorithm (whatever it is) can WLOG be written so that:

  1. It computes $K$ different matrices $A_1, \dots, A_K$, each formed by multiplying entries of the first matrix $A$ by various scalars, and $K$ matrices $B_1, \dots, B_K$ from the second matrix $B$ of a similar form,

  2. It multiplies those linear combinations, forming the products $P_i = A_i B_i$, then

  3. It multiplies the entries of each $P_i$ by various scalars, then adds all these matrices up entrywise to obtain the product $C$.

(This is a so-called "bilinear" algorithm, but it turns out that every "algebraic" matrix multiplication algorithm can be written in this way.) For each $i$, this algorithm only has to store the current product $P_i$ and the current value of $C$ (initially set to all-zeroes) in memory at any given point, so the space usage is $O(n_0^2)$.

Given this finite algorithm, it is then extended to arbitrary $n \times n$ matrices, by breaking the large matrices into blocks of dimensions $n/n_0 \times n/n_0$, applying the finite algorithm to the block matrices, and recursively calling the algorithm whenever it needs to multiply two blocks. At each level of recursion, we need to keep only $O(n^2)$ field elements in memory (storing $O(1)$ different matrices). Assuming the space usage for multiplying $n/n_0 \times n/n_0$ matrices is $S(n/n_0)$, the space usage of this recursive algorithm is $S(n) \le O(n^2) + S(n/n_0)$, which for constant $n_0 > 1$ solves to $S(n) = O(n^2)$.
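As a concrete illustration of the recursion described above, here is a minimal Python sketch of classic Strassen multiplication (the $n_0 = 2$, $K = 7$ case) for power-of-two sizes; it is only one example of a Strassen-like algorithm, not the general construction the answer abstracts over.

# Strassen's algorithm for n x n matrices, n a power of two.  At each level of
# the recursion we hold only O(1) matrices of size n/2 x n/2 (the seven block
# products plus the block slices and partial result), so the extra space obeys
# S(n) <= c*n^2 + S(n/2) = O(n^2), matching the argument above.
def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split A and B into four h x h blocks each.
    A11 = [row[:h] for row in A[:h]]; A12 = [row[h:] for row in A[:h]]
    A21 = [row[:h] for row in A[h:]]; A22 = [row[h:] for row in A[h:]]
    B11 = [row[:h] for row in B[:h]]; B12 = [row[h:] for row in B[:h]]
    B21 = [row[:h] for row in B[h:]]; B22 = [row[h:] for row in B[h:]]
    # The seven bilinear products P_i = A_i * B_i.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Entrywise recombination into the blocks of C.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    return [C11[i] + C12[i] for i in range(h)] + [C21[i] + C22[i] for i in range(h)]

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]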

2 of 2
4

More generally, fast matrix multiplication can be distributed over many processors with only a modest amount of memory per processor. However, the communication between processors is then suboptimal. Optimal communication can be achieved by using more memory. As far as I know, it is not known whether optimal communication and optimal memory can be achieved simultaneously. Details are in http://dx.doi.org/10.1007/PL00008264
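For illustration only, a naive row-block distribution in Python (nothing like the communication-optimal scheme in the linked paper): each worker receives a slice of A plus a full copy of B, which makes the per-processor memory and the communication visibly suboptimal, as the answer notes.

# Split A into block rows and multiply each block by B in a separate process.
# Every worker stores len(a)/p rows of A *and* all of B, so memory per
# processor is roughly n^2/p + n^2 rather than anything close to n^2/p.
from concurrent.futures import ProcessPoolExecutor

def _block_row_product(args):
    a_rows, b = args
    cols, inner = len(b[0]), len(b)
    return [[sum(row[k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for row in a_rows]

def parallel_matmul(a, b, p=2):
    chunk = (len(a) + p - 1) // p
    tasks = [(a[i:i + chunk], b) for i in range(0, len(a), chunk)]
    with ProcessPoolExecutor(max_workers=p) as pool:
        blocks = pool.map(_block_row_product, tasks)
    return [row for block in blocks for row in block]

if __name__ == "__main__":
    A = [[1, 2], [3, 4], [5, 6], [7, 8]]
    B = [[1, 0], [0, 1]]
    print(parallel_matmul(A, B, p=2))  # [[1, 2], [3, 4], [5, 6], [7, 8]]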

🌐
Baeldung
baeldung.com › home › core concepts › math and logic › matrix multiplication algorithm time complexity
Matrix Multiplication Algorithm Time Complexity | Baeldung on Computer Science
March 18, 2024 - The naive matrix multiplication algorithm contains three nested loops. For each iteration of the outer loop, the inner loops run a number of times equal to the length of the matrix. Here, integer operations take O(1) time. In general, if the length of the matrix is n, the total time complexity would be O(n³).
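A small Python version of that triple loop, with an explicit counter to make the n³ figure visible (the helper name is mine, not Baeldung's):

# Naive triple-loop multiplication of two n x n matrices; the counter confirms
# exactly n**3 scalar multiplications are performed.
def naive_matmul(a, b):
    n = len(a)
    c = [[0] * n for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
                mults += 1
    return c, mults

c, mults = naive_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(c, mults)  # [[19, 22], [43, 50]] 8   (and 8 == 2**3)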
🌐
ACM Digital Library
dl.acm.org › doi › fullHtml › 10.1145 › 3631908.3631910
Large Matrix Multiplication Algorithms: Analysis and Comparison
This arises from the need to iterate through the rows, columns, and elements of the matrices. Space Efficiency: The standard algorithm does not require additional memory, giving it an advantage in terms of space efficiency over algorithms that require auxiliary storage.
Top answer
1 of 1
4

The "naive" matrix multiplication for involves multiplying and adding terms for each of entries in . So the complexity is . And then multiplying this matrix by requires multiplying and adding terms for each of entries. So the total complexity is . (EDIT: This conclusion was incorrect and based on a silly arithmetic mistake that I made. The correct answer, as explained in the comments, is . Many thanks to @HoldenLee and @Rami Zouari for catching this.)

However, this may not be the optimal algorithm. Unfortunately, there is very little information online about the efficiency of non-square matrix multiplication. If all three matrices were square, then the fastest known algorithm for multiplying two of them has complexity $\approx O(N^{2.3729})$; multiplying three matrices then takes two such multiplications, so its complexity is still $\approx O(N^{2.3729})$. If the matrices have dimensions that are multiples of each other (or close to multiples), then we can use the square algorithms and block multiplication to speed up the implementation.

I managed to find a paper from 2012 which gets better-than-naive results for multiplying $N \times M$ by $M \times N$ matrices, for those values $M < N^{0.30298}$. Assuming that one of the dimensions involved is small enough to fall into that regime, you could perform one multiplication naively and then use the results of the paper to perform the other multiplication quickly. Depending on the exact values in question, that might get you a result that is better by some factor. But in general, I don't think that there are any known algorithms for your specific case that get any better than the naive bound.
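The naive cost arithmetic this answer relies on can be checked in a few lines of Python; the dimensions a, b, c, d below are generic placeholders, since the question's exact dimensions are not shown here.

# Naive multiplication of an (a x b) matrix by a (b x c) matrix costs a*b*c
# scalar multiplications, so for a chain (a x b)(b x c)(c x d) the two
# parenthesizations can differ substantially.
def cost_left(a, b, c, d):   # (XY)Z
    return a * b * c + a * c * d

def cost_right(a, b, c, d):  # X(YZ)
    return b * c * d + a * b * d

print(cost_left(10, 100, 5, 50))   # 7500
print(cost_right(10, 100, 5, 50))  # 75000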

🌐
HeyCoach Blog
heycoach.in › blog › space-complexity-of-matrix-multiplication
Space Complexity Of Matrix Multiplication
January 16, 2025 - In-Place Multiplication: There are advanced algorithms that can perform matrix multiplication in-place, but they’re like trying to fit a king-sized bed into a tiny studio apartment! Memory Usage: The total space complexity can be expressed as O(m * n + n * p + m * p), which simplifies to ...
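As a quick worked instance of that formula (numbers mine, not from the article): for an m × n matrix A, an n × p matrix B and their m × p product C, the stored entries number mn + np + mp; with m = n = p this is 3n², i.e. O(n²) space for square inputs.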
🌐
ScienceDirect
sciencedirect.com › science › article › pii › S0885064X02000079
On the complexity of the multiplication of matrices of small formats - ScienceDirect
December 21, 2002 - The currently best upper bound for 4×4-matrix multiplication follows by applying Strassen's algorithm two times. This yields the upper bound 49. Any improvement of this result immediately yields an algorithm with less than $O(n^{\log_2 7})$ arithmetic operations. Investigating the bilinear complexity of the multiplication of matrices of some small format is an interesting and challenging problem, see e.g.
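The arithmetic behind that snippet, spelled out (standard reasoning, not quoted from the article): Strassen multiplies 2 × 2 block matrices with 7 multiplications, so applying it twice handles 4 × 4 matrices with $7^2 = 49$ multiplications; a bilinear algorithm for 4 × 4 matrices using $t < 49$ multiplications would give an exponent of $\log_4 t < \log_4 49 = \log_2 7 \approx 2.807$, hence fewer than $O(n^{\log_2 7})$ arithmetic operations.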
🌐
University of Auckland
cs.auckland.ac.nz › software › AlgAnim › mat_chain.html
Data Structures and Algorithms: Matrix Chain Multiplication
one containing the index of last ... each of the O(n²) costs and entries in the best matrix for an overall complexity of O(n³) time at a cost of O(n²) space....
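For context, here is a compact Python sketch of the textbook matrix-chain dynamic program that page describes, with O(n³) time and O(n²) space; it is the standard algorithm, not the Auckland page's own code.

# dims[i], dims[i+1] are the dimensions of matrix i (0-indexed); m[i][j] is the
# minimum number of scalar multiplications needed to compute the product of
# matrices i..j, and split[i][j] records the best last split point.
def matrix_chain_order(dims):
    n = len(dims) - 1                    # number of matrices in the chain
    m = [[0] * n for _ in range(n)]
    split = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):       # chain length
        for i in range(n - length + 1):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                cost = m[i][k] + m[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                if cost < m[i][j]:
                    m[i][j], split[i][j] = cost, k
    return m[0][n - 1], split

best, _ = matrix_chain_order([10, 100, 5, 50])
print(best)  # 7500: multiply the first two matrices first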
🌐
arXiv
arxiv.org › abs › 2309.06317
[2309.06317] The Time Complexity of Fully Sparse Matrix Multiplication
September 12, 2023 - Our main contribution is a new algorithm that reduces sparse matrix multiplication to dense (but smaller) rectangular matrix multiplication. Our running time thus depends on the optimal exponent $\omega(a,b,c)$ of multiplying dense $n^a\times n^b$ by $n^b\times n^c$ matrices. We discover that when $m_{out}=\Theta(m_{in}^r)$ the time complexity of sparse matrix multiplication is $O(m_{in}^{\sigma+\epsilon})$, for all $\epsilon > 0$, where $\sigma$ is the solution to the equation $\omega(\sigma-1,2-\sigma,1+r-\sigma)=\sigma$. No matter what $\omega(\cdot,\cdot,\cdot)$ turns out to be, and for all $r\in(0,2)$, the new bound beats the state of the art, and we provide evidence that it is optimal based on the complexity of the all-edge triangle problem.
🌐
IJERT
ijert.org › optimizing-the-complexity-of-matrix-multiplication-algorithm
Optimizing the Complexity of Matrix Multiplication Algorithm – IJERT
April 24, 2018 - Also, parallel computations [8] help to reduce the time and space complexity of the algorithm. DCT using Strassen's algorithm provides faster output than the naïve approach. ... The mathematical definition of matrix multiplication algorithm [4] states that if C = AB for an n×m matrix A and m×p
🌐
Wikipedia
en.wikipedia.org › wiki › Computational_complexity_of_mathematical_operations
Computational complexity of mathematical operations - Wikipedia
3 weeks ago - The following complexity figures assume that arithmetic with individual elements has complexity O(1), as is the case with fixed-precision floating-point arithmetic or operations on a finite field. In 2005, Henry Cohn, Robert Kleinberg, Balázs Szegedy, and Chris Umans showed that either of two different conjectures would imply that the exponent of matrix multiplication is 2.
🌐
Cornell
courses.cs.cornell.edu › cs6810 › 2023fa › Matrix.pdf pdf
Computational Complexity of Matrix Multiplication Andy He and Evan Williams
Then the elementary formula for matrix multiplication C = AB can be ... Proposition 2.1 Let n be a positive integer. Suppose that there exists a bilinear algorithm that computes the product of two n × n matrices with bilinear complexity t. Then, ... To prove this proposition we will need ...
🌐
University of Colorado Boulder
home.cs.colorado.edu › ~srirams › courses › csci3104-spr15 › lectures › l2Note.pdf pdf
CSCI 3104: Algorithms, Lecture 2 Topics Covered: Analysis of Algorithms ◦
Multiplication Algorithm
# Python Code
def matrixMultiply(a,b):
    for i in range(0,m):
        for j in range(0,p):
            c[i][j] = 0
            for k in range(0,n):
                c[i][j] = c[i][j] + a[i][k]*b[k][j]
    return c
1. Does it work? Correctness Proof
2. How long does it take to execute on inputs of size...? Running Time Complexity
3. How much memory does it require? Space Complexity
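The code in that snippet leaves m, n, p and c undefined (they come from earlier in the lecture); a runnable reconstruction (my fill-in, not the lecture's verbatim code) looks like this:

# Triple-loop matrixMultiply reconstructed from the slide: a is m x n,
# b is n x p, and the result c is m x p.
def matrixMultiply(a, b):
    m, n, p = len(a), len(b), len(b[0])
    c = [[0] * p for _ in range(m)]
    for i in range(0, m):
        for j in range(0, p):
            c[i][j] = 0
            for k in range(0, n):
                c[i][j] = c[i][j] + a[i][k] * b[k][j]
    return c

print(matrixMultiply([[1, 2, 3], [4, 5, 6]], [[7], [8], [9]]))  # [[50], [122]]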