NumPy
numpy.org › doc › 2.3 › reference › generated › numpy.matmul.html
numpy.matmul — NumPy v2.3 Manual
numpy.matmul(x1, x2, /, out=None, *, casting='same_kind', order='K', dtype=None, subok=True[, signature, axes, axis]) = <ufunc 'matmul'>#
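As a quick illustration of the signature above, a minimal NumPy sketch multiplying two 2×2 matrices (values chosen arbitrarily for the example):

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])

# np.matmul(a, b) is equivalent to the @ operator for arrays.
# Entry (0, 0) is 1*5 + 2*7 = 19.
c = np.matmul(a, b)
```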
PyTorch
docs.pytorch.org › reference api › torch.matmul
torch.matmul — PyTorch 2.10 documentation
January 1, 2023 - torch.matmul(input, other, *, out=None) → Tensor# Matrix product of two tensors. The behavior depends on the dimensionality of the tensors as follows: If both tensors are 1-dimensional, the dot product (scalar) is returned.
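numpy.matmul follows the same dimensionality rules the snippet describes for torch.matmul, so a NumPy sketch (standing in for torch here, to avoid a torch dependency) shows the 1-D and mixed-dimension cases:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

# Two 1-D inputs: matmul returns the dot product as a 0-d scalar.
s = np.matmul(v, w)   # 1*4 + 2*5 + 3*6 = 32

m = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# 2-D x 1-D: the vector is treated as a column; the result is 1-D.
y = np.matmul(m, v)
```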
Discussions

Apple adds matmul acceleration to A19 Pro GPU
I think M5 will have it as well, since M5 will be based on A19 (right??).
r/LocalLLaMA
September 9, 2025
numpy.matmul() outputting unexpected array
You actually made a matrix consisting of a single column by accident. Think about it, cos(30)-sin(30) is just a constant. I think you meant to write this: rotation = np.array([ [cos(radians(30)), -sin(radians(30))], [sin(radians(30)), cos(radians(30))] ])
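The corrected rotation matrix from that answer can be checked end to end: rotating the unit x-vector by 90° should land on the unit y-vector, up to float rounding. A sketch using NumPy's own cos/sin/radians (a hypothetical helper name, not from the thread):

```python
import numpy as np

def rotation(deg):
    """2-D counter-clockwise rotation matrix for an angle in degrees."""
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# Rotating (1, 0) by 90 degrees gives (0, 1), up to rounding.
v = np.matmul(rotation(90), np.array([1.0, 0.0]))
```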
r/learnprogramming
June 5, 2021
SIGARCH
sigarch.org › dont-put-all-your-tensors-in-one-basket-hardware-lottery
All in on MatMul? Don’t Put All Your Tensors in One Basket! | SIGARCH
October 13, 2025 - As Sara Hooker puts it, “we may be in the midst of a present-day hardware lottery”. The catch? Modern chips zero in on DNNs’ commercial sweet spots. They are exceptionally good at cranking through heavy-duty MatMuls and the garnish ops, such as non-linear functions, that keep you from getting hit by Amdahl’s Law.
Modular
modular.com › blog › matrix-multiplication-on-nvidias-blackwell-part-1-introduction
Modular: Matrix Multiplication on Blackwell: Part 1 - Introduction
January 8, 2026 - All LLMs, be it Meta's Llama, Alibaba's Qwen, DeepSeek, Anthropic's Claude, OpenAI's ChatGPT, or Google's Gemini, utilize matrix multiplications at their core. These matmuls might be disguised under multiple names, for instance, the Multi-Layer ...
Wikipedia
en.wikipedia.org › wiki › Matrix_multiplication
Matrix multiplication - Wikipedia
January 27, 2026 - In mathematics, specifically in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in ...
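The shape rule in the definition above (columns of the first matrix must equal rows of the second) is easy to demonstrate; a minimal sketch with arbitrary shapes:

```python
import numpy as np

a = np.ones((2, 3))   # 2 rows, 3 columns
b = np.ones((3, 4))   # 3 rows, 4 columns

# Inner dimensions match (3 == 3), so the product is defined: shape (2, 4).
c = np.matmul(a, b)

# Swapping the operands makes the inner dimensions 4 and 2, which do not
# match, so np.matmul(b, a) raises a ValueError.
```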
PyMC
pymc.io › projects › docs › en › latest › api › generated › pymc.math.matmul.html
pymc.math.matmul — PyMC dev documentation
pymc.math.matmul(x1, x2, dtype=None)[source]# Compute the matrix product of two tensor variables. Parameters: x1, x2 · Input arrays, scalars not allowed. dtype · The desired data-type for the array. If not given, then the type will be determined as the minimum type required to hold the objects ...
Codecademy
codecademy.com › article › numpy-matrix-multiplication-a-beginners-guide
NumPy Dot Product and Matrix Multiplication: Complete Guide | Codecademy
This demonstrates the same matrix multiplication process as np.matmul(), where rows from the first matrix are combined with columns from the second matrix using dot products to create each element in the result matrix.
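The row-by-column process the snippet describes can be written out explicitly: each result entry (i, j) is the dot product of row i of the first matrix with column j of the second, and the element-wise construction agrees with np.matmul:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])

# Build the product element by element: entry (i, j) is the dot product
# of row i of `a` with column j of `b`.
manual = np.array([[np.dot(a[i], b[:, j]) for j in range(2)]
                   for i in range(2)])

assert np.array_equal(manual, np.matmul(a, b))
```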
arXiv
arxiv.org › abs › 2406.02528
[2406.02528] Scalable MatMul-free Language Modeling
July 25, 2025 - However, these models pose substantial computational and memory challenges, primarily due to the reliance on matrix multiplication (MatMul) within their attention and feed-forward (FFN) layers.
GNU
gcc.gnu.org › onlinedocs › gfortran › MATMUL.html
MATMUL (The GNU Fortran Compiler)
RESULT = MATMUL(MATRIX_A, MATRIX_B) Description: Performs a matrix multiplication on numeric or logical arguments. Class: Transformational function · Arguments: Return value: The matrix product of MATRIX_A and MATRIX_B. The type and kind of the result follow the usual type and kind promotion ...
Openvino
docs.openvino.ai › 2025 › documentation › openvino-ir-format › operation-sets › operation-specs › matrix › matmul-1.html
MatMul — OpenVINO™ documentation
<layer ... type="MatMul">
    <input>
        <port id="0">
            <dim>10</dim>
            <dim>1024</dim>
        </port>
        <port id="1">
            <dim>1024</dim>
            <dim>1000</dim>
        </port>
    </input>
    <output>
        <port id="2">
            <dim>10</dim>
            <dim>1000</dim>
        </port>
    </output>
</layer>
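The shapes in that layer definition can be sanity-checked with NumPy (zeros used as stand-in data): a (10, 1024) input times (1024, 1000) weights yields the (10, 1000) output the second port declares.

```python
import numpy as np

# Stand-in tensors matching the layer's port shapes.
x = np.zeros((10, 1024), dtype=np.float32)
w = np.zeros((1024, 1000), dtype=np.float32)

# Output shape matches the layer's declared output port: (10, 1000).
y = np.matmul(x, w)
```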
NVIDIA
docs.nvidia.com › deeplearning › cudnn › frontend › latest › operations › Matmul.html
Matmul — NVIDIA cuDNN Frontend
std::shared_ptr<Tensor_attributes> Matmul(std::shared_ptr<Tensor_attributes> a, std::shared_ptr<Tensor_attributes> b, Matmul_attributes);
Medium
medium.com › @debopamdeycse19 › dot-vs-matmul-in-numpy-which-one-is-best-suited-for-your-needs-dbd27c56ca33
Dot vs Matmul in Numpy: Which One is Best Suited for Your Needs? | by Let's Decode | Medium
November 13, 2023 - Comparison of Matmul and Dot: Matmul and dot in NumPy serve distinct purposes. While matmul always performs matrix multiplication, dot generalizes the vector dot product and behaves differently for arrays above two dimensions. The choice between them depends on your specific problem and requirements.
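The distinction matters most above two dimensions; a sketch contrasting the two on 3-D inputs (shapes chosen arbitrarily):

```python
import numpy as np

a = np.ones((2, 3, 4))
b = np.ones((2, 4, 5))

# matmul treats leading axes as a batch: (2, 3, 4) @ (2, 4, 5) -> (2, 3, 5).
m = np.matmul(a, b)

# dot instead sums over the last axis of `a` and the second-to-last of `b`,
# producing shape (2, 3, 2, 5) -- a different (and usually unwanted) result.
d = np.dot(a, b)
```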
Medium
medium.com › @amit25173 › understanding-numpy-dot-and-numpy-matmul-7424f979ede4
Understanding numpy.dot() and numpy.matmul() | by Amit Yadav | Medium
March 6, 2025 - Consistent behavior: Whether you’re dealing with 2-D or higher-dimensional arrays, matmul() always performs matrix multiplication.
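The consistent higher-dimensional behavior the snippet mentions includes broadcasting: a 2-D operand is reused across every matrix in a batched stack. A minimal sketch:

```python
import numpy as np

batch = np.ones((3, 2, 4))   # a stack of three 2x4 matrices
w = np.ones((4, 5))          # a single 4x5 matrix

# matmul broadcasts the 2-D operand across the batch: result is (3, 2, 5).
out = np.matmul(batch, w)
```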