Add, multiply, invert, decompose, and analyze matrices instantly — with step-by-step solutions.
A matrix is one of the most fundamental structures in mathematics, engineering, computer science, and data science. At its simplest, a matrix is a rectangular array of numbers arranged in rows and columns. But behind this simple definition lies extraordinary power: matrices encode transformations, model systems of equations, power 3D graphics, form the backbone of machine learning algorithms, and describe the behavior of physical systems. Our Matrix Calculator puts the full range of matrix operations right in your browser — no software to download, no account required, and no limits on how often you use it.

Students encountering linear algebra for the first time will find this tool invaluable. Whether you are working through homework on determinants, confirming the inverse of a 3×3 matrix, or checking your row reduction steps for RREF, this calculator walks you through each operation with clearly formatted numeric grids and step-by-step row operation logs. You can compare your manual work with the computed result, identify where you went wrong, and build genuine intuition for how matrices behave under each operation.

Engineers and scientists use matrices constantly in practical work. Structural analysis, circuit simulation, finite element methods, control theory, signal processing, computer vision — all of these fields reduce real-world problems to matrix equations. When you need to solve a linear system Ax = b to find unknown forces in a structure, decompose a matrix with LU factorization for efficient repeated solving, or check whether a system is underdetermined by computing the matrix rank, this calculator handles it all. The step-by-step RREF output with Gaussian elimination stages is especially useful for verifying the process used in numerical solvers.

Data scientists and machine learning practitioners work with matrices as the primary data container. Feature matrices, weight matrices, and transformation matrices are everywhere in neural networks and dimensionality reduction. Computing eigenvalues and eigenvectors is the mathematical core of Principal Component Analysis (PCA), spectral clustering, PageRank, and many optimization algorithms. Our eigenvalue calculator provides results for 2×2 matrices in closed form and uses QR iteration for larger matrices, returning both the eigenvalues and the corresponding normalized eigenvectors.

Game developers and graphics programmers need matrix operations for every frame they render. Rotation matrices, transformation matrices, and projection matrices govern how objects move and appear on screen. The preset 2×2 rotation matrix and identity matrix quick-fills make it easy to experiment with these common patterns. The matrix power operation (A^n) is useful for computing repeated transformations efficiently without manual multiplication.

Using the calculator is straightforward. Enter the dimensions for Matrix A and Matrix B using the row and column selectors, fill in the cell values, and choose the operation from the tab buttons. The result appears instantly with a plain-English interpretation that explains what the result means in practical terms. For RREF, eigenvalues, and linear system solving, the step-by-step panel shows every row operation applied during the computation.

The heatmap overlay option shades result matrix cells by their absolute value, making it easy to spot which entries are large and which are near zero — a useful visual tool when working with large matrices or trying to understand the structure of a transformation. For eigenvalue results, a horizontal bar chart visualizes the relative magnitudes of the eigenvalues at a glance.
All computation runs entirely in your browser using TypeScript algorithms — Gaussian elimination, LU decomposition via Doolittle's method, QR iteration for eigenvalues, Gauss-Jordan elimination for RREF and inverse, and Householder reflections for QR decomposition. No data is sent to a server, and no internet connection is required after the page loads.
Understanding Matrix Operations
What Is a Matrix?
A matrix is a two-dimensional array of numbers (or, more generally, elements from a mathematical field) arranged in rows and columns. An m×n matrix has m rows and n columns, for a total of m×n elements. Matrices are denoted with capital letters (A, B, C) and their elements with lowercase subscripts: A[i][j] refers to the element in row i and column j. Square matrices have the same number of rows and columns (n×n) and have special properties including determinants, inverses, eigenvalues, and traces. Rectangular matrices appear in systems of equations, data tables, and image representations. Special matrices include the identity matrix I (1s on diagonal, 0s elsewhere), the zero matrix O (all zeros), symmetric matrices (A = A^T), and diagonal matrices (non-zero only on the diagonal). Matrices form the algebraic structure underlying most of modern applied mathematics.
How Are Matrix Operations Computed?
Different matrix operations use different algorithms. Addition and subtraction work element-by-element and require identical dimensions. Multiplication takes the dot product of each row of A with each column of B, requiring that the column count of A equal the row count of B. The determinant of a square matrix is computed via cofactor expansion (for small matrices) or LU decomposition (for larger ones) — it is a single number encoding the signed volume scaling factor of the linear transformation. The inverse A⁻¹ is found using Gauss-Jordan elimination on the augmented matrix [A | I], transforming it to [I | A⁻¹]. RREF likewise uses Gauss-Jordan elimination (forward elimination followed by backward elimination) to reduce every pivot to 1 and every other entry in each pivot column to 0. Eigenvalues λ satisfy det(A − λI) = 0; for 2×2 matrices this reduces to a simple quadratic formula, while larger matrices require numerical QR iteration.
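As a concrete illustration of cofactor expansion, here is a minimal recursive determinant in TypeScript. This is a teaching sketch, not necessarily the calculator's internal code; it expands along the first row every time:

```typescript
// Determinant by cofactor expansion along the first row.
// Exponential in n, so practical only for small matrices
// like the up-to-5x5 inputs this calculator supports.
function det(a: number[][]): number {
  const n = a.length;
  if (n === 1) return a[0][0];
  if (n === 2) return a[0][0] * a[1][1] - a[0][1] * a[1][0];
  let sum = 0;
  for (let j = 0; j < n; j++) {
    // Minor: drop row 0 and column j.
    const minor = a.slice(1).map(row => row.filter((_, c) => c !== j));
    // Alternating signs +, -, +, ... across the first row.
    sum += (j % 2 === 0 ? 1 : -1) * a[0][j] * det(minor);
  }
  return sum;
}
```

Running it on the 3×3 matrix from the worked example below, `det([[2, 1, 0], [1, 3, 1], [0, 1, 2]])` returns 8, matching the manual expansion.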
Why Do Matrix Operations Matter?
Matrices are not just abstract algebra — they are practical computational tools used throughout science and technology. The determinant tells you whether a system of equations has a unique solution (det ≠ 0) or is singular (det = 0). The inverse allows you to solve linear systems Ax = b as x = A⁻¹b, directly find transformation inverses in graphics, and compute control gains in engineering. Eigenvalues and eigenvectors reveal the fundamental directions of a transformation and are the mathematical heart of PCA, spectral methods, stability analysis, and quantum mechanics. RREF is the standard tool for solving systems of any size, determining rank, finding null spaces, and checking linear independence. LU decomposition speeds up repeated solutions of Ax = b with different right-hand sides, since factoring once allows many fast forward/back substitutions.
Limitations and Numerical Precision
This calculator uses standard 64-bit floating-point arithmetic (JavaScript's built-in number type), which can introduce small rounding errors in the last several decimal places. Results are displayed rounded to your chosen decimal precision. Very ill-conditioned matrices — those where some rows or columns are nearly linearly dependent — can produce results that appear numerically unstable. For example, a matrix with a very small but non-zero determinant may appear singular due to floating-point noise. The calculator uses a threshold of 1e-12 to detect near-zero pivots during elimination. Matrix dimensions are capped at 5×5 for client-side performance, keeping all computations fast in the browser. For larger matrices, desktop software like MATLAB, Octave, Python (NumPy), or Julia would be more appropriate. The eigenvalue algorithm (QR iteration) converges well for most real symmetric matrices but may give less accurate results for matrices with clustered eigenvalues.
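The near-zero pivot test can be illustrated with a short snippet. The 1e-12 threshold comes from the text above; the exact comparison used internally is an assumption:

```typescript
// Treat pivots smaller than EPS in absolute value as zero, so that
// floating-point rounding noise is not mistaken for a usable pivot.
const EPS = 1e-12;

function isNegligiblePivot(p: number): boolean {
  return Math.abs(p) < EPS;
}

// 0.1 + 0.2 - 0.3 is not exactly 0 in binary floating point,
// but it falls far below the threshold and is treated as zero.
const noise = 0.1 + 0.2 - 0.3;
```

A pivot like 1e-6, by contrast, passes the check and is used normally, which is why only severely ill-conditioned matrices trigger the singular-matrix path.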
Matrix Formulas
Matrix Multiplication
(AB)ᵢⱼ = Σₖ aᵢₖ · bₖⱼ
Each entry of the product matrix is the dot product of the corresponding row of A and column of B. Requires columns of A to equal rows of B.
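The formula translates directly into a triple loop. A minimal TypeScript sketch (not necessarily the calculator's exact implementation):

```typescript
// C = A x B, where A is m x p and B is p x n; the result is m x n.
function matMul(a: number[][], b: number[][]): number[][] {
  const m = a.length, p = b.length, n = b[0].length;
  if (a[0].length !== p) throw new Error("cols(A) must equal rows(B)");
  const c = Array.from({ length: m }, () => new Array(n).fill(0));
  for (let i = 0; i < m; i++)
    for (let j = 0; j < n; j++)
      for (let k = 0; k < p; k++)
        c[i][j] += a[i][k] * b[k][j]; // dot product of row i of A with column j of B
  return c;
}
```

For example, `matMul([[1, 2], [3, 4]], [[5, 6], [7, 8]])` reproduces the `[[19, 22], [43, 50]]` result worked through step by step later on this page.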
2×2 Determinant
det([[a, b], [c, d]]) = ad − bc
The determinant of a 2×2 matrix equals the product of the main diagonal minus the product of the anti-diagonal. A zero determinant means the matrix is singular.
Transpose
(A^T)ᵢⱼ = Aⱼᵢ
The transpose swaps rows and columns. An m×n matrix becomes n×m. Symmetric matrices satisfy A = A^T.
Matrix Inverse
A⁻¹ = adj(A) / det(A)
The inverse exists only when det(A) ≠ 0. For 2×2: A⁻¹ = (1/(ad−bc)) · [[d, −b], [−c, a]]. For larger matrices, use Gauss-Jordan elimination.
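The 2×2 closed form is short enough to sketch directly (a hypothetical helper, not the calculator's internal code):

```typescript
// Inverse of a 2x2 matrix [[a, b], [c, d]] via the adjugate formula:
// swap the diagonal, negate the anti-diagonal, divide by the determinant.
function inverse2x2(m: number[][]): number[][] {
  const [[a, b], [c, d]] = m;
  const det = a * d - b * c;
  if (det === 0) throw new Error("singular matrix");
  return [[d / det, -b / det], [-c / det, a / det]];
}
```

For instance, `inverse2x2([[4, 7], [2, 6]])` has determinant 10 and returns `[[0.6, -0.7], [-0.2, 0.4]]`.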
Matrix Operations Reference
Matrix Operations Summary
Quick reference for common matrix operations, their requirements, and result dimensions.
| Operation | Formula/Method | Requirement | Result |
|---|---|---|---|
| Addition A + B | Cᵢⱼ = Aᵢⱼ + Bᵢⱼ | Same dimensions (m×n) | m×n matrix |
| Multiplication A×B | (AB)ᵢⱼ = Σ aᵢₖ·bₖⱼ | A cols = B rows (m×p · p×n) | m×n matrix |
| Scalar c·A | (cA)ᵢⱼ = c · Aᵢⱼ | Any matrix | Same dimensions |
| Transpose A^T | (A^T)ᵢⱼ = Aⱼᵢ | Any matrix (m×n) | n×m matrix |
| Determinant | Cofactor expansion or LU | Square matrix (n×n) | Scalar |
| Inverse A⁻¹ | Gauss-Jordan on [A|I] | Square, det ≠ 0 | n×n matrix |
| Trace tr(A) | Σ Aᵢᵢ (sum of diagonal) | Square matrix (n×n) | Scalar |
Special Matrices
Named matrix types that have important properties in linear algebra and applications.
| Matrix Type | Definition | Key Property |
|---|---|---|
| Identity (I) | 1s on diagonal, 0s elsewhere | AI = IA = A for any compatible A |
| Zero (O) | All entries are 0 | A + O = A; A·O = O |
| Diagonal | Non-zero only on main diagonal | Easy to invert when all diagonal entries are non-zero: just reciprocate them |
| Symmetric | A = A^T (equal to its transpose) | Always has real eigenvalues |
| Orthogonal | A^T · A = I | Preserves lengths and angles (rotations/reflections) |
| Upper Triangular | All entries below diagonal are 0 | Determinant = product of diagonal entries |
Worked Examples
Multiply two 2×2 matrices
Compute A × B where A = [[1, 2], [3, 4]] and B = [[5, 6], [7, 8]].
C₁₁ = (1)(5) + (2)(7) = 5 + 14 = 19
C₁₂ = (1)(6) + (2)(8) = 6 + 16 = 22
C₂₁ = (3)(5) + (4)(7) = 15 + 28 = 43
C₂₂ = (3)(6) + (4)(8) = 18 + 32 = 50
A × B = [[19, 22], [43, 50]]
Find determinant and inverse of a 3×3 matrix
Given A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]], find det(A) and A⁻¹.
Expand along row 1: det = 2(3·2 − 1·1) − 1(1·2 − 1·0) + 0 = 2(5) − 1(2) = 8
Since det(A) = 8 ≠ 0, the inverse exists
Set up augmented matrix [A | I] and apply Gauss-Jordan elimination
After row operations: A⁻¹ = (1/8)·[[5, −2, 1], [−2, 4, −2], [1, −2, 5]]
det(A) = 8, A⁻¹ = (1/8)·[[5, −2, 1], [−2, 4, −2], [1, −2, 5]]
Find eigenvalues of a 2×2 matrix
Find the eigenvalues of A = [[4, 1], [2, 3]] by solving det(A − λI) = 0.
A − λI = [[4−λ, 1], [2, 3−λ]]
det(A − λI) = (4−λ)(3−λ) − (1)(2) = λ² − 7λ + 10
Factor: (λ − 5)(λ − 2) = 0
Eigenvalues: λ₁ = 5 and λ₂ = 2
Eigenvalues: λ₁ = 5, λ₂ = 2
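The closed-form route used above generalizes to any 2×2 matrix: the eigenvalues are the roots of λ² − tr(A)λ + det(A) = 0. A sketch covering the real-eigenvalue case (hypothetical helper, not the calculator's internal code):

```typescript
// Real eigenvalues of a 2x2 matrix via the quadratic formula applied to
// the characteristic polynomial lambda^2 - tr(A)*lambda + det(A) = 0.
// A negative discriminant would mean a complex-conjugate pair.
function eigen2x2(m: number[][]): [number, number] {
  const tr = m[0][0] + m[1][1];
  const det = m[0][0] * m[1][1] - m[0][1] * m[1][0];
  const disc = tr * tr - 4 * det;
  if (disc < 0) throw new Error("complex eigenvalue pair");
  const s = Math.sqrt(disc);
  return [(tr + s) / 2, (tr - s) / 2]; // larger root first
}
```

On the example above, tr = 7 and det = 10, so `eigen2x2([[4, 1], [2, 3]])` returns `[5, 2]`.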
How to Use the Matrix Calculator
Select an Operation Category
Click one of the four tabs at the top — Two Matrix (for A+B, A-B, A×B, c×A), Single Matrix (for transpose, determinant, inverse, power, trace), Analysis (for rank, RREF, eigenvalues, LU), or Solve Ax=b. The input panel will show only the controls you need.
Set Matrix Dimensions and Enter Values
Use the Rows and Cols dropdowns next to each matrix label to set dimensions (1×1 to 5×5). Click each cell and type a value — decimals and fractions like 1/3 or -2.5 are accepted. Use the Random button to auto-fill with test integers, or load a Quick Preset like Rotation 2×2 or Magic Square 3×3.
Choose the Specific Operation and Click Calculate
Click the operation button that appears below the matrix grids — for example, A + B, Determinant, or RREF. The result appears instantly on the right. The 'What does this mean?' panel below the result gives a plain-English explanation of the mathematical meaning of the output.
Review Steps, Export, or Chain Operations
If step-by-step row operations are available (RREF, linear system), click the Steps accordion to see each pivot and elimination move. Use 'Export CSV' to download the result matrix, 'Copy LaTeX' for academic documents, or 'Copy result → Matrix A' to feed the result into a new calculation.
Frequently Asked Questions
Why can't I multiply two matrices with mismatched dimensions?
Matrix multiplication A×B is defined only when the number of columns in A equals the number of rows in B. This is because the operation computes each result entry as a dot product of a row of A with a column of B — which requires the row and column to have the same length. If A is an m×p matrix and B is a p×n matrix, the result C is m×n. If the column count of A does not equal the row count of B, those dot products are undefined and the operation cannot proceed. This contrasts with addition, where both matrices must have identical dimensions (both m×n) so entries can be paired up element-by-element.
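The two compatibility rules reduce to one-line checks. A hypothetical guard (assuming row-major `number[][]` matrices):

```typescript
// A (m x p) times B (q x n) is defined only when p === q.
function canMultiply(a: number[][], b: number[][]): boolean {
  return a[0].length === b.length;
}

// Addition instead requires identical shape in both dimensions.
function canAdd(a: number[][], b: number[][]): boolean {
  return a.length === b.length && a[0].length === b[0].length;
}
```

Note the asymmetry: a 1×3 row times a 3×1 column multiplies fine, but the same pair cannot be added.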
What does it mean when the determinant is 0?
A zero determinant means the matrix is singular — it does not have an inverse. Geometrically, the linear transformation squashes space: a 2D transformation with det=0 collapses the plane onto a line or a point, destroying information. Algebraically, if det(A)=0, the rows of A are linearly dependent (some row is a linear combination of the others), and the system Ax=b either has no solution or infinitely many solutions — never a unique one. The rank will be less than n. This is why the calculator shows an error ('singular matrix') when you try to invert a matrix with a zero determinant.
What is the difference between rank, RREF, and determinant?
These three outputs describe different aspects of the same matrix. The rank is a single integer — the number of linearly independent rows (or columns), found by counting the non-zero rows in RREF. RREF (Reduced Row Echelon Form) is the full reduced matrix itself, showing exactly which variables are basic (determined by pivots) and which are free (can be set arbitrarily). The determinant is a single scalar defined only for square matrices; it equals zero precisely when rank < n. Rank applies to any matrix shape; RREF applies to any matrix; determinant requires a square matrix. Together they characterize the solution space of Ax=0 and Ax=b completely.
How are eigenvalues computed for matrices larger than 2×2?
For 2×2 matrices, eigenvalues are computed in closed form using the quadratic formula on the characteristic polynomial λ² − tr(A)λ + det(A) = 0. For 3×3 to 5×5 matrices, this calculator uses the QR iteration algorithm, which is the standard numerical method used in professional linear algebra software. QR iteration repeatedly factors the matrix as Q×R (orthogonal times upper triangular) and replaces it with R×Q, converging toward an upper triangular form whose diagonal entries are the eigenvalues. The process runs up to 500 iterations with a convergence tolerance of 1e-8. A matrix with real entries can still have complex eigenvalues, which occur in conjugate pairs; for 2×2 matrices the calculator reports them in the form a + bi and a − bi.
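A compact sketch of the idea, using modified Gram-Schmidt for the QR step. This is a teaching version that assumes a real matrix with full rank and real eigenvalues of distinct magnitude; the calculator's production algorithm likely adds shifts and convergence checks:

```typescript
type Mat = number[][];

const dot = (x: number[], y: number[]) =>
  x.reduce((s, xi, i) => s + xi * y[i], 0);

// One QR factorization via modified Gram-Schmidt on the columns of A.
// Returns Q as an array of orthonormal column vectors plus upper-triangular R.
function qrStep(a: Mat): { qCols: number[][]; r: Mat } {
  const n = a.length;
  const qCols: number[][] = [];
  const r: Mat = Array.from({ length: n }, () => new Array(n).fill(0));
  for (let j = 0; j < n; j++) {
    let v = a.map(row => row[j]); // column j of A
    for (let i = 0; i < j; i++) {
      r[i][j] = dot(qCols[i], v);
      v = v.map((vk, k) => vk - r[i][j] * qCols[i][k]);
    }
    r[j][j] = Math.sqrt(dot(v, v)); // assumes nonzero (full column rank)
    qCols.push(v.map(vk => vk / r[j][j]));
  }
  return { qCols, r };
}

// Unshifted QR iteration: A <- R * Q, repeated; the diagonal converges
// to the eigenvalues in decreasing order of magnitude.
function eigenvaluesQR(a0: Mat, iters = 100): number[] {
  let a = a0.map(row => row.slice());
  for (let t = 0; t < iters; t++) {
    const { qCols, r } = qrStep(a);
    // (RQ)[i][j] = sum_k R[i][k] * Q[k][j], where Q[k][j] = qCols[j][k]
    a = a.map((_, i) =>
      a.map((_, j) => r[i].reduce((s, rik, k) => s + rik * qCols[j][k], 0)));
  }
  return a.map((row, i) => row[i]); // diagonal entries
}
```

On the symmetric matrix [[2, 1], [1, 2]] (eigenvalues 3 and 1), the diagonal converges to machine precision within a few dozen iterations.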
What is LU decomposition used for?
LU decomposition factors matrix A into the product of a lower-triangular matrix L (with 1s on the diagonal) and an upper-triangular matrix U. The main practical use is efficient repeated solving of Ax=b: once A = LU, solving for any right-hand side b requires two triangular substitution steps — forward substitution through Ly=b then backward substitution through Ux=y — each of which takes only O(n²) operations. This is far faster than recomputing the full decomposition each time. The determinant of A equals the product of the diagonal entries of U (times the sign from any row swaps during partial pivoting). LU decomposition is the algorithm underlying most scientific computing libraries for linear system solving.
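The two substitution passes can be sketched as follows. This version takes a precomputed Doolittle factorization (unit diagonal on L) and omits pivoting, so it assumes all pivots are nonzero; the helper name is hypothetical:

```typescript
// Solve A x = b given A = L U (Doolittle form: L has 1s on its diagonal).
// Step 1: forward substitution solves  L y = b.
// Step 2: backward substitution solves U x = y.
// Each pass is O(n^2), versus O(n^3) for refactoring A from scratch.
function luSolve(l: number[][], u: number[][], b: number[]): number[] {
  const n = b.length;
  const y = new Array(n).fill(0);
  for (let i = 0; i < n; i++) {
    let s = b[i];
    for (let k = 0; k < i; k++) s -= l[i][k] * y[k];
    y[i] = s; // no division needed: l[i][i] === 1 in Doolittle form
  }
  const x = new Array(n).fill(0);
  for (let i = n - 1; i >= 0; i--) {
    let s = y[i];
    for (let k = i + 1; k < n; k++) s -= u[i][k] * x[k];
    x[i] = s / u[i][i];
  }
  return x;
}
```

For A = [[2, 1], [4, 5]] the factors are L = [[1, 0], [2, 1]] and U = [[2, 1], [0, 3]]; reusing them, any new right-hand side b is solved with just these two cheap passes.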
What are the preset example matrices and why are they useful?
The calculator includes four presets for quick experimentation. The Rotation 2×2 matrix [[0,-1],[1,0]] rotates vectors 90 degrees counterclockwise — useful for learning how matrix multiplication implements rotations. The Magic Square 3×3 has rows, columns, and diagonals all summing to 15, and has a determinant of -360 and rank 3. The Identity 3×3 is the neutral element of matrix multiplication: A×I = I×A = A for any compatible A. The Fibonacci 2×2 matrix [[1,1],[1,0]] raised to the power n equals [[F(n+1), F(n)], [F(n), F(n−1)]], so the nth Fibonacci number appears in the off-diagonal entries — a beautiful demonstration of matrix powers. Load any preset, then modify values to explore how the results change.
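The Fibonacci preset is easy to verify with a short matrix-power sketch (naive repeated multiplication rather than fast exponentiation; helper names are hypothetical):

```typescript
// [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]], with F(1) = F(2) = 1,
// so the n-th Fibonacci number sits in the off-diagonal entries.
function matPow(a: number[][], n: number): number[][] {
  const mul = (x: number[][], y: number[][]) =>
    x.map(row => y[0].map((_, j) =>
      row.reduce((s, v, k) => s + v * y[k][j], 0)));
  let result = [[1, 0], [0, 1]]; // 2x2 identity: A^0 = I
  for (let i = 0; i < n; i++) result = mul(result, a);
  return result;
}

const fib10 = matPow([[1, 1], [1, 0]], 10)[0][1]; // F(10) = 55
```

The calculator's A^n operation could use repeated squaring instead, which needs only O(log n) multiplications, but the identity being demonstrated is the same.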
Related Tools
Matrix Determinant Calculator
Dedicated calculator for computing matrix determinants with cofactor expansion steps.
Matrix Inverse Calculator
Find the inverse of a square matrix using Gauss-Jordan elimination with step-by-step row operations.
Matrix Multiplication Calculator
Multiply two matrices with detailed dot-product breakdowns for each result entry.
Equation Solver
Solve linear, quadratic, and systems of equations step-by-step.
Linear Equation Solver
Solve systems of linear equations with elimination and substitution methods.