Linear algebra explains how vectors, matrices, and linear transformations work. If you are searching for linear algebra basics, the core idea is simple: it studies quantities with several components and the rules for combining or transforming them in a consistent way.

The word "linear" matters because it makes behavior predictable. If a rule is linear, adding inputs adds outputs in the same pattern, and scaling an input scales the output by the same factor.

Vectors And Matrices In Plain Language

A vector is an ordered list of numbers. In practice, a vector can represent a position, a velocity, a list of measurements, or coefficients in a problem.

For example, this is a vector in 2 dimensions:

\begin{bmatrix} 2 \\ -1 \end{bmatrix}

A matrix is a rectangular array of numbers. A matrix can store coefficients, describe a system of equations, or act as a rule that transforms one vector into another.

This is a 2×2 matrix:

\begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix}

The difference is worth keeping straight: a vector is one mathematical object, while a matrix is usually used to organize or apply rules to vectors.
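Both objects map naturally onto arrays in code. A minimal NumPy sketch (NumPy is assumed here purely for illustration), using the example values above:

```python
import numpy as np

# A vector: an ordered list of numbers (here, 2 components).
v = np.array([2, -1])

# A matrix: a rectangular array of numbers (here, 2 rows and 2 columns).
A = np.array([[1, 2],
              [0, 3]])

print(v.shape)  # (2,)   -- one axis: a vector
print(A.shape)  # (2, 2) -- two axes: a matrix
```

The shape makes the distinction concrete: a vector has one axis, while a matrix has two.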

What "Linear" Means In Linear Algebra

In linear algebra, "linear" does not just mean "looks like a line." It means a rule respects addition and scalar multiplication.

If T is a linear transformation, then for vectors u, v and scalar c,

T(u + v) = T(u) + T(v)

and

T(cu) = cT(u)

Those two conditions are why matrices are so useful. Multiplying by a matrix gives a compact way to describe transformations with exactly that behavior.

One quick check follows from this definition: every linear transformation sends the zero vector to the zero vector. A rule like T(x) = x + 1 fails that test, so it is not linear in this setting.
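Both conditions, and the zero-vector check, can be verified numerically. A small NumPy sketch (the matrix is the 2×2 example from this article; u, v, and c are arbitrary test values):

```python
import numpy as np

A = np.array([[1, 2],
              [0, 3]])
u = np.array([1, 2])
v = np.array([4, -1])
c = 5

# T(x) = A @ x satisfies both linearity conditions:
assert np.array_equal(A @ (u + v), A @ u + A @ v)  # additivity
assert np.array_equal(A @ (c * u), c * (A @ u))    # homogeneity

# A linear map sends the zero vector to the zero vector...
assert np.array_equal(A @ np.zeros(2), np.zeros(2))

# ...but T(x) = x + 1 shifts zero away from zero, so it is not linear.
shift = lambda x: x + 1
print(shift(np.zeros(2)))  # [1. 1.], not the zero vector
```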

The Core Ideas You Need First

A scalar is a single number, a vector is a list of numbers, and a matrix is an array of numbers. Mixing up those roles causes a lot of beginner mistakes.

Linear Combination

A linear combination is built by scaling vectors and then adding them. For example, 2u - 3v is a linear combination of u and v.

This idea matters because many questions reduce to one test: can a target vector be built from the vectors you already have?
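Computing a linear combination is a one-liner in code. A short NumPy sketch (u and v here are illustrative values, not taken from the article):

```python
import numpy as np

u = np.array([1, 0])
v = np.array([2, 1])

# The linear combination 2u - 3v: scale each vector, then add.
w = 2 * u - 3 * v
print(w)  # [-4 -3]
```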

Matrix As A Transformation

When a matrix multiplies a vector, it mixes the vector's components using fixed coefficients. That is why a matrix is often described as a transformation.

Linear Systems

A system such as

\begin{aligned} x + 2y &= 5 \\ 3x - y &= 4 \end{aligned}

can be written in matrix form. Linear algebra gives you tools to solve that system and to tell whether it has one solution, no solution, or infinitely many solutions.
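In matrix form, the system above is A x = b, and a numerical solver can answer it directly. A minimal NumPy sketch:

```python
import numpy as np

# x + 2y = 5
# 3x - y = 4
A = np.array([[1, 2],
              [3, -1]])
b = np.array([5, 4])

solution = np.linalg.solve(A, b)  # the unique (x, y) satisfying both equations
print(solution)  # approximately [1.857 1.571], i.e. x = 13/7, y = 11/7
```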

Worked Example: Matrix Times Vector

Take the matrix

A = \begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix}

and the vector

v = \begin{bmatrix} 4 \\ 1 \end{bmatrix}.

To compute Av, multiply each row of the matrix by the vector:

Av = \begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} 4 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \cdot 4 + 2 \cdot 1 \\ 0 \cdot 4 + 3 \cdot 1 \end{bmatrix} = \begin{bmatrix} 6 \\ 3 \end{bmatrix}.

The output is a new vector whose entries are linear combinations of the input entries. Here, the first output entry is 1 · 4 + 2 · 1 = 6, and the second is 0 · 4 + 3 · 1 = 3.

So the matrix takes the input vector and maps it to

\begin{bmatrix} 6 \\ 3 \end{bmatrix}.

That is the basic pattern behind matrix-vector multiplication: each output entry is built from one row of the matrix.
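The same computation is one line in NumPy; the row-by-row version next to it makes the pattern explicit (NumPy is assumed for illustration):

```python
import numpy as np

A = np.array([[1, 2],
              [0, 3]])
v = np.array([4, 1])

# The @ operator does the row-by-column combinations for us.
print(A @ v)  # [6 3]

# The same result, built one row at a time to show the pattern:
by_hand = np.array([A[0] @ v,   # 1*4 + 2*1 = 6
                    A[1] @ v])  # 0*4 + 3*1 = 3
print(by_hand)  # [6 3]
```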

Common Linear Algebra Mistakes

Treating Matrix Multiplication Like Entry-By-Entry Multiplication

Matrix multiplication is not entry-by-entry multiplication of matching positions. It uses row-by-column combinations, so the structure matters.

Ignoring Dimensions

You can only multiply a matrix and a vector when the number of matrix columns matches the number of vector entries. If the dimensions do not match, the product is undefined.
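NumPy enforces this rule at runtime, which makes it a useful sanity check. A quick sketch of a mismatched product:

```python
import numpy as np

A = np.array([[1, 2],
              [0, 3]])    # 2 columns
v3 = np.array([1, 2, 3])  # 3 entries: does not match the 2 columns

try:
    A @ v3
except ValueError as err:
    print("undefined product:", err)
```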

Assuming Every System Has Exactly One Solution

That is only true under certain conditions. Some linear systems have no solution, and some have infinitely many solutions.
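Both failure modes show up numerically. A short NumPy sketch of a system with no solution, two parallel lines, using matrix rank to diagnose it (the example system is illustrative):

```python
import numpy as np

# x + y = 1 and x + y = 2 describe parallel lines with no common point.
A = np.array([[1, 1],
              [1, 1]])
b = np.array([1, 2])

# The coefficient matrix is singular, so there is no unique solution.
try:
    np.linalg.solve(A, b)
except np.linalg.LinAlgError:
    print("no unique solution")

# Comparing ranks separates the cases: rank(A) < rank of the augmented
# matrix [A | b] means the system is inconsistent (no solution at all).
rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A, rank_Ab)  # 1 2
```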

Using "Linear" Too Loosely

A rule is not linear just because it looks simple. Terms like x^2, products like xy, or a constant shift such as x + 1 can break linearity.

Where Linear Algebra Basics Are Used

Linear algebra appears whenever a problem involves many related quantities and rules that act on them systematically.

It is used in computer graphics for rotations and projections, in engineering for systems of equations, in physics for state models, and in data science for matrix-based methods.

You do not need advanced theory to benefit from the basics. If vectors, matrices, and matrix-vector multiplication make sense, later topics become much easier to learn.

Try A Similar Problem

Try multiplying

\begin{bmatrix} 2 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 3 \\ 2 \end{bmatrix}.

Then ask yourself what each output entry represents. If this example clicked, try your own version with a different 2×2 matrix and see how the output changes.

Need help with a problem?

Upload your question and get a verified, step-by-step solution in seconds.

Open GPAI Solver →