Subspaces in Linear Algebra

A subspace is a subset of a vector space that still behaves like a vector space under the same operations of addition and scalar multiplication (you can find the link to the Vector Space post at the bottom of this article). Subspaces help us break down large vector spaces into smaller, manageable parts while preserving their algebraic properties.
What Is a Subspace?
Formally, a subspace of a vector space \(V\) over a field \(F\) is a subset \(W \subseteq V\) such that:
- \(W\) itself is a vector space over \(F\).
- The operations of vector addition and scalar multiplication are the same as those already defined in \(V\).
Some trivial but important examples are:
- \(V\) itself (every vector space is a subspace of itself).
- The set \(\{0\}\), called the zero subspace, which contains only the zero vector.
The Shortcut: Checking Only Three Properties
At first, you might think we need to verify all eight vector space axioms (like associativity, distributivity, etc.) to prove that a subset is a subspace. Fortunately, this isn’t necessary.
Why? Because many axioms (like commutativity, distributivity, and scalar associativity) automatically hold in any subset of \(V\). That means, to check if \(W\) is a subspace, we only need to confirm three key properties:
- Zero vector is included:
$$
0 \in W
$$
- Closed under addition:
$$
x, y \in W \quad \Rightarrow \quad x+y \in W
$$
- Closed under scalar multiplication:
$$
c \in F,\ x \in W \quad \Rightarrow \quad cx \in W
$$
If these three conditions hold, then \(W\) is guaranteed to be a subspace of \(V\).
Connection to Vector Space Axioms
The full set of vector space axioms ensures that a structure behaves like a vector space. But for subspaces, most of these axioms (commutativity, associativity, distributivity, etc.) are “inherited” directly from \(V\).
That’s why checking only zero vector, closure under addition, and closure under scalar multiplication is enough (if you are interested, see the proof in the appendix).
Example:
Consider \(V = \mathbb{R}^3\), the 3-dimensional real vector space.
- Let $$
W = \{ (x,y,0) \mid x,y \in \mathbb{R} \}
$$
Then this subset \(W\) of \(V\) is a subspace, because:
- \(0 = (0,0,0) \in W\).
- Adding two vectors stays in \(W\): \((x_1,y_1,0) + (x_2,y_2,0) = (x_1+x_2, y_1+y_2, 0) \in W\).
- Multiplying by a scalar stays in \(W\): \(c(x,y,0) = (cx,cy,0) \in W\).
All three conditions hold, so \(W\) is a subspace of \(\mathbb{R}^3\).
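The same check can be sketched numerically. Below is a minimal NumPy spot-check of the three conditions for this \(W\); the helper `in_W` is a hypothetical name introduced here for illustration, and random sampling can only catch violations, not prove closure in general:

```python
import numpy as np

rng = np.random.default_rng(0)

def in_W(v):
    """Membership test for W = {(x, y, 0) : x, y real}."""
    return v[2] == 0

# 1) The zero vector lies in W.
assert in_W(np.zeros(3))

# 2) Closure under addition and 3) closure under scalar
#    multiplication, checked on random samples drawn from W.
for _ in range(100):
    u = np.array([rng.normal(), rng.normal(), 0.0])
    v = np.array([rng.normal(), rng.normal(), 0.0])
    c = rng.normal()
    assert in_W(u + v)   # third coordinate stays 0 under addition
    assert in_W(c * u)   # scaling preserves the zero coordinate

print("All three subspace conditions held on the samples.")
```

For this particular \(W\) the checks can never fail, because the third coordinate is exactly \(0 + 0 = 0\) and \(c \cdot 0 = 0\) in every case.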
Why Subspaces Matter
Subspaces are foundational in linear algebra because:
- They provide natural settings for solutions of linear systems (solution sets form subspaces).
- They help define important concepts like span, basis, and dimension.
- They are essential in applications like computer graphics, data science, and AI, where vectors often live in high-dimensional spaces, and subspaces allow dimensionality reduction.
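The first point above can be illustrated concretely: the solution set of a homogeneous system \(Ax = 0\) passes all three subspace conditions. A small NumPy sketch (the specific matrix and solutions are made up for this example):

```python
import numpy as np

# For A = [1 2 3], two independent solutions of Ax = 0 are easy
# to write down by hand.
A = np.array([[1.0, 2.0, 3.0]])
u = np.array([-2.0, 1.0, 0.0])   # 1*(-2) + 2*1 + 3*0 = 0
v = np.array([-3.0, 0.0, 1.0])   # 1*(-3) + 2*0 + 3*1 = 0
assert np.allclose(A @ u, 0) and np.allclose(A @ v, 0)

# Closure: any linear combination c1*u + c2*v also solves Ax = 0.
w = 4.0 * u - 7.0 * v
print(np.allclose(A @ w, 0))              # True

# The zero vector trivially solves Ax = 0 as well.
print(np.allclose(A @ np.zeros(3), 0))    # True
```

This works because \(A(c_1 u + c_2 v) = c_1 Au + c_2 Av = 0\), which is exactly closure under addition and scalar multiplication.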
Key Takeaway
To determine if a subset is a subspace, remember the Subspace Test:
- Contains the zero vector.
- Closed under addition.
- Closed under scalar multiplication.
That’s all you need!
Appendix
Earlier, we saw that to check whether a subset \(W\) of a vector space \(V\) is a subspace, it’s enough to verify just three conditions:
- \(0 \in W\) (the zero vector is in \(W\)),
- \(x+y \in W\) whenever \(x,y \in W\) (closed under addition),
- \(cx \in W\) whenever \(c \in F\) and \(x \in W\) (closed under scalar multiplication).
In order to prove this, we must first look at some theorems about vector spaces.
Theorem 1.1
(Numbering follows Friedberg’s Linear Algebra 4th.)
If \(x, y, z\) are vectors in a vector space \(V\) such that
$$
x + z = y + z
$$
then
$$
x = y
$$
Let’s prove this first. You might be thinking, “Wait… a proof? Do we really need that? It feels so tedious!” But don’t worry—it’s not as bad as it looks. Remember, vector spaces are built on just 8 basic rules (the axioms), and those rules don’t automatically guarantee that something like \(x = y\) follows. That’s why we take a moment to prove it, using only the definitions of a vector space.
From the vector space definition, we know that every vector has an additive inverse. So, there exists a vector \(v \in V\) such that
$$
z + v = 0
$$
Now we compute:
$$
\begin{aligned}
x &= x + 0 && \text{(by zero vector)} \\
  &= x + (z + v) && \text{(since } z+v = 0\text{)} \\
  &= (x+z) + v && \text{(by associativity)} \\
  &= (y+z) + v && \text{(since } x+z = y+z\text{)} \\
  &= y + (z+v) && \text{(by associativity again)} \\
  &= y + 0 && \text{(since } z+v = 0\text{)} \\
  &= y
\end{aligned}
$$
Thus,
$$
x = y
$$
Corollary 1 (Uniqueness of the zero vector)
Assume \(0\) and \(0'\) are both additive identities in \(V\); i.e., for every \(x\), \(x+0=x\) and \(x+0'=x\), so \(x+0=x+0'\). By the Cancellation Law,
$$
0 = 0'
$$
Corollary 2 (Uniqueness of additive inverses)
Let \(x\in V\). Suppose \(y\) and \(y'\) are both additive inverses of \(x\); i.e., \(x+y=0\) and \(x+y'=0\). So \(x+y=x+y'\). By the Cancellation Law,
$$
y = y'
$$
Theorem 1.2
When working with vector spaces, certain properties that seem “obvious” still need to be proven from the axioms. Theorem 1.2 lists three of them:
- \(0x = 0\) for every vector \(x \in V\).
- \((-a)x = -(ax) = a(-x)\) for any scalar \(a \in F\) and vector \(x \in V\).
- \(a0 = 0\) for every scalar \(a \in F\).
Let’s go through the proofs together.
(a) Why does \(0x = 0\)?
Think about multiplying any vector \(x\) by the scalar \(0\).
By the vector space axioms, we know:
$$
0x + 0x = (0+0)x = 0x = 0x + 0 = 0 + 0x
$$
So, cancelling \(0x\) on the right by Theorem 1.1, we conclude
$$
0x = 0.
$$
(b) Why does \((-a)x = -(ax) = a(-x)\)?
Let’s start with the first part. The vector \(-(ax)\) is defined as the additive inverse of \(ax\). In other words:
$$
ax + (-(ax)) = 0.
$$
But from the axioms we can also write:
$$
ax + (-a)x = [a + (-a)]x = 0x = 0.
$$
Since the additive inverse is unique, this means:
$$
ax + (-(ax)) = 0 = ax + (-a)x
$$
and therefore
$$
(-a)x = -(ax).
$$
Now for the second part:
$$
a(-x) = a[-(1x)] = a[(-1)x] = [a(-1)]x = (-a)x.
$$
So everything matches up nicely!
(c) Why does \(a0 = 0\)?
This one looks just like part (a). If we take:
$$
a0 + a0 = a(0+0) = a0 = a0 + 0,
$$
then, just like before, we see that \(a0\) must be the zero vector:
$$
a0 = 0.
$$
And that’s it! These little results may feel obvious, but proving them directly from the axioms gives us a stronger foundation. They’ll keep showing up later whenever we manipulate vectors and scalars.
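As a quick sanity check (an illustration in \(V = \mathbb{R}^3\), not a substitute for the proofs above, which hold in every vector space), all three identities of Theorem 1.2 can be verified numerically:

```python
import numpy as np

x = np.array([1.0, -2.0, 4.0])   # an arbitrary vector in R^3
a = 3.5                          # an arbitrary scalar

print(np.array_equal(0 * x, np.zeros(3)))            # (a) 0x = 0
print(np.array_equal((-a) * x, -(a * x)))            # (b) (-a)x = -(ax)
print(np.array_equal((-a) * x, a * (-x)))            # (b) (-a)x = a(-x)
print(np.array_equal(a * np.zeros(3), np.zeros(3)))  # (c) a0 = 0
```

All four lines print `True`; sign flips and multiplication by zero are exact in floating-point arithmetic, so no tolerance is needed here.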
All set. Now let’s prove: When is a subset a subspace?
We learned that a vector space is defined by eight axioms (such as commutativity of addition and distributivity of scalar multiplication), together with closure under addition and scalar multiplication. Now, when we ask whether a subset \(W \subseteq V\) is a subspace, do we need to re-check everything? Fortunately, the answer is no.
Here’s why:
- Out of the eight axioms, six always hold automatically for any subset of a vector space (commutativity and associativity of addition, the two distributive laws, \(1x = x\), and \((ab)x = a(bx)\)). For example, the commutative law \(x + y = y + x\) holds in \(V\), so it must hold in \(W\) as well.
- That leaves only four properties to check for \(W\) — the two closure properties plus the two remaining axioms:
- \(W\) is closed under addition. \(x + y \in W\) whenever \(x, y \in W\)
- \(W\) is closed under scalar multiplication. \(c x \in W\) whenever \(c \in F\) and \(x \in W\).
- \(W\) contains the zero vector.
- Every vector in \(W\) has an additive inverse in \(W\).
At this point, you might wonder: Why not just keep these four conditions as the definition? Well, mathematicians don’t like redundancy. It turns out that condition (4) is automatically satisfied if the first three are true. That insight leads us to Theorem 1.3.
Theorem 1.3
Let \(V\) be a vector space and \(W \subseteq V\). Then \(W\) is a subspace of \(V\) if and only if the following three conditions hold:
- \(0 \in W\).
- \(x + y \in W\) whenever \(x, y \in W\).
- \(c x \in W\) whenever \(c \in F\) and \(x \in W\).
Proof
(⇒) Suppose \(W\) is already a subspace. Then it must be a vector space under the same operations as \(V\).
- Closure under addition and scalar multiplication ensures conditions (2) and (3).
- For the zero vector, let \(0′ \in W\) be the additive identity in \(W\). For any \(x \in W\), we must have \(x + 0′ = x\). But in \(V\), the only additive identity is \(0\). Thus, \(0′ = 0\), which means \(0 \in W\). So all three conditions hold.
(⇐) Now suppose conditions (1), (2), and (3) hold. Then:
- Closure under addition and scalar multiplication shows that the algebraic structure of \(W\) is preserved.
- Condition (1) ensures that \(W\) contains the zero vector.
- Finally, what about additive inverses? For any \(x \in W\), condition (3) gives \((-1)x \in W\). But \((-1)x = -x\), which is exactly the additive inverse of \(x\).
Therefore, \(W\) satisfies all requirements to be a subspace of \(V\).
Why This Matters
Theorem 1.3 is powerful because it simplifies the work: instead of checking eight axioms, or even four, we only need to verify three simple conditions. This is why, in practice, when you want to show a subset is a subspace, you just check:
- Does it contain the zero vector?
- Is it closed under addition?
- Is it closed under scalar multiplication?
If yes, then congratulations — it’s a subspace!
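The test is just as useful for ruling subsets out. A quick sketch (the shifted plane and the helper name `in_shifted_plane` are made up for this example): the plane \(z = 1\) in \(\mathbb{R}^3\) fails the very first condition, so it is not a subspace.

```python
import numpy as np

def in_shifted_plane(v):
    """Membership test for the plane {(x, y, 1)} - NOT a subspace."""
    return v[2] == 1

# Condition 1 fails: the zero vector is not in the plane.
print(in_shifted_plane(np.zeros(3)))   # False

# Condition 2 fails too: adding two members leaves the plane.
u = np.array([1.0, 0.0, 1.0])
v = np.array([0.0, 2.0, 1.0])
print(in_shifted_plane(u + v))         # False: (1, 2, 2) has z = 2
```

A single failed condition is enough; there is no need to check the rest.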
References & Further Reading
The theorem numbering in this post follows *Linear Algebra* (4th Edition) by Friedberg, Insel, and Spence. Some explanations and details here differ from the book. If you want a deeper and more rigorous treatment of linear algebra, this book is an excellent reference.