Linear Dependence and Linear Independence

In our previous post, we explored Linear Combinations—the art of mixing vectors together to reach new points in space. It showed us what is possible to build. But in mathematics, as in business, just knowing what is possible isn’t enough; we need to know what is efficient. Imagine trying to navigate a city where your GPS gives you three instructions: “Go North,” “Go South,” and “Go North again.” You get to your destination, but you’ve wasted time and fuel. In Linear Algebra, this redundancy is called Linear Dependence.

Today, we are learning the antidote: Linear Independence.

Why does this matter? Because when we combine the building power of Linear Combinations with the strict efficiency of Linear Independence, we unlock the holy grail of vector spaces: the Basis. A Basis is the “genetic code” of a space—the minimum distinct set of vectors required to describe everything else without an ounce of waste. Let’s dive in.

1. Definition: Linear Dependence

A subset \(S\) of a vector space \(V\) is linearly dependent if there exist finitely many distinct vectors \(u_1, u_2, \dots, u_n\) in \(S\) and scalars \(a_1, a_2, \dots, a_n\), not all zero, whose linear combination equals the zero vector.

Mathematically, this is expressed as:

$$
a_1u_1 + a_2u_2 + \dots + a_n u_n = 0
$$

(Where at least one \(a_i \neq 0\)).

Trivial Representation

The equation \(a_1u_1 + \dots + a_n u_n = 0\) is always true if every single scalar is zero (\(a_1 = a_2 = \dots = a_n = 0\)). This specific case, where 0 is represented as a linear combination of vectors by setting all coefficients to zero, is called the trivial representation of 0.

Therefore, for a set to be linearly dependent, you must be able to find a nontrivial representation of 0 (a way to get zero without zeroing out all the scalars).
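To make the distinction between trivial and nontrivial representations concrete, here is a minimal numerical sketch in Python using NumPy (a tool assumed purely for illustration, not part of the definition). It builds a deliberately redundant set of vectors and uses a rank test: if the rank of the matrix whose columns are the vectors is smaller than the number of vectors, a nontrivial representation of 0 exists.

```python
import numpy as np

# Three vectors in R^3; u3 is deliberately built from u1 and u2,
# so the set {u1, u2, u3} is linearly dependent.
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
u3 = 2 * u1 + 5 * u2

# Stack the vectors as the columns of a matrix.
M = np.column_stack([u1, u2, u3])

# If the rank is smaller than the number of vectors, the equation
# a1*u1 + a2*u2 + a3*u3 = 0 has a nontrivial solution (dependence).
print(np.linalg.matrix_rank(M) < M.shape[1])  # True -> linearly dependent
```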

2. Example: Linear Dependence in a Matrix Vector Space

When we talk about a “vector space,” the “vectors” do not have to be arrows or columns of numbers; they can be entire matrices. Let’s look at the vector space \(V = M_{2\times2}\) (the set of all \(2 \times 2\) real matrices).

The Set of Vectors (Matrices):

Let \(S\) be a subset of \(V\) containing the following three matrices:

$$
A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad C = \begin{bmatrix} 2 & 5 \\ 0 & 0 \end{bmatrix}
$$

Demonstrating Linear Dependence:

To prove this set is linearly dependent, we need to find scalars \(a_1, a_2, a_3\) (not all zero) such that:

$$a_1 A + a_2 B + a_3 C = \mathbf{0}$$

(Note: \(\mathbf{0}\) here is the \(2 \times 2\) zero matrix)

By observation, we can see that matrix \(C\) is simply two copies of matrix \(A\) plus five copies of matrix \(B\), i.e. \(C = 2A + 5B\):

$$
2 \cdot \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + 5 \cdot \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 2 & 5 \\ 0 & 0 \end{bmatrix}
$$

We can rearrange this into the standard form equal to the zero vector:

$$
2A + 5B - 1C = \mathbf{0}
$$

Conclusion:

Here, our scalars are \(a_1 = 2\), \(a_2 = 5\), and \(a_3 = -1\).

Since these scalars are not all zero (in fact, none of them are), we have found a nontrivial representation of the zero matrix. Therefore, the set \(\{A, B, C\}\) is linearly dependent.
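As a quick sanity check (not part of the formal argument), the same computation can be verified numerically. The sketch below, again in Python with NumPy as an assumed tool, confirms that \(2A + 5B - 1C\) really is the zero matrix, and that flattening each matrix into a vector with four entries gives a rank smaller than three.

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[2.0, 5.0], [0.0, 0.0]])

# The nontrivial representation found above: 2A + 5B - 1C = 0 (zero matrix).
print(np.allclose(2 * A + 5 * B - 1 * C, np.zeros((2, 2))))  # True

# Equivalently, flatten each matrix into a vector with four entries and
# compare the rank with the number of matrices in the set.
M = np.column_stack([A.flatten(), B.flatten(), C.flatten()])
print(np.linalg.matrix_rank(M))       # 2 (only two independent directions)
print(np.linalg.matrix_rank(M) < 3)   # True -> the set {A, B, C} is dependent
```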

Now that we have seen dependence in action, here is the formal definition of linear independence, followed by a practical example in a polynomial vector space.

3. Definition: Linear Independence

A subset \(S\) of a vector space is defined as linearly independent simply if it is not linearly dependent.

4. Properties

1. The Empty Set is Linearly Independent

  • The Logic: This is a classic example of “vacuous truth” in mathematics. To be linearly dependent, a set must contain vectors that can be combined, with scalars that are not all zero, to equal the zero vector. Since the empty set has no vectors at all, it is impossible to satisfy the condition for dependence. Therefore, by default, it is independent.
  • Why it matters: It serves as the logical “base case” for induction proofs in linear algebra.

2. A Single Nonzero Vector is Independent

  • The Math: If you have a set \(\{u\}\) where \(u \neq 0\), the only way to satisfy the equation \(au = 0\) is if the scalar \(a\) is \(0\). In other words, a single nonzero vector is independent. (See the appendix for the proof.)

3. The “Trivial Representation” of Zero

  • The Math: A set \(\{v_1, \dots, v_n\}\) is linearly independent if and only if the equation: $$
    c_1v_1 + c_2v_2 + \dots + c_nv_n = 0
    $$ has only the solution where every scalar \(c_i = 0\) (the trivial solution). A short computational check of this criterion is sketched just after this list.
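Here is that check: a minimal sketch in Python using SymPy (an assumed tool, not something the post relies on). The null space of the matrix whose columns are our vectors collects every solution \((c_1, \dots, c_n)\) of the equation above; an empty null space means only the trivial solution exists.

```python
import sympy as sp

# Columns are the standard basis vectors of R^3: a linearly independent set.
M = sp.Matrix([[1, 0, 0],
               [0, 1, 0],
               [0, 0, 1]])

# nullspace() returns a basis for all solutions of c1*v1 + c2*v2 + c3*v3 = 0.
# An empty list means only the trivial solution exists -> independent.
print(M.nullspace())   # []

# A set that repeats a direction has nontrivial solutions -> dependent.
D = sp.Matrix([[1, 2],
               [2, 4]])
print(D.nullspace())   # [Matrix([[-2], [1]])], i.e. -2*v1 + 1*v2 = 0
```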

5. Example: Linear Independence in a Polynomial Vector Space

Let’s use the vector space \(P_2\), which consists of all polynomials with a degree of 2 or less (e.g., \(ax^2 + bx + c\)).

The Set of Vectors (Polynomials):

Let \(S\) be a subset of \(P_2\) containing the following two polynomials:

$$
p_1(x) = 1 + x
$$

$$
p_2(x) = 1 - x
$$

Demonstrating Linear Independence:

To prove these are linearly independent, we set up the linear combination equal to the zero polynomial (\(\mathbf{0}\)) and check if the scalars must be zero.

$$
a_1 p_1(x) + a_2 p_2(x) = \mathbf{0}
$$

Substitute the polynomials:

$$
a_1(1 + x) + a_2(1 - x) = 0
$$

Group the terms by powers of \(x\):

$$
(a_1 + a_2) \cdot 1 + (a_1 - a_2) \cdot x = 0
$$

For a polynomial to be identically zero (equal to zero for all \(x\)), every coefficient must be zero. This gives us a system of linear equations:

  1. \(a_1 + a_2 = 0\) (Constant term)
  2. \(a_1 - a_2 = 0\) (Coefficient of \(x\))

Adding the two equations together:

$$
(a_1 + a_2) + (a_1 - a_2) = 0 \Longrightarrow 2a_1 = 0 \Longrightarrow a_1 = 0
$$

Substitute \(a_1 = 0\) back into the first equation:

$$
0 + a_2 = 0 \Longrightarrow a_2 = 0
$$

Conclusion:

Since the only solution is \(a_1 = 0\) and \(a_2 = 0\) (the trivial representation), the set \(\{1+x, 1-x\}\) is linearly independent.
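If you prefer to verify this computationally, the polynomials can be represented by their coefficient vectors. The sketch below (Python with NumPy, assumed purely for illustration) encodes \(1 + x\) and \(1 - x\) as coefficient vectors in \(P_2\) and applies the same rank test as before: full rank means only the trivial solution.

```python
import numpy as np

# Represent each polynomial in P_2 by its coefficients
# (constant term, coefficient of x, coefficient of x^2).
p1 = np.array([1.0, 1.0, 0.0])    # 1 + x
p2 = np.array([1.0, -1.0, 0.0])   # 1 - x

M = np.column_stack([p1, p2])

# Rank 2 with two vectors: the only solution of a1*p1 + a2*p2 = 0
# is a1 = a2 = 0, so {1 + x, 1 - x} is linearly independent.
print(np.linalg.matrix_rank(M) == 2)  # True
```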

Appendix

We want to prove that a set \(\{u\}\) containing a single nonzero vector is Linearly Independent.

To prove this, we assume the opposite is true and show that it leads to an impossible result.

1. The Assumption (Opposite)

Assume that the set \(\{u\}\) is Linearly Dependent.

By definition, this means there exists a scalar \(a\) that is not zero (\(a \neq 0\)) such that:

$$
a \cdot u = \mathbf{0}
$$

2. The Algebraic Manipulation

Since we assumed \(a \neq 0\), the scalar has an inverse \(a^{-1}\), which lets us unwind \(u\) step by step:

$$
u = 1 \cdot u
$$

(Replace 1 with \( a^{-1}a\))

$$
u = (a^{-1}a)u
$$

(Use the associative property to group \(a\) and \(u\))

$$
u = a^{-1}(au)
$$

(Substitute \(au = \mathbf{0} \) from our assumption in step 1)

$$
u = a^{-1}(\mathbf{0})
$$

(Any scalar times the zero vector is zero)

$$
u = \mathbf{0}
$$

3. The Contradiction

The math above concludes that \(u = \mathbf{0}\).

But the property we are proving starts from the assumption that \(u\) is a nonzero vector (\(u \neq \mathbf{0}\)).

Because we reached a logical contradiction (saying \(u\) is zero when we know it isn’t), our initial assumption must be false. Therefore, the set cannot be dependent—it must be Linearly Independent.
