Understanding the Basis

In linear algebra, we often want to find the simplest set of building blocks that can construct every single element in a space without any redundancy. This set is called a Basis. We are going to define a Basis using two ingredients: Linear Independence and Generating (Spanning).

1. The Formal Definition

According to the text, a subset \(\beta\) of a vector space \(V\) is a basis if it satisfies two essential conditions:

  1. Linear Independence: No vector in the set can be written as a combination of the others. There is no “wasted” information.
  2. Generates (Spans) \(V\): Every vector in the space \(V\) can be built using a linear combination of the vectors in \(\beta\).

Think of a basis as a coordinate system. It tells you exactly how to “reach” any point in the space.

Example A: The Standard Basis for \(\mathbb{F}^n\)

In a standard coordinate space like \(\mathbb{R}^3\) or \(\mathbb{F}^n\), we use vectors that have a \(1\) in one position and \(0\) elsewhere.

  • \(e_1 = (1, 0, \dots, 0)\)
  • \(e_2 = (0, 1, \dots, 0)\)
  • … and so on. This is the most “natural” way to describe space, which is why we call it the standard basis.
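As a quick sanity check, here is a minimal NumPy sketch (the vector \(v\) below is an arbitrary choice): the rows of the identity matrix are exactly \(e_1, \dots, e_n\), and scaling-and-adding them reconstructs any point.

```python
import numpy as np

n = 3
# Standard basis for R^3: e_i has a 1 in position i and 0 elsewhere.
# The rows of the n x n identity matrix are exactly e_1, ..., e_n.
basis = np.eye(n)

v = np.array([4.0, -1.0, 7.0])  # an arbitrary vector to reconstruct

# Each coordinate of v is the scalar multiplying the matching e_i.
reconstruction = sum(v[i] * basis[i] for i in range(n))
assert np.allclose(reconstruction, v)
```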

Example B: The Space of Matrices (\(M_{m \times n}\))

If our “vectors” are actually matrices, the basis consists of the matrices \(E^{ij}\), each containing a single \(1\) at the intersection of the \(i\)-th row and \(j\)-th column, with zeros everywhere else. By scaling and adding these “unit matrices,” you can create any matrix imaginable.

Let’s make this clear with a simple \(2 \times 2\) example.

  • Example for \(M_{2 \times 2}\): Consider the space of all \(2 \times 2\) matrices. We need four “unit” matrices to serve as our basis building blocks:
    • \(E^{11} = \begin{pmatrix} \mathbf{1} & 0 \\ 0 & 0 \end{pmatrix}\) (1 in row 1, col 1)
    • \(E^{12} = \begin{pmatrix} 0 & \mathbf{1} \\ 0 & 0 \end{pmatrix}\) (1 in row 1, col 2)
    • \(E^{21} = \begin{pmatrix} 0 & 0 \\ \mathbf{1} & 0 \end{pmatrix}\) (1 in row 2, col 1)
    • \(E^{22} = \begin{pmatrix} 0 & 0 \\ 0 & \mathbf{1} \end{pmatrix}\) (1 in row 2, col 2)
  • Why is this a basis? Because you can build any \(2 \times 2\) matrix using a linear combination of these four.
    • For instance: \(\begin{pmatrix} 5 & -3 \\ 2 & 8 \end{pmatrix} = 5 \cdot E^{11} + (-3) \cdot E^{12} + 2 \cdot E^{21} + 8 \cdot E^{22}\)
    • Just like the standard basis vectors for \(\mathbb{R}^n\), these \(E^{ij}\) matrices isolate each component, making them the perfect, independent building blocks for matrix space.
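That decomposition is easy to verify in code. A minimal NumPy sketch using the matrix from the example above:

```python
import numpy as np

# The four unit matrices E^{ij} for M_{2x2}: a 1 in row i, column j, zeros elsewhere.
E = {}
for i in range(2):
    for j in range(2):
        M = np.zeros((2, 2))
        M[i, j] = 1.0
        E[(i, j)] = M

A = np.array([[5.0, -3.0],
              [2.0,  8.0]])

# Rebuild A as a linear combination: each coefficient is just the entry A[i, j].
rebuilt = sum(A[i, j] * E[(i, j)] for i in range(2) for j in range(2))
assert np.allclose(rebuilt, A)
```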

Example C: Polynomial Spaces (\(P_n(F)\) and \(P(F)\))

When dealing with polynomials, we look at the powers of \(x\).

  • For polynomials of degree at most \(n\), the basis is \(\{1, x, x^2, \dots, x^n\}\).
  • For the space of all polynomials \(P(F)\), the basis is the infinite set \(\{1, x, x^2, \dots\}\).
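In code, this basis is what the familiar coefficient list encodes. A small sketch (the polynomial below is an arbitrary example), assuming we store a polynomial by its coordinates against \(\{1, x, x^2, \dots\}\):

```python
import numpy as np

# p(x) = 5 - 3x + 2x^2, stored as coordinates against the basis {1, x, x^2}.
coeffs = np.array([5.0, -3.0, 2.0])  # coefficients of x^0, x^1, x^2

def evaluate(coeffs, x):
    """Evaluate p(x) as a linear combination of the basis monomials x^k."""
    return sum(c * x**k for k, c in enumerate(coeffs))

# Cross-check against NumPy's polynomial evaluation (same low-to-high order).
x = 2.0
assert np.isclose(evaluate(coeffs, x), np.polynomial.polynomial.polyval(x, coeffs))
```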

Example D: The Zero Vector Space

We’ve looked at spaces with many vectors, but what about the simplest space of all? The zero vector space, \(\{0\}\), contains only the zero vector. What are its building blocks? This is a fascinating case of mathematical minimalism. The basis for the zero vector space is the empty set, \(\emptyset\). Why? Because the span of the empty set is defined as \(\{0\}\), and the empty set is considered linearly independent by convention. It’s the ultimate example of “doing more with less”—or in this case, doing something with nothing!

2. The Uniqueness

Once we have formed a basis, we gain a very useful property: uniqueness. It tells us that for any vector in the space, there is exactly one recipe (one set of scalars) that builds it from our basis. This effectively gives every vector a unique “ID card” or “address” in the space. We can say this more formally:

Let \(\beta = \{u_1, u_2, \dots, u_n\}\) be a subset of a vector space \(V\). Then \(\beta\) is a basis for \(V\) if and only if every vector \(v \in V\) can be uniquely expressed as a linear combination of the vectors in \(\beta\).

This implies that:

$$v = a_1u_1 + a_2u_2 + \dots + a_nu_n$$

for unique scalars \(a_1, \dots, a_n\).

(See appendix for proof)
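To make the “unique recipe” concrete in \(\mathbb{R}^n\): finding the scalars \(a_1, \dots, a_n\) amounts to solving one linear system, and the invertibility of the matrix whose columns are the basis vectors is what delivers exactly one solution. A minimal sketch (the basis and vector below are arbitrary choices):

```python
import numpy as np

# Columns of B form a (non-standard) basis of R^3.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

v = np.array([3.0, 5.0, 4.0])

# Solve B a = v. Because B is invertible, the solution a is unique:
# it is v's "address" relative to this basis.
a = np.linalg.solve(B, v)
assert np.allclose(B @ a, v)
```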

3. Understanding Dimension

Now that we know a Basis is the essential set of building blocks for a vector space, a natural question arises: How many building blocks do we actually need?

This counting game leads us to one of the most intuitive concepts in linear algebra: Dimension.

Defining Dimension

According to the definitions in our text, if a vector space has a basis consisting of a finite number of vectors, we call it finite-dimensional.

The dimension of the space, denoted \(\dim(V)\), is simply the number of vectors in that basis (every basis of \(V\) contains the same number of vectors, so this count is well defined). It is a single number that tells you how “large” or “complex” the space is.

Simple Examples from the Text

Let’s look at the dimension of the spaces we discussed earlier. You just need to count the elements in the basis!

1. The “Zero” Space (\(\{0\}\))

  • Basis: The empty set \(\emptyset\).
  • Count: 0.
  • Result: The dimension is 0. It contains no “directions” to move in.

2. The Standard Space (\(\mathbb{F}^n\))

  • Basis: The standard basis \(\{e_1, e_2, \dots, e_n\}\).
  • Count: There are exactly \(n\) vectors.
  • Result: \(\dim(\mathbb{F}^n) = n\). This matches our intuition (e.g., \(\mathbb{R}^3\) is 3-dimensional).

3. The Matrix Space (\(M_{m \times n}\))

  • Basis: The unit matrices \(E^{ij}\) (one for each entry position).
  • Count: A matrix with \(m\) rows and \(n\) columns has \(m \times n\) total entries.
  • Result: \(\dim(M_{m \times n}) = mn\).
    • Quick check: A \(2 \times 3\) matrix space acts just like a 6-dimensional space (\(2 \times 3 = 6\)).
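That “acts just like” can be made literal: flattening a \(2 \times 3\) matrix into the vector of its 6 entries is exactly the coordinate map with respect to the \(E^{ij}\) basis. A short sketch (the entries are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Flattening lists A's coordinates against the unit-matrix basis E^{ij},
# identifying M_{2x3} with R^6.
coords = A.reshape(-1)   # shape (6,)

# The identification loses nothing: reshaping back recovers A exactly.
assert np.array_equal(coords.reshape(2, 3), A)
```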

4. The Polynomial Space (\(P_n(F)\))

  • Basis: \(\{1, x, x^2, \dots, x^n\}\).
  • Count: Be careful here! We start counting from \(x^0\) (which is 1) up to \(x^n\). The total count is \(n + 1\).
  • Result: \(\dim(P_n(F)) = n + 1\).
    • Note: Polynomials of degree 2 require 3 coefficients (\(ax^2 + bx + c\)), so the dimension is 3.

4. Conclusion

We started with a definition of Basis (Independence + Span), proved that it gives us a Unique way to represent vectors, and finally used the size of that basis to define Dimension. These three concepts form the tripod that supports all of linear algebra.

Appendix: Proof for the Uniqueness

Let \(\beta = \{u_1, u_2, \dots, u_n\}\) be a subset of a vector space \(V\). Then \(\beta\) is a basis for \(V\) if and only if every vector \(v \in V\) can be uniquely expressed as a linear combination of the vectors in \(\beta\).

We prove both directions: if \(\beta\) is a basis, then the representation is unique; and conversely, if every representation is unique, then \(\beta\) is a basis.

Proof: From Basis to Uniqueness (\(\Rightarrow\))

Step 1: Proving Existence (Using Spanning)

First, we establish that a representation exists. By definition, since \(\beta\) is a basis for \(V\), it generates (spans) the entire space \(V\).

This means that for any vector \(v \in V\), there exist scalars \(a_1, \dots, a_n\) such that:

$$v = a_1u_1 + a_2u_2 + \dots + a_nu_n$$

Step 2: Proving Uniqueness (Using Linear Independence)

Now, we must prove that this is the only way to write \(v\).

Suppose, for the sake of argument, that there is another representation of \(v\) using different scalars \(b_1, \dots, b_n\):

$$v = b_1u_1 + b_2u_2 + \dots + b_nu_n$$

If we subtract the second equation from the first, the vector \(v\) cancels out on the left side, leaving the zero vector:

$$0 = (a_1 - b_1)u_1 + (a_2 - b_2)u_2 + \dots + (a_n - b_n)u_n$$

Here is where the definition of a basis becomes crucial. Since \(\beta\) is a linearly independent set, the only linear combination of its vectors that equals zero is the trivial one (where all coefficients are zero).

Therefore, every coefficient in the equation above must be zero:

$$a_1 - b_1 = 0, \quad a_2 - b_2 = 0, \quad \dots, \quad a_n - b_n = 0$$

This implies:

$$a_1 = b_1, \quad a_2 = b_2, \quad \dots, \quad a_n = b_n$$

Conclusion:

The two representations are identical. Thus, the expression of \(v\) as a linear combination of \(\beta\) is unique.

Proof: From Uniqueness to Basis (\(\Leftarrow\))

We want to prove:

If every vector \(v \in V\) can be uniquely expressed as a linear combination of \(\beta\), then \(\beta\) is a basis.

Proof:

To define \(\beta\) as a basis, we must satisfy two conditions:

  1. \(\beta\) generates (spans) \(V\).
  2. \(\beta\) is linearly independent.

Step 1: Checking Span

The hypothesis states that “each \(v \in V\) can be uniquely expressed as a linear combination of vectors of \(\beta\).”

Since every \(v\) can be expressed, it automatically follows that \(\beta\) spans \(V\).

$$\text{span}(\beta) = V$$

(Check!)

Step 2: Checking Linear Independence

We need to verify that the only way to form the zero vector is with all zero scalars.

Consider the equation:

$$c_1u_1 + c_2u_2 + \dots + c_nu_n = 0$$

We know that the zero vector \(0\) can be written as a linear combination where all scalars are zero:

$$0u_1 + 0u_2 + \dots + 0u_n = 0$$

Now, look at our hypothesis again: Uniqueness.

Since the representation of any vector (including the zero vector) must be unique, there cannot be a second, different way to write \(0\).

Therefore, the coefficients in our first equation must match the coefficients in the zero equation.

$$c_1 = 0, c_2 = 0, \dots, c_n = 0$$

Since the only solution is the trivial solution, \(\beta\) is linearly independent.

(Check!)
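For column vectors in \(\mathbb{R}^n\), this “only the trivial combination gives zero” condition has a convenient numerical test: \(Bc = 0\) forces \(c = 0\) exactly when the matrix \(B\) of basis vectors has full column rank. A small sketch reusing the basis from earlier (an arbitrary choice):

```python
import numpy as np

# Columns are the candidate basis vectors u_1, u_2, u_3.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Full column rank <=> B c = 0 only for c = 0 <=> columns are linearly independent.
assert np.linalg.matrix_rank(B) == B.shape[1]
```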

Conclusion:

Since \(\beta\) spans \(V\) and is linearly independent, \(\beta\) is a basis for \(V\).
