In the theory of vector spaces, a set of vectors is called linearly independent if no non-trivial linear combination of them equals the zero vector. Conversely, if such a combination exists, the set is called linearly dependent. These concepts are central to the definition of dimension: the dimension of a vector space is the maximum number of linearly independent vectors it contains.
Equivalently, a set of vectors is linearly dependent if and only if at least one of them can be expressed as a linear combination of the others.
Specifically, suppose v1, v2, ..., vk are vectors in a vector space V. The set is called linearly dependent when there exist scalars a1, a2, ..., ak, not all zero, such that
a1v1 + a2v2 + ... + akvk = 0.
In other words, if some coefficient ai is non-zero, then vi can be solved for and written as a linear combination of the remaining vectors. Conversely, if the only solution is a1 = a2 = ... = ak = 0, then the set of vectors is linearly independent.
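As a concrete illustration, here is a minimal sketch (assuming NumPy is available; the helper name is ours) that tests this definition numerically. It relies on the standard fact that the vectors are linearly independent exactly when the matrix having them as columns has full column rank, i.e. when the equation above has only the all-zero solution.

```python
import numpy as np

def is_linearly_independent(vectors):
    """Return True if the given vectors are linearly independent.

    The vectors v1, ..., vk are stacked as the columns of a matrix A;
    a1*v1 + ... + ak*vk = 0 has only the trivial solution exactly
    when A has full column rank k.
    """
    A = np.column_stack(vectors)
    # Note: matrix_rank works in floating point, so this is a
    # numerical test, not an exact symbolic one.
    return np.linalg.matrix_rank(A) == A.shape[1]

# Example: the third vector is the sum of the first two,
# so the three together are linearly dependent.
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
w = u + v
print(is_linearly_independent([u, v]))     # True
print(is_linearly_independent([u, v, w]))  # False
```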
In the infinite-dimensional case, an infinite set of vectors is defined to be linearly independent if every non-empty finite subset of it is linearly independent. For instance, in the space of polynomials, the monomials 1, x, x^2, ... are linearly independent, since any finite subset of them is.
In addition, for the case of exactly two vectors: the two vectors are linearly dependent if and only if one is a scalar multiple of the other; equivalently, they are independent precisely when neither is a scalar multiple of the other. More generally, any set that contains the zero vector is linearly dependent, since multiplying the zero vector by the scalar 1 (and every other vector by 0) already gives a non-trivial linear combination equal to the zero vector. Consequently, the zero vector cannot appear in any set of linearly independent vectors.
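For the two-vector case in the plane, a quick sketch (again assuming NumPy; the helper name dependent_2d is hypothetical) uses the standard fact that two vectors in the plane are dependent exactly when the determinant of the 2x2 matrix formed from them is zero:

```python
import numpy as np

def dependent_2d(u, v, tol=1e-12):
    """Two plane vectors are linearly dependent exactly when the
    determinant of the 2x2 matrix [u v] vanishes, i.e. one is a
    scalar multiple of the other (or one of them is zero)."""
    return abs(u[0] * v[1] - u[1] * v[0]) < tol

print(dependent_2d(np.array([1.0, 2.0]), np.array([2.0, 4.0])))  # True: v = 2u
print(dependent_2d(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # False
```

Note that the determinant test also covers the zero-vector case automatically, since a 2x2 matrix with a zero column always has determinant zero.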
To illustrate geometrically: consider vectors u and v, which, if independent, span a plane. If a third vector w lies in that same plane, then the three vectors are linearly dependent. This means that not all three vectors are needed to describe the plane, since u and v alone suffice. Extending this idea, n linearly independent vectors in an n-dimensional space form a basis, so every vector in the space can be written in exactly one way as a linear combination of them.
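To see this uniqueness claim in action, here is a small sketch (assuming NumPy): three independent vectors in 3-dimensional space are placed as the columns of a matrix B, and np.linalg.solve recovers the unique coefficients that express an arbitrary vector in that basis.

```python
import numpy as np

# Three linearly independent vectors in 3-dimensional space.
u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 1.0, 0.0])
w = np.array([0.0, 1.0, 1.0])

B = np.column_stack([u, v, w])   # basis vectors as columns

p = np.array([2.0, 3.0, 5.0])    # an arbitrary vector in the space

# Because the columns of B are independent, B is invertible, so the
# coefficients expressing p in this basis exist and are unique.
coeffs = np.linalg.solve(B, p)
print(coeffs)                     # [ 4. -2.  5.]
print(np.allclose(B @ coeffs, p)) # True: p is recovered exactly
```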
Assessing the linear independence of vectors is not always intuitive. For example, when asked for directions to a place, a person might say, "It is three miles north and four miles east of here." This is sufficient to describe the location, because the "north" vector and the "east" vector are linearly independent. A third "5 miles northeast" direction, obtained by combining 3 miles north with 4 miles east, is a linear combination of the first two vectors and therefore adds no new information; it is redundant.
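The geolocation example can be verified the same way (a sketch, assuming NumPy): the "northeast" displacement is built from the north and east vectors, so appending it as a third column does not increase the rank.

```python
import numpy as np

north = np.array([0.0, 1.0])   # unit vector pointing north
east  = np.array([1.0, 0.0])   # unit vector pointing east

# "Three miles north and four miles east" as a single displacement.
target = 3.0 * north + 4.0 * east

print(np.linalg.norm(target))  # 5.0 -- the displacement is 5 miles long

# The displacement is a linear combination of north and east, so adding
# it as a third direction leaves the rank at 2: it is redundant.
print(np.linalg.matrix_rank(np.column_stack([north, east, target])))  # 2
```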
Evaluating the independence of a set of vectors by hand can still be tedious. By examining the linear combinations and their components one by one, we can determine the relationships among the vectors more clearly. But is there an easier or more intuitive way to understand and evaluate the linear independence of vectors?