Linear algebra
[Figure: A line passing through the origin (blue, thick) in R3 is a linear subspace, a common object of study in linear algebra.]
Linear algebra is a branch of mathematics concerned with the study of vectors, with families of vectors called vector spaces or linear spaces, and with functions which input one vector and output another, according to certain rules. These functions are called linear maps or linear transformations and are often represented by matrices.
Linear algebra is central to modern mathematics and its applications. An elementary application of linear algebra is the solution of systems of linear equations in several unknowns. More advanced applications are ubiquitous, in areas as diverse as abstract algebra and functional analysis. Linear algebra has a concrete representation in analytic geometry and is generalized in operator theory. It has extensive applications in the natural sciences and the social sciences. Nonlinear mathematical models can often be approximated by linear ones.
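As a concrete illustration of the elementary application mentioned above, the short sketch below solves a system of two linear equations in two unknowns numerically. The particular system and the use of Python with NumPy are choices made here for illustration, not part of the original text.

    # Solve the system  2x + y = 5,  x - 3y = -1  numerically.
    # Illustrative sketch using NumPy; any standard linear-algebra library would do.
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, -3.0]])   # coefficient matrix
    b = np.array([5.0, -1.0])     # right-hand side

    x = np.linalg.solve(A, b)     # unique solution since det(A) != 0
    print(x)                      # [2. 1.]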
History
Many of the basic tools of linear algebra, particularly those concerned with the solution of systems of linear equations, date to antiquity. See, for example, the history of Gaussian elimination. But the abstract study of vectors and vector spaces did not begin until the 1600s. The origin of many of these ideas is discussed in the article on determinants. The method of least squares, first used by Gauss in the 1790s, is an early and significant application of the ideas of linear algebra.
The subject began to take its modern form in the mid-19th century, which saw many ideas and methods of previous centuries generalized as abstract algebra. Matrices and tensors were introduced and well understood by the turn of the 20th century. The use of these objects in special relativity, statistics, and quantum mechanics did much to spread the subject of linear algebra beyond pure mathematics.
Main structures
The main structures of linear algebra are vector spaces and linear maps between them. A vector space is a set whose elements can be added together and multiplied by scalars, or numbers. In many physical applications, the scalars are real numbers, R. More generally, the scalars may form any field F; thus one can consider vector spaces over the field Q of rational numbers, the field C of complex numbers, or a finite field Fq. These two operations must behave similarly to the usual addition and multiplication of numbers: addition is commutative and associative, multiplication distributes over addition, and so on. More precisely, the two operations must satisfy a list of axioms chosen to emulate the properties of addition and scalar multiplication of Euclidean vectors in the coordinate n-space Rn. One of the axioms stipulates the existence of a zero vector, which behaves analogously to the number zero with respect to addition. Elements of a general vector space V may be objects of any nature, for example, functions or polynomials, but when viewed as elements of V, they are frequently called vectors.
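To make the point that vectors need not be geometric arrows, the sketch below treats polynomials of degree at most two as vectors, represented by their coefficient lists. The representation and the NumPy usage are illustrative choices, not taken from the text.

    # Polynomials of degree <= 2, written as coefficient vectors (a0, a1, a2)
    # for a0 + a1*x + a2*x^2, form a vector space over R: they can be added
    # and multiplied by scalars componentwise. Illustrative sketch only.
    import numpy as np

    p = np.array([1.0, 0.0, 2.0])   # 1 + 2x^2
    q = np.array([0.0, 3.0, -1.0])  # 3x - x^2

    print(p + q)      # coefficients of 1 + 3x + x^2
    print(2.5 * p)    # coefficients of 2.5 + 5x^2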
Similarly, a linear map (or linear transformation) between two vector spaces V and W over a field F is a function T: V → W which is compatible with addition and scalar multiplication:
T(u + v) = T(u) + T(v) and T(ru) = rT(u)
for any vectors u, v ∈ V and a scalar r ∈ F.
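The two compatibility conditions can be checked numerically for any map given by a matrix; the short sketch below does this for an arbitrary 2 × 2 matrix, chosen here purely for illustration.

    # Verify T(u + v) = T(u) + T(v) and T(r*u) = r*T(u) for the map
    # T(x) = A x given by a matrix A. Illustrative check, not a proof.
    import numpy as np

    A = np.array([[1.0, 2.0],
                  [0.0, -1.0]])
    T = lambda x: A @ x

    u, v, r = np.array([1.0, 4.0]), np.array([-2.0, 3.0]), 3.5
    print(np.allclose(T(u + v), T(u) + T(v)))   # True
    print(np.allclose(T(r * u), r * T(u)))      # True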
A fundamental role in linear algebra is played by the notions of linear combination, span, and linear independence of vectors, and by the notions of basis and dimension of a vector space. Given a vector space V over a field F, an expression of the form
r1v1 + r2v2 + … + rkvk,
where v1, v2, …, vk are vectors and r1, r2, …, rk are scalars, is called the linear combination of the vectors v1, v2, …, vk with coefficients r1, r2, …, rk. The set of all linear combinations of vectors v1, v2, …, vk is called their span. A linear combination of any system of vectors with all zero coefficients is the zero vector of V. If this is the only way to express the zero vector as a linear combination of v1, v2, …, vk then these vectors are linearly independent. A linearly independent set of vectors that spans a vector space V is a basis of V. If a vector space admits a finite basis then any two bases have the same number of elements, called the dimension of V, and V is a finite-dimensional vector space. This theory can be extended to infinite-dimensional spaces.
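Linear independence and spanning can be tested numerically by computing the rank of the matrix whose columns are the given vectors; the sketch below does this for three vectors in R3. The particular vectors are illustrative choices.

    # Three vectors in R^3 are linearly independent (and hence a basis of R^3)
    # exactly when the 3x3 matrix with these vectors as columns has rank 3.
    # Illustrative sketch.
    import numpy as np

    v1 = np.array([1.0, 0.0, 1.0])
    v2 = np.array([0.0, 1.0, 1.0])
    v3 = np.array([1.0, 1.0, 0.0])

    M = np.column_stack([v1, v2, v3])
    print(np.linalg.matrix_rank(M))   # 3, so {v1, v2, v3} is a basis of R^3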
There is an important distinction between the coordinate n-space Rn and a general finite-dimensional vector space V. While Rn has a standard basis {e1, e2, …, en}, a vector space V typically does not come equipped with a basis, and many different bases exist (although they all consist of the same number of elements, equal to the dimension of V). Having a particular basis {v1, v2, …, vn} of V allows one to construct a coordinate system in V: the vector with coordinates (r1, r2, …, rn) is the linear combination
r1v1 + r2v2 + … + rnvn.
The condition that v1, v2, …, vn span V guarantees that each vector v can be assigned coordinates, whereas the linear independence of v1, v2, …, vn further assures that these coordinates are determined in a unique way (i.e. there is only one linear combination of the basis vectors that is equal to v). In this way, once a basis of a vector space V over F has been chosen, V may be identified with the coordinate n-space Fn. Under this identification, addition and scalar multiplication of vectors in V correspond to addition and scalar multiplication of their coordinate vectors in Fn.
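Finding the coordinates of a vector with respect to a chosen basis amounts to solving a linear system, as the sketch below illustrates for the basis of R3 used in the earlier example. This is again only an illustrative computation.

    # Coordinates (r1, r2, r3) of a vector v in the basis {v1, v2, v3} satisfy
    # r1*v1 + r2*v2 + r3*v3 = v, i.e. the linear system M r = v, where the
    # columns of M are the basis vectors. Illustrative sketch.
    import numpy as np

    M = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 1.0, 0.0]])      # columns: v1, v2, v3
    v = np.array([2.0, 3.0, 1.0])

    r = np.linalg.solve(M, v)            # unique because the basis is independent
    print(r)                             # coordinates of v in this basis
    print(np.allclose(M @ r, v))         # True: r1*v1 + r2*v2 + r3*v3 == v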
Furthermore, if V and W are n-dimensional and m-dimensional vector spaces over F, respectively, and a basis of V and a basis of W have been fixed, then any linear transformation T: V → W may be encoded by an m × n matrix A with entries in the field F, called the matrix of T with respect to these bases. Therefore, by and large, the study of linear transformations, which were defined axiomatically, may be replaced by the study of matrices, which are concrete objects. This is a major technique in linear algebra.
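The passage from an abstract linear map to its matrix can be made concrete: the j-th column of the matrix holds the coordinates of the image of the j-th basis vector. The sketch below builds, in this way, the matrix of differentiation on polynomials of degree at most two in the monomial basis {1, x, x2}; the example map and basis are illustrative choices, not taken from the text.

    # Matrix of the linear map D(p) = p' on polynomials of degree <= 2,
    # in the monomial basis {1, x, x^2}: column j contains the coordinates
    # of D applied to the j-th basis vector. Illustrative sketch.
    import numpy as np

    def D(coeffs):
        # derivative of a0 + a1*x + a2*x^2 is a1 + 2*a2*x
        a0, a1, a2 = coeffs
        return np.array([a1, 2.0 * a2, 0.0])

    basis = [np.array([1.0, 0.0, 0.0]),   # 1
             np.array([0.0, 1.0, 0.0]),   # x
             np.array([0.0, 0.0, 1.0])]   # x^2

    A = np.column_stack([D(b) for b in basis])
    print(A)
    # [[0. 1. 0.]
    #  [0. 0. 2.]
    #  [0. 0. 0.]]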
Vector spaces over the complex numbers
Remarkably, 2 × 2 complex matrices were studied before 2 × 2 real matrices. Early topics of interest included biquaternions and Pauli algebra. Investigation of 2 × 2 real matrices revealed the less common split-complex numbers and dual numbers, which are at variance with the Euclidean nature of the ordinary complex number plane.
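For illustration, dual numbers a + bε (with ε2 = 0) can be realized as 2 × 2 real matrices, as sketched below. The matrix representation shown is a standard one, but it is supplied here as an example rather than quoted from the text.

    # Dual numbers a + b*eps, with eps^2 = 0, realized as 2x2 real matrices
    # [[a, b], [0, a]]. Multiplying two such matrices reproduces the rule
    # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps. Illustrative sketch.
    import numpy as np

    def dual(a, b):
        return np.array([[a, b],
                         [0.0, a]])

    eps = dual(0.0, 1.0)
    print(eps @ eps)                         # zero matrix: eps^2 = 0
    print(dual(2.0, 3.0) @ dual(5.0, -1.0))  # dual(10, 13), matching the rule above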
Some useful theorems
- Every vector space has a basis.[1]
- Any two bases of the same vector space have the same cardinality; equivalently, the dimension of a vector space is well-defined.[2]
- A square matrix is invertible if and only if its determinant is nonzero.[3]
- A matrix is invertible if and only if the linear map represented by the matrix is an isomorphism.
- If a square matrix has a left inverse or a right inverse then it is invertible (see invertible matrix for other equivalent statements).
- A symmetric matrix is positive semidefinite if and only if each of its eigenvalues is greater than or equal to zero.
- A symmetric matrix is positive definite if and only if each of its eigenvalues is greater than zero.
- An n×n matrix A is diagonalizable (i.e. there exists an invertible matrix P and a diagonal matrix D such that A = PDP−1) if and only if it has n linearly independent eigenvectors (illustrated numerically after this list).
- The spectral theorem states that a real matrix is orthogonally diagonalizable if and only if it is symmetric.
For more information regarding the invertibility of a matrix, consult the invertible matrix article.
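The following numerical sketch illustrates a few of the statements above on a small symmetric matrix: the determinant test for invertibility, the eigenvalue test for positive definiteness, and diagonalization via eigenvectors. The example matrix is an arbitrary illustrative choice.

    # Numerical illustration of some of the theorems above, on an
    # arbitrary 2x2 symmetric matrix. Illustrative sketch only.
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    print(np.linalg.det(A))        # about 3.0 (nonzero), so A is invertible

    eigvals, eigvecs = np.linalg.eig(A)
    print(eigvals)                 # eigenvalues 3 and 1, both > 0, so A is positive definite

    # A = P D P^{-1} with P the matrix of eigenvectors (diagonalizability)
    P, D = eigvecs, np.diag(eigvals)
    print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True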
Generalizations and related topics
Since linear algebra is a successful theory, its methods have been developed in other parts of mathematics. In module theory one replaces the field of scalars by a ring. In multilinear algebra one considers multivariable linear transformations, that is, mappings which are linear in each of a number of different variables. This line of inquiry naturally leads to the idea of the tensor product. Functional analysis mixes the methods of linear algebra with those of mathematical analysis.
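As a small illustration of the multilinear point of view, the sketch below represents a bilinear map on R2 × R3 by a coefficient array with one index per argument, the coordinate shadow of a tensor. The specific array and the use of NumPy are illustrative choices.

    # A bilinear map B: R^2 x R^3 -> R can be stored as a 2x3 array T with
    # B(u, v) = sum_{i,j} T[i, j] * u[i] * v[j]; it is linear in u for fixed v
    # and linear in v for fixed u. Illustrative sketch.
    import numpy as np

    T = np.array([[1.0, 0.0, 2.0],
                  [0.0, -1.0, 3.0]])

    def B(u, v):
        return u @ T @ v

    u1, u2 = np.array([1.0, 2.0]), np.array([0.0, 1.0])
    v = np.array([1.0, 1.0, 1.0])
    print(np.isclose(B(u1 + u2, v), B(u1, v) + B(u2, v)))   # True: linear in the first slot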