Linear Algebra
I. A New Isomorphism: The Matrix
a. Given a system of linear equations, we
can arrange the equations into a matrix based on the variable (or power of the
variable) corresponding to each coefficient.
i. EX: 3x + y = 7, 9x - 8y = 8  ⇒
[ 3  1 | 7 ]
[ 9 -8 | 8 ]
b. Basics
i. An n x m matrix contains n rows and m columns; its element in row i and column j is written a_ij, for i = 1, …, n and j = 1, …, m.
ii.
Coefficient Matrix- A matrix containing only the coefficients
of the variables in a system of equations.
iii.
Augmented Matrix- A matrix containing both the
coefficients and the constant terms in a system of equations (see EX Iai)
iv.
Square Matrix- A matrix where the number of rows
equals the number of columns (n x n).
v.
Diagonal Matrix- A matrix wherein all elements above and
below the main diagonal are zero.
1. Main
Diagonal-The diagonal from
the top left element to the lower right one.
vi. Upper/Lower Triangular Matrix- A matrix wherein all elements below/above the main diagonal are zero (an upper triangular matrix has zeros below the diagonal, a lower triangular matrix has zeros above it).
c. Column Vectors
i.
Column Vector- A matrix with only one column,
sometimes denoted as ‘vector’ only;
1.
Components- The entries in a vector (column or row).
2. Standard Representation of Vectors- A vector v = [x ; y] is normally represented in the Cartesian plane by a directed line segment from the origin to the point (x, y). Vectors are traditionally allowed to slide at will, having no fixed position, only direction and magnitude.
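A quick sketch of I.a in Python (sympy is assumed here): building the coefficient and augmented matrices for the example system above.

    from sympy import Matrix

    # Coefficient matrix of the system 3x + y = 7, 9x - 8y = 8
    A = Matrix([[3, 1],
                [9, -8]])
    b = Matrix([7, 8])          # column vector of constant terms
    augmented = A.row_join(b)   # [[3, 1, 7], [9, -8, 8]]
    print(augmented)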
II. Reduced Row Echelon Form (RREF).
a. Matrices can be manipulated with
elementary row operations without compromising the isomorphism (losing
answers).
i.
Elementary
Row Operations:
1. Multiplication by a nonzero scalar (real
number)
2. Adding one row to another
3. Swapping Rows
b. Reduced
Row Echelon Form (RREF)-
A matrix is in RREF ⇔
i.
The
leftmost non-zero entry in every non-zero row is a one.
ii.
Every
entry in the column containing a leading one is zero.
iii.
Every
row below a row containing a leading one has a leading one to the right.
c.
Rank- The number of leading 1s in a matrix’s RREF is the rank of
that matrix; Consider A, an n x m matrix.
i. If rank (A) = m, then the system has at most one solution (exactly one if the system is consistent).
ii.
If
rank (A) < m, then the system has
either infinitely many or no solutions.
d. This process of reduction carries three
distinct possibilities:
i.
The
RREF of the coefficient matrix is the ‘identity matrix’ (rows containing only zeroes
are admissible as long as the remainder represents an identity matrix). Then, there exists only one solution to the
system of equations, and reintroducing the variables will give it (multiply by
the column vector containing the variables in their respective order).
1. Identity
Matrix- A matrix
containing 1s on the main diagonal and 0s elsewhere.
2. This is only possible when there are at
least as many equations as unknowns.
ii. The matrix reduction produces a contradiction of the type 0 = c for some nonzero c ∈ R; then the system has no solutions.
iii.
The
matrix reduction fails to produce a RREF conforming to i or a
contradiction. This occurs when there is
a variable in the system that is not dependent on the others and therefore
multiple correct solutions exist.
1. Free
variable- A variable in
a system which is not dependent on any of the others and therefore does not
reduce out of the matrix.
2. To express this solution, reintroduce variables and simply solve for the dependent (leading) variables; each free variable is set equal to itself.
3. Example: see the worked sketch after this list for a concrete system with one free variable.
4. It may be helpful to think of the last
column as separated from the rest of the matrix by an ‘=’.
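A sketch of the reduction process in Python (sympy assumed); the matrix below is a hypothetical three-variable system chosen to show a free variable, not one taken from the notes.

    from sympy import Matrix

    # Hypothetical augmented matrix [A | b] for a 3-variable system
    M = Matrix([[1, 2, 1, 4],
                [2, 4, 3, 9],
                [1, 2, 2, 5]])
    R, pivots = M.rref()
    print(R)        # [[1, 2, 0, 3], [0, 0, 1, 1], [0, 0, 0, 0]]
    print(pivots)   # (0, 2): x and z are leading variables, y is free
    # Reading off the solution: x = 3 - 2y, z = 1, y = y (a line in R^3)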
e. Geometry
i.
Matrices
have strong ties to geometrical concepts and the insight necessary to solve
many linear algebra problems will be found by considering the geometric
implications of a system (and corresponding matrices).
ii.
Considering
the above example, it’s clear our system represents three planes (three
variables in each equation). Thus it
should not surprise us that the intersections of three planes (the solutions)
can either not occur (parallel planes, ii above) or take the form of a point (i
above), a line (one free variable), or plane (two free variables).
III. Matrix Algebra
a. Matrix Addition
i.
Matrix
addition is accomplished by simply adding elements in the same position to form
a new matrix.
b. Scalar Multiplication
i.
The
scalar is multiplied by every element individually.
c. Matrix-Vector Multiplication
i. If the number of rows in the column vector matches the number of columns in the matrix, the product Ax is defined: its ith entry is a_i1 x_1 + a_i2 x_2 + … + a_im x_m (each row of A dotted with x). Otherwise the product is undefined.
ii.
This
is often defined in terms of the columns or rows of A (it’s not hard to
translate)
iii.
It’s
helpful to think of placing the column vector horizontally above the matrix,
multiplying downward, then summing for each row.
d. Algebraic Rules
i. If A is an n x m matrix, x and y are vectors in Rm, and k is a scalar, then:
1. A(x + y) = Ax + Ay
2. A(kx) = k(Ax)
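A small numerical check of these rules (Python with numpy assumed; the matrix and vectors are arbitrary examples):

    import numpy as np

    A = np.array([[1, 2, 0],
                  [3, -1, 4]])      # a 2 x 3 matrix
    x = np.array([1, 0, 2])
    y = np.array([-1, 1, 1])
    k = 3.0

    # Matrix-vector product: each entry is a row of A dotted with the vector
    print(A @ x)                                      # [1, 11]
    print(np.allclose(A @ (x + y), A @ x + A @ y))    # True: A(x + y) = Ax + Ay
    print(np.allclose(A @ (k * x), k * (A @ x)))      # True: A(kx) = k(Ax)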

IV.
Linear
Transformations
a.
Matrix
Form of a Linear System
i. A linear system can be written in matrix form as Ax = b, where A is the ‘matrix of transformation’, x is the column vector containing the variables of the system, and b is the column vector of constant terms to which the variables are equal.



b.
Linear Transformation
i. A function T from Rm to Rn for which there exists an n x m matrix A such that T(x) = Ax for all x in Rm; equivalently,
1. T(x + y) = T(x) + T(y) for all x and y ∈ Rm
2. T(kx) = kT(x) for all x ∈ Rm and all scalars k


c. Finding the ‘Matrix of Transformation’,
A
i. Standard Vectors- The vectors e1, …, em in Rm that contain a 1 in the position noted in their subscript and 0s in all others.
ii. Using the standard vectors, T(ei) = A ei gives the ith column of A, so A can be built column by column from T(e1), …, T(em) (see the sketch below).
iii. Identity Transformation
1. The transformation that returns x unchanged and thus has the identity matrix as its matrix of transformation.
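A sketch of this column-by-column construction (Python with numpy assumed; the 90-degree rotation used for T is an illustrative choice, not an example from the notes):

    import numpy as np

    # Hypothetical linear transformation: rotation of the plane by 90 degrees
    def T(x):
        return np.array([-x[1], x[0]])

    # The i-th column of the matrix of transformation is T(e_i)
    e1, e2 = np.array([1, 0]), np.array([0, 1])
    A = np.column_stack([T(e1), T(e2)])
    print(A)                                                        # [[0, -1], [1, 0]]
    print(np.allclose(A @ np.array([3, 4]), T(np.array([3, 4]))))   # True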

d. Geometry
i.
Linear
transformations can be found for many geometrical operations such as rotation,
scaling, projection, translation, etc.
V. Composing Transformations and Matrix
Multiplication
a. Just as we can compose functions and
generate another function, so can we compose linear transformations and
generate another linear transformation.
This composition can be represented as T(x) = B(A(x)).

b. To translate this into a new linear
transformation, we need to find the new matrix of transformation C=BA; this
process is known as ‘matrix multiplication’.
i.
Matrix
Multiplication
1. Let B be an n x p matrix and A a q x m matrix. The product BA is defined ⇔ p = q.
2. If B is an n x p matrix and A a p x m matrix, then the product BA is defined as the matrix of the linear transformation (BA)x = B(Ax) for all x in Rm. The product BA is an n x m matrix.


3. Arrange the two matrices as follows (the order is important!): write B on the left and A above and to its right, so that each entry of the product sits at the crossing of a row of B and a column of A. For each new element in the product, you must multiply the elements of the old matrices along the two lines that cross at its position, then sum the products. In this case, the new element will be equal to a11b11 + a21b12 + a31b13. Repeat for every position.
ii.
Properties
of Matrix Multiplication
1. MATRIX MULTIPLICATION IS NONCOMMUTATIVE
a. BA ≠ AB (in general)
b. That means that what side of a matrix
you write another matrix on matters! This is not normal so pay close attention!
c. BA and AB both exist ⇔ A is n x m and B is m x n; both products are then square, though A and B themselves need not be.
2. Matrix Multiplication is associative
a. (AB)C = A(BC)
3. Distributive Property
a. A(C+D) = AC + AD
b. (A+B)C = AC + BC
i.
Be
careful! Column-Row rule for matrix multiplication still applies!
4. Scalars
a. (kA)B = A(kB) = k(AB)
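A quick numerical illustration of these properties (Python with numpy assumed; the matrices are arbitrary):

    import numpy as np

    B = np.array([[1, 2],
                  [0, 1]])
    A = np.array([[0, 1],
                  [1, 0]])

    print(B @ A)     # [[2, 1], [1, 0]]
    print(A @ B)     # [[0, 1], [1, 2]]  -- different: BA != AB
    C = np.array([[1, 1],
                  [2, 0]])
    print(np.allclose((B @ A) @ C, B @ (A @ C)))   # True: associativity holds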
VI.
The
Inverse of a Linear Transformation
a. A Linear Transformation is invertible if the RREF of its matrix is an identity matrix and therefore provides a single solution x for any b (ensures bijectivity)

i.
Invertible Matrix- A matrix which, when used as the matrix
of transformation in a linear transformation, produces an invertible linear
transformation.
ii. As stated earlier, this requires that the matrix in question either be square or that any excess rows reduce to rows of zeroes.
iii.
Additional
Properties of an Invertible Matrix
1.
rref(A)
= In
2.
rank
(A) = n
3.
im
(A) = Rn
4. ker (A) = {0}

5.
Column
Vectors form a basis of Rn
6.
det
(A) ≠ 0
7.
0
fails to be an eigenvalue of A
b.
Finding
the Inverse of a Matrix
i. To find the inverse of a matrix A, combine A with the same-sized identity matrix to form [ A | In ], then row-reduce the combined matrix. When the left block becomes the identity matrix, the right block will be A-1: [ A | In ] ⇒ [ In | A-1 ].
ii. AA-1 = In and A-1A = In
iii. For a 2 x 2 matrix A = [ a b ; c d ] with ad - bc ≠ 0, A-1 = (1/(ad - bc)) [ d -b ; -c a ].




iv. (AB)-1 = B-1A-1
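A sketch of the [ A | In ] reduction in Python (sympy assumed; the 2 x 2 matrix is an arbitrary example):

    from sympy import Matrix, eye

    A = Matrix([[2, 1],
                [5, 3]])
    # Row-reduce [A | I]; when the left block is I, the right block is A^-1
    M = A.row_join(eye(2))
    R, _ = M.rref()
    A_inv = R[:, 2:]
    print(A_inv)                  # Matrix([[3, -1], [-5, 2]])
    print(A * A_inv == eye(2))    # True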
c. Geometry
i.
The
invertibility or non-invertibility of a given matrix can also be viewed from a
geometric perspective based on the conservation of information with the
corresponding linear transformation. If
any information is lost during the transformation, the matrix will not be
invertible (and conversely: if no information is lost, it is invertible).
ii.
Consider
two geometric processes, translation and projection. After a moment of thought, it should be
obvious that, given knowledge of the translation, we could undo any translation
we’re given. Conversely, given knowledge
of the type of projection and the projection itself, there are still infinitely
many vectors which could correspond to any vector on our plane. Thus, translation is invertible, and
projection is not.
VII. The Image and Kernel of a Linear
Transformation
a. Linear Combinations- A vector b in Rn is called a linear combination of the vectors v1, …, vm in Rn if there exist scalars x1, …, xm such that b = x1v1 + … + xmvm.



b. Span- The set of all linear combinations x1v1 + … + xmvm of the vectors v1, …, vm is called their span:
i. span(v1, …, vm) = { x1v1 + … + xmvm : x1, …, xm in R }

c. Spanning Set- A set of vectors v1, …, vm ∈ V which can express every vector in V as a linear combination of themselves.
i. span(v1, …, vm) = V

d.
Subspace of Rn- A subset W of the vector space Rn
is called a (linear) subspace of Rn if it has the following
properties:
i.
W
contains the zero vector in Rn
ii.
W
is closed under (vector) addition
iii.
W
is closed under scalar multiplication.
1.
ii
and iii together mean that W is closed under linear combination.
e. Image of a Linear Transformation- The image of the linear transformation T(x) = Ax is the span of the column vectors of A.
i. im(T) = im(A) = span(v1, …, vm), where v1, …, vm are the column vectors of A.
ii. The image of T: Rm → Rn is a subspace of the target space Rn, or im(A) ⊆ Rn
1.
Properties
a.
The
zero vector in Rn is in the image of T
b.
The
image of T is closed under addition.
c.
The
image of T is closed under scalar multiplication
f. Kernel of a Linear Transformation- All zeroes of the linear transformation, i.e. all solutions to T(x) = Ax = 0; denoted ker(T) and ker (A).
i. The kernel of T: Rm → Rn is a subspace of the domain Rm, or ker(A) ⊆ Rm
1.
Properties
a.
The
zero vector in Rm is in the kernel of T
b.
The
kernel is closed under addition
c.
The
kernel is closed under scalar multiplication
ii.
Finding
the Kernel
1. To find the kernel, simply solve the system of equations denoted by Ax = 0. In this case, since all the constant terms are zero, we can ignore them (they won’t change under row operations) and just find rref (A).

2.
Solve
for the leading (dependent) variables
3.
The
ker(A) is equal to the span of the vectors with the variables removed or just
the system solved for the leading variables.

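A sketch of this kernel computation in Python (sympy assumed; the matrix is a hypothetical example):

    from sympy import Matrix

    # Hypothetical matrix; to solve A x = 0 we only need rref(A)
    A = Matrix([[1, 2, 3],
                [2, 4, 6],
                [1, 2, 4]])
    R, pivots = A.rref()
    print(R, pivots)       # pivots (0, 2): x1 and x3 lead, x2 is free
    print(A.nullspace())   # [Matrix([[-2], [1], [0]])], a basis of ker(A)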
VIII.
Bases
and Linear Independence
a. Redundant Vectors- We say that a vector vi in the list v1, …, vm is redundant if vi is a linear combination of the preceding vectors v1, …, vi-1.



b. Linear Independence- The vectors v1, …, vm are called linearly independent if none of them is redundant; otherwise, they are called linearly dependent.
i. Equivalently, ker [ v1 … vm ] = {0}
ii. Equivalently, rank [ v1 … vm ] = m

c. Basis- The vectors v1, …, vm form a basis of a subspace V of Rn if they span V and are linearly independent (the vectors are required to be in V).


d.
Finding
a Basis
i.
To
construct a basis, say of the image of a matrix A, list all the column vectors
of A and omit the redundant vectors.
ii.
Finding
Redundant Vectors
1. The easiest way to do this is by ‘inspection’, or looking at the vectors (specifically their 0 components) and noticing that if all of the preceding vectors have a zero in some position, no combination of them can produce anything in that position other than a 0.
2.
When
this isn’t possible, we can use a subtle connection between the kernel and
linear independence. The vectors in the
kernel of a matrix correspond to linear relations in which the vectors are set
equal to zero. Thus, solving for the
free variable, we obtain a linear combination.
Long story short, any column of the rref that is not a pivot column (a single 1 with all other entries 0) corresponds to a redundant vector. Moreover, the entries in that column of the rref are the scalars by which the other (pivot) columns must be multiplied to produce the particular vector (note that the column won’t contain a scalar for itself).
iii.
The
number of vectors in a basis is independent of the Basis itself (all bases for
the same subspace have the same number of vectors).
iv.
The
matrix representing a basis will always be invertible.
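A sketch of building a basis of the image by keeping only the pivot (non-redundant) columns (Python with sympy assumed; the matrix is an arbitrary example with one redundant column):

    from sympy import Matrix

    A = Matrix([[1, 2, 1],
                [2, 4, 3],
                [3, 6, 4]])
    R, pivots = A.rref()
    print(pivots)                         # (0, 2): column 1 is redundant (2 * column 0)
    basis = [A.col(j) for j in pivots]    # the pivot columns of A form a basis of im(A)
    print(basis)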
IX. Dimensions
a.
Dimension-The number of vectors needed to form a
basis of the subspace V, denoted dim (V)
i.
If
dim (V) = m
1. There exist at most m linearly independent vectors in V
2.
We
need at least m vectors to span V
3.
If
m vectors in V are linearly independent, then they form a basis of V
4.
If
m vectors in V span V, then they form a basis of V
ii. The Rank-Nullity Theorem: For an n x m matrix A, m = dim(im(A)) + dim(ker(A))
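A quick numerical check of the Rank-Nullity Theorem (Python with sympy assumed; the matrix is arbitrary):

    from sympy import Matrix

    A = Matrix([[1, 2, 0, 1],
                [0, 0, 1, 1],
                [1, 2, 1, 2]])               # a 3 x 4 matrix, so m = 4
    rank = A.rank()                           # dim(im(A))
    nullity = len(A.nullspace())              # dim(ker(A))
    print(rank, nullity, rank + nullity == A.cols)   # 2 2 True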
X.
Coordinates
a.
Using
the idea of basis and spanning set, we can create a new coordinate system for a
particular subspace. This system records
the constant terms needed to generate a particular vector in the subspace.
b. Consider a basis B = (v1, …, vm) of a subspace V of Rn. Then any vector x in V can be written uniquely as x = c1v1 + … + cmvm, and its B-coordinate vector is [x]B = (c1, …, cm).





c. Linearity of Coordinates
i. If B is a basis of a subspace V of Rn, then
1. [x + y]B = [x]B + [y]B
2. [kx]B = k[x]B

d. B-Matrix
i. B-Matrix- The matrix that transforms [x]B into [T(x)]B for a given Linear Transformation, T.


ii.
Finding
the B-Matrix
1. B = [ [T(v1)]B … [T(vm)]B ], where v1, …, vm are the vectors in the Basis B (this is what we did with the standard vectors!)


2. B = S-1AS where S is the
‘Standard Matrix’ and A is the matrix of transformation for T.
a. Standard Matrix- The matrix whose columns are the members of the basis (the spanning set).

b. Whenever this relation holds between two
n x n matrices A and B, we say that A and B are similar, i.e. they represent the same linear transformation with respect to different bases.
i.
Similarity
is an Equivalence Relation
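A sketch of the similarity relation B = S-1AS (Python with sympy assumed; A and S are arbitrary illustrative choices):

    from sympy import Matrix

    # A: matrix of a hypothetical transformation in the standard basis;
    # S: matrix whose columns are the basis vectors of B
    A = Matrix([[1, 2],
                [0, 3]])
    S = Matrix([[1, 1],
                [0, 1]])
    B = S.inv() * A * S               # the B-matrix of the same transformation
    print(B)                          # Matrix([[1, 0], [0, 3]])
    print(B.trace() == A.trace())     # True: similar matrices share trace (and eigenvalues)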
XI. Linear/Vector Spaces
a.
Linear/Vector Space- A set endowed with a rule for addition
and a rule for scalar multiplication such that the following are satisfied:
i.
(f+g)+h
= f+(g+h)
ii.
f+g
= g+f
iii.
There
exists a unique neutral element n in V such that f+n = f
iv.
For each f in V, there exists a unique g in V such that f + g = n
v.
k(f+g)
= kf+kg
vi.
(c+k)f
= cf+kf
vii.
c(kf)
= (ck)f
viii.
1(f)
= f
b.
Vector/Linear
Spaces are often not traditional Rn! Yet, all (unless specified
otherwise) terms/relations transfer wholesale.
i.
Example:
Polynomials! Differentiation! Integration!
1.
Hint:
Pn means all polynomials of degree less than or equal to n
c.
Remember
the definition of subspaces!
d.
Finding
the Basis of a Linear Space (V)
i.
Write
down a typical element of the space in general form (using variables)
ii.
Using
the arbitrary constants as coefficients, express your typical element as a
linear combination of some (particular) elements of V.
1.
Make
sure you’ve captured any relationships between the arbitrary constants!
2.
EX:
In P2, the typical basis is [1, x, x2] from which any
element of P2 can be constructed as a linear combination.
iii.
Verify
the (particular) elements of V in this linear combination are linearly
independent; then they will form a basis of V.
XII. Linear Transformations and Isomorphisms
a.
Linear Transformation (Vector Space)-A function T from a linear space V to a
linear space W that satisfies:
i.
T(f
+ g) = T(f) + T(g)
ii.
T(kf)
= kT(f)
for all
elements f and g of V and for all scalars k.
b. If the domain V is finite dimensional, the rank-nullity theorem holds (with definitions of rank and nullity analogous to earlier): dim(V) = dim(im(T)) + dim(ker(T)).
c. Isomorphisms and Isomorphic Spaces
i.
Isomorphism- An invertible linear transformation.
ii.
Isomorphic Spaces- Two linear/vector spaces are isomorphic
iff there exists an isomorphism between them, symbolized by ‘≅’.
iii.
Properties
1.
A
linear transformation is an isomorphism ⇔
ker(T) = {0} and im(T) = W
Assuming our
linear spaces are finite dimensional:
2. If V is isomorphic to W, then dim(V) =
dim (W)
d. Proving Isomorphic
i.
Necessary
Conditions
1. dim(V) = dim (W)
2. ker(T) = {0}
3. im(T) = W
ii.
Sufficient
Conditions
1. 1 & 2
2. 1 & 3
3. T is invertible (you can write a
formula)
XIII.
The
Matrix of a Linear Transformation
a.
B-Matrix (B)- The matrix which converts an element of the original space V expressed in terms of the basis B into its image under T, also expressed in terms of the basis B.
i. [ f ]B --B--> [ T(f) ]B
ii. [ T(f) ]B = B [ f ]B


b.
Change of Basis Matrix- An invertible matrix which converts
from a basis B to another basis U in the same vector space, or [ f ]U = S [ f ]B, where S or S_B→U denotes the change of basis matrix.
i. The columns of S_B→U are the vectors of the old basis B written in U-coordinates.


c.
As
earlier, the equalities:
i.
AS
= SB
ii.
A
= SBS-1
iii.
B
= S-1AS
hold for linear transformations (A is the matrix of transformation in the standard basis).
XIV.
Orthogonality
a. Orthogonal- Two vectors v and w in Rn are orthogonal (perpendicular) ⇔ v · w = 0.
b. Length (magnitude or norm) of a vector- ||v|| = sqrt(v · v), a scalar.
c. Unit Vector- a vector whose length is 1, usually denoted u.
i. A unit vector can be created from any vector v by u = v / ||v||.
d. Orthonormal Vectors- A set of vectors such that all vectors are both unit vectors and mutually orthogonal.
i. ⇒ for any ui, uj in the set, ui · uj = 1 if i = j and 0 if i ≠ j.
ii.
The
above may come in handy for proofs, especially when combined with distributing
the dot product between a set of orthonormal vectors and another vector.
iii.
Properties
1.
Linearly
Independent
2.
n
orthonormal vectors form a basis for Rn
e.
Orthogonal
Projection
i. Any vector x in Rn can be uniquely expressed in terms of a subspace V of Rn by a vector in V and a vector perpendicular to V (creates a triangle):
1. x = proj_V(x) + x_perp, where proj_V(x) lies in V and x_perp is orthogonal to V.
ii. Finding the Orthogonal Projection
1. If V is a subspace of Rn with an orthonormal basis u1, …, um, then proj_V(x) = (u1 · x)u1 + … + (um · x)um.
2. This can be checked by verifying that x - proj_V(x) is orthogonal to each basis vector.

f.
Orthogonal Complement- Given a subspace V of Rn, the orthogonal complement, V⊥, consists of all vectors in Rn that are orthogonal to all vectors in V.

i.
This
is equivalent to finding the kernel of the orthogonal projection onto V.
ii.
Properties:
1. V⊥ is a subspace of Rn
2. V ∩ V⊥ = {0}
3. dim(V) + dim(V⊥) = n
4. (V⊥)⊥ = V
iii.
Given a span of V, you can find V⊥ by finding the kernel of the matrix whose rows are the spanning vectors (i.e. the vectors perpendicular to every spanning vector)!
g.
Angle
between Two Vectors
i. cos θ = (v · w) / (||v|| ||w||)
1.
Cauchy-Schwarz
Inequality ensures that this value is defined.
h.
Cauchy-Schwarz
Inequality
i. |v · w| ≤ ||v|| ||w||
i.
Gram-Schmidt Process- an algorithm for producing an
orthonormal basis from any basis.
i.
For
the first vector, simply divide it by its length to create a unit vector.
ii. To find the next basis vector, first find the component of v2 perpendicular to the vectors already processed: v2_perp = v2 - (u1 · v2)u1.
1. This becomes vi_perp = vi - (u1 · vi)u1 - … - (u(i-1) · vi)u(i-1) in the general case.
iii. Then, ui = vi_perp / ||vi_perp||.
iv.
This
procedure is simply repeated for every vector in the original basis.
1. Keep in mind that, to simplify the calculation, any vi_perp can be multiplied by a scalar (it is turned into a unit vector anyway, so this won’t affect the end result, just the difficulty of the calculation).
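A minimal Gram-Schmidt sketch in Python (numpy assumed; the input vectors are an arbitrary linearly independent pair):

    import numpy as np

    def gram_schmidt(vectors):
        """Turn a list of linearly independent vectors into an orthonormal basis."""
        basis = []
        for v in vectors:
            # subtract the projection onto each unit vector found so far
            for u in basis:
                v = v - np.dot(u, v) * u
            basis.append(v / np.linalg.norm(v))
        return basis

    vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
    u1, u2 = gram_schmidt(vs)
    print(np.dot(u1, u2))       # ~0: orthogonal
    print(np.linalg.norm(u2))   # 1.0: unit length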

XV. Orthogonal Transformations and
Orthogonal Matrices
a.
Orthogonal Transformation- A transformation, T, from Rn
to Rn that preserves the length of vectors.
i.
Orthogonal
transformations preserve orthogonality and angles in general (Pythagorean
theorem proof).
ii.
Useful
Relations
1. If T: Rn → Rn is orthogonal and v · w = 0, then T(v) · T(w) = 0 (orthogonality is preserved).
2. More generally, T(v) · T(w) = v · w for all v and w (the dot product is preserved).

b.
Orthogonal Matrices- The transformation matrix of an
orthogonal transformation.
i.
Properties:
1. The product, AB, of two orthogonal n x n matrices A and B is orthogonal
2. The inverse A-1 of an
orthogonal n x n matrix A is
orthogonal
3. A matrix is orthogonal ⇔ ATA = In or, equivalently, A-1 = AT.
4. The columns of an orthogonal matrix form
an orthonormal basis of Rn.
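A quick check of these properties on a rotation matrix (Python with numpy assumed; the angle is arbitrary):

    import numpy as np

    theta = np.pi / 6
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # a rotation, hence orthogonal

    print(np.allclose(A.T @ A, np.eye(2)))            # True: A^T A = I
    print(np.allclose(np.linalg.inv(A), A.T))         # True: A^-1 = A^T
    x = np.array([3.0, 4.0])
    print(np.linalg.norm(A @ x), np.linalg.norm(x))   # 5.0 5.0: lengths preserved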
c. The Transpose of a Matrix
i.
The
matrix created by taking the columns of the original matrix and making them the
rows of a new matrix (and therefore the rows become columns).
1. More officially, the transpose AT of an m x n matrix A is the n x m matrix whose ijth entry is the jith entry of A.
2.
Symmetric- A square matrix A such that AT = A
3.
Skew Symmetric- A square matrix A such that AT = -A
4.
(SA)T=ATST
ii. If v and w are two (column) vectors in Rn, then v · w = vT w.


1. This WILL come in handy
d. The Matrix of an Orthogonal Projection
i. Considering a subspace V of Rn with orthonormal basis u1, …, um, the matrix of orthogonal projection onto V is QQT, where Q = [ u1 … um ] is the n x m matrix whose columns are the basis vectors.
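A sketch of the projection matrix QQT (Python with numpy assumed; the orthonormal basis below is an arbitrary illustrative choice):

    import numpy as np

    # Orthonormal basis of a plane V in R^3 (assumed for illustration)
    u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
    u2 = np.array([0.0, 0.0, 1.0])
    Q = np.column_stack([u1, u2])     # n x m matrix whose columns are u1, u2
    P = Q @ Q.T                       # matrix of orthogonal projection onto V

    x = np.array([2.0, 3.0, 5.0])
    print(P @ x)                      # [2.5, 2.5, 5.0]: the part of x lying in V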


XVI.
Inner
Product Spaces
a.
Inner Product- An inner product on a linear space V is a rule that assigns a real scalar (denoted by <f, g>) to any pair f, g of elements in V such that the following properties hold for all f, g, h in V, and all c in R:
i.
<f,
g> = <g, f> (symmetry)
ii.
<f+h,
g> = <f, g> + <h, g>
iii.
<cf,
g> = c<f, g>
iv.
<f,
f> > 0, for all nonzero f in V (positive definiteness)
1.
This
is the tricky one! It will often
require, for matrices, that the kernel = {0} or the matrix is invertible.
b.
Inner Product Space- A linear space endowed with an inner
product.
c.
Norm- The magnitude of an element f of an inner product space: ||f|| = sqrt(<f, f>).
d.
Orthogonality- Two elements, f and g, of an inner
product space are called orthogonal (or perpendicular) if <f, g> = 0.
e.
Distance- if f and g are two elements of an inner
product space, dist(f, g) = ||f - g||.
f.
Orthogonal
Projection
i. Analogous to the Rn case; if g1, …, gm is an orthonormal basis of a subspace W of an inner product space V, then proj_W(f) = <g1, f>g1 + … + <gm, f>gm for all f in V.
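A sketch of one common inner product on polynomials, <f, g> = the integral of f·g over [-1, 1] (Python with sympy assumed; this particular inner product is an illustrative choice, not one fixed by the notes):

    from sympy import symbols, integrate, sqrt

    x = symbols('x')

    # Assumed inner product on polynomials: <f, g> = integral of f*g from -1 to 1
    def inner(f, g):
        return integrate(f * g, (x, -1, 1))

    f, g = x, x**2
    print(inner(f, g))          # 0: x and x^2 are orthogonal for this product
    print(sqrt(inner(f, f)))    # the norm of f, sqrt(2/3)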

XVII. Determinants
a.
Determinant- In general, a formula for calculating a
value which summarizes certain properties of a matrix; in particular, the matrix is invertible iff its determinant is ≠ 0.
i.
Properties:
1.
det(AT)
= det(A)
2.
det
(AB) = det(A)det(B)
b.
The
Determinant of a 2 x 2 matrix
i. det [ a b ; c d ] = ad - bc

c.
The
Determinant of a 3 x 3 matrix and
Beyond
i. Geometrically, our matrix will be invertible ⇔ its column vectors are linearly independent (and therefore span R3). This only occurs if the parallelepiped spanned by the column vectors has nonzero volume, i.e. (v1 x v2) · v3 ≠ 0.

ii. We can find an equivalent value with the following formulas: det(A) = a11 det(A11) - a21 det(A21) + a31 det(A31) (expanding down the first column), or the analogous expansion along any other column or row with alternating signs.


1.
These
formulas are visibly represented by first picking a column (or row), then
multiplying every element in that column (or row) by the determinant of the
matrix generated by crossing out the row and column containing that element. The sign in front of each product alternates
(and starts with positive).
2.
In
this way, if we select the 1st column, our determinant will be: det
(A) = a11det(A11) – a21det(A21) + a31det(A31)
a.
Aij
represents the 2 x 2 matrix generated
by crossing out the ith row and jth column of our 3 x 3 matrix.
3.
This
definition is recursive! It allows us to find the determinant of a square
matrix of any size by slowly reducing it until we reach the 2 x 2 case.
4.
Pick
your starting row/column with care! 0s are your friends!
d.
The
Determinant and Elementary Row Operations
i.
If
the preceding seemed a little daunting for large matrices, there exists a
simple relationship between the elementary row operations and the determinant that
will allow us to greatly increase the number of zeroes in any given matrix.
ii.
Gauss-Jordan
Elimination and Ties (ERO)
1.
Swap
the ith and jth rows
a.
The
new determinant will be equal to –det(A) where A was the old matrix. Therefore, multiply the final determinant by
-1.
2.
Multiply
a row by a Scalar
a.
The
new determinant will be equal to kdet(A) where A was the old matrix and k the
scalar. Therefore, multiply by 1/k.
3.
Replace
with Self and Scalar of Another Row
a.
No
Change!
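A quick numerical illustration of how the elementary row operations change the determinant (Python with numpy assumed; the matrix is arbitrary):

    import numpy as np

    A = np.array([[2.0, 1.0, 3.0],
                  [0.0, 4.0, 1.0],
                  [1.0, 0.0, 2.0]])
    print(np.linalg.det(A))            # about 5

    B = A[[1, 0, 2], :]                # swap rows 0 and 1    -> det flips sign
    C = A.copy(); C[0] *= 5            # scale row 0 by 5      -> det multiplied by 5
    D = A.copy(); D[2] += 3 * A[0]     # add 3*(row 0) to row 2 -> det unchanged
    print(np.linalg.det(B), np.linalg.det(C), np.linalg.det(D))   # -5, 25, 5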
e.
The
Determinant of a Linear Transformation
i.
For a linear transformation T from V to V, where V is a finite-dimensional linear space: if B is a basis of V and B is the B-matrix of T, then we define det (T) = det (B).
1.
The
det (T) will remain unchanged no matter which basis we choose!
XVIII. Eigenvalues and Eigenvectors
a.
Eigenvector- for an n x n matrix A, a nonzero vector v in Rn such that Av is a scalar multiple of v, or Av = λv.
i. The scalar λ may be equal to 0

b.
Eigenvalue- the scalar λ for a particular eigenvector and matrix

c.
Exponentiation
of A
i. If v is an eigenvector of A with eigenvalue λ, then v is also an eigenvector of A raised to any power: A^2 v = λ^2 v, A^3 v = λ^3 v, …, A^t v = λ^t v.





d.
Finding
the Eigenvalues of a Matrix
i. Characteristic Equation- The relation stating that (A - λIn)v = 0 has a nonzero solution v is true ⇔ λ is an eigenvalue for the matrix A; also known as the secular equation.
1. This equation is seldom actually written; most people skip straight to: det(A - λIn) = 0.
2.
Characteristic Polynomial- the polynomial generated by solving for
the determinant in the characteristic expression (i.e. finding the determinant
of the above), represented by fA(λ).

a.
Special Case: The 2 x
2 Matrix
i. For a 2 x 2 matrix, the characteristic polynomial is given by: fA(λ) = λ^2 - tr(A)λ + det(A)
b. If the characteristic polynomial found is incredibly complex, try testing simple candidate roots (common in intro texts)


ii.
Trace- the sum of the diagonal entries of a
square matrix, denoted tr(A)
iii.
Algebraic Multiplicity of an Eigenvalue-
An eigenvalue λ0 has algebraic multiplicity k if λ0 is a root of multiplicity k of the characteristic polynomial, or rather fA(λ) = (λ0 - λ)^k g(λ) for some polynomial g with g(λ0) ≠ 0.


iv.
Number
of Eigenvalues
1.
An
n x n matrix has at most n real eigenvalues, even when they are counted with their algebraic multiplicities.
a.
If
n is odd, there exists at least one real eigenvalue
b.
If
n is even, there need not exist any real eigenvalues.
v.
Eigenvalues,
the Determinant, and the Trace
1. If an n x n matrix A has eigenvalues λ1, …, λn listed with their algebraic multiplicities, then
a. det(A) = λ1 λ2 … λn
b. tr(A) = λ1 + λ2 + … + λn
vi.
Special Case: Triangular Matrix
1. The eigenvalues of a Triangular Matrix
are its diagonal entries.
vii.
Special Case: Eigenvalues of Similar
Matrices
1.
If
matrix A is similar to matrix B (i.e. there exists an invertible S such that B = S-1AS):
a.
A
& B have the same characteristic polynomial
b.
rank(A)
= rank (B), nullity (A) = nullity (B)
c.
A
and B have the same eigenvalues with the same algebraic and geometric
multiplicities.
i.
Eigenvectors
may be different!
d.
Matrices
A and B have the same determinant and trace.
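A quick check of these facts, plus the determinant/trace relations from v (Python with numpy assumed; A and S are arbitrary):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    S = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    B = np.linalg.inv(S) @ A @ S

    print(np.sort(np.linalg.eigvals(A)))    # [2. 5.]
    print(np.sort(np.linalg.eigvals(B)))    # [2. 5.]  -- similar matrices, same eigenvalues
    print(np.linalg.det(A), np.trace(A))    # about 10 and 7: product and sum of the eigenvalues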
e.
Finding
the Eigenvectors of a Matrix
i. Eigenspace- For a particular eigenvalue λ of a matrix A, the kernel of the matrix A - λIn, or Eλ = ker(A - λIn).
1. The eigenvectors with eigenvalue λ are the nonzero vectors in the eigenspace Eλ.


ii. Geometric Multiplicity- the dimension of the eigenspace (the nullity of the matrix A - λIn).
1. If λ is an eigenvalue of a square matrix A, then the geometric multiplicity of λ must be less than or equal to the algebraic multiplicity of λ.



iii.
Eigenbasis-a basis of Rn consisting of
eigenvectors of A for a given n x n matrix
A.
1.
If
an n x n matrix A has n distinct eigenvalues, then there exists
an eigenbasis for A.
iv.
Eigenbasis
and Geometric Multiplicities
1.
By
finding the basis of every eigenspace of a given n x n matrix A and concatenating them, we can obtain a list of linearly
independent eigenvectors (the largest number possible); if the number of
elements in this list is equal to n (i.e. the geometric multiplicities sum to
n), then we can construct an eigenbasis; otherwise, there doesn’t exist an
eigenbasis.
f.
Diagonalization
i.
The
process of constructing the matrix of a linear transformation with respect to
the eigenbasis of the original matrix of transformation; this always produces a
diagonal matrix with the diagonal entries being the transformation’s
eigenvalues (recall that eigenvalues are independent of basis; see eigenvalues
of similar matrices).
ii. D = S-1AS, where the columns of S are the eigenbasis vectors and D is the resulting diagonal matrix of eigenvalues.

iii.
Diagonalizable Matrix- An n x n matrix A that is similar to
some diagonal matrix D.
1.
A
matrix A is diagonalizable ⇔ there exists an
eigenbasis for A
2.
If
an n x n matrix A has n distinct
eigenvalues, then A is diagonalizable.
iv.
Powers
of a Diagonalizable Matrix
1. To compute the powers A^t of a diagonalizable matrix (where t is a positive integer), diagonalize A, then raise the diagonal matrix to the t power: A^t = S D^t S-1.
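A sketch of diagonalization and matrix powers (Python with sympy assumed; the matrix is an arbitrary example with eigenvalues 2 and 5):

    from sympy import Matrix

    A = Matrix([[4, 1],
                [2, 3]])
    S, D = A.diagonalize()        # A = S D S^-1; D is diagonal with the eigenvalues 2 and 5
    print(D)
    t = 4
    print(S * D**t * S.inv())     # equals A**4
    print(A**t)                   # same matrix, computed directly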

g. Complex Eigenvalues
i. Polar Form
1. z = r(cos θ + i sin θ), where r = |z|
ii. De Moivre’s Formula
1. z^n = r^n (cos nθ + i sin nθ)

iii.
Fundamental
Theorem of Algebra
1. Any polynomial of degree n has, allowing
complex numbers and algebraic multiplicity, exactly n (not necessarily
distinct) roots.
iv.
Finding
Complex Eigenvectors
1. After finding the complex eigenvalues,
subtract them along the main diagonal like usual. Afterwards, however, simply take the top row of the resulting matrix, reverse it, multiply one of the two entries by -1, and voila, an
eigenvector! This may only work for 2 x 2
matrices.
h.
Discrete
Dynamical Systems
i. Many relations can be represented as a ‘dynamical system’, where the state of the system is given, at any time, by the equation x(t + 1) = A x(t), or equivalently (by exponentiation) x(t) = A^t x(0), where x(0) represents the initial state and x(t) represents the state at any particular time t.




ii. This can be further simplified by first finding a basis v1, …, vn of Rn such that v1, …, vn are eigenvectors of the matrix A. Then, writing x(0) = c1v1 + … + cnvn, the exponentiated matrix A^t can be distributed and the eigenvector properties of the vi utilized to generate x(t) = c1 λ1^t v1 + … + cn λn^t vn.





iii. The long term properties of a system can be obtained by taking the limit as t → ∞ and noticing which terms λi^t vi approach zero and which approach infinity (in the long run the former will have no effect while the latter will attempt to pull the system into an asymptotic approach to themselves (albeit a scaled version of themselves)).
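A small simulation of a discrete dynamical system (Python with numpy assumed; the transition matrix is a hypothetical example with eigenvalues 1 and 0.7):

    import numpy as np

    A = np.array([[0.9, 0.2],
                  [0.1, 0.8]])        # hypothetical transition matrix
    x0 = np.array([1.0, 0.0])

    x = x0
    for t in range(50):               # iterate x(t+1) = A x(t)
        x = A @ x
    print(x)                          # settles along the eigenvector with the largest |eigenvalue|

    vals, vecs = np.linalg.eig(A)
    print(vals)                       # eigenvalues 1.0 and 0.7: the 0.7-term dies out in the long run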


iv.
Stability
1. Dynamical Systems can be divided into two different
categories based on their long-term behavior: stable and unstable.
a. Stable Equilibrium
i. In the long run, a stable dynamical system asymptotically approaches the zero state (the zero vector) or the original state.
ii. This occurs if the absolute values of all the eigenvalues of A are less than 1 (the system approaches 0) or equal to 1 (it approaches the original state or alternates).


b. Unstable
i. Some eigenvalue of A has absolute value > 1

2. Polar
a. This distinction can be transferred into polar coordinates by writing each (possibly complex) eigenvalue as λ = r(cos θ + i sin θ), so that |λ^t| = r^t.
b. Therefore: r < 1 means that term decays to zero.
c. r = 1 means that term stays bounded (it rotates or alternates).
d. r > 1 means that term grows without bound (unstable).

v. Complex Dynamical Systems