https://sbarone7.math.gatech.edu/Chapters_1_and_2.pdf
- Replacement/Addition
- Interchange
- Scaling

Row operations can be used to solve systems of linear equations
![[augmented-matrix.png]] A system of equations written as an augmented matrix
Row operation example (these are augmented) $$ \begin{bmatrix} 1 & -2 & 1 & 0 \\ 0 & 2 & -8 & 8 \\ 5 & 0 & -5 & 10 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & -2 & 1 & 0 \\ 0 & 2 & -8 & 8 \\ 0 & 10 & -10 & 10 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & -2 & 1 & 0 \\ 0 & 2 & -8 & 8 \\ 0 & 0 & 30 & -30 \end{bmatrix} $$ $$ \begin{bmatrix} 1 & -2 & 1 & 0 \\ 0 & 1 & -4 & 4 \\ 0 & 0 & 1 & -1 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & -2 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -1 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -1 \end{bmatrix} $$
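The elimination above can be automated; a minimal pure-Python sketch using exact `Fraction` arithmetic. It is run on the system $x_1 - 2x_2 + x_3 = 0$, $2x_2 - 8x_3 = 8$, $5x_1 - 5x_3 = 10$ (taking the third right-hand side as 10, which makes the displayed reduction steps consistent), whose solution is $(1, 0, -1)$:

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix to reduced row echelon form using the three
    row operations: replacement, interchange, and scaling."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for c in range(cols):
        # Interchange: find a row with a nonzero entry in this column.
        r = next((r for r in range(pivot_row, rows) if M[r][c] != 0), None)
        if r is None:
            continue
        M[pivot_row], M[r] = M[r], M[pivot_row]
        # Scaling: make the leading entry 1.
        M[pivot_row] = [x / M[pivot_row][c] for x in M[pivot_row]]
        # Replacement: zero out every other entry in the pivot column.
        for r in range(rows):
            if r != pivot_row and M[r][c] != 0:
                factor = M[r][c]
                M[r] = [a - factor * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

# Augmented matrix for the example system.
A = [[1, -2, 1, 0],
     [0, 2, -8, 8],
     [5, 0, -5, 10]]
R = rref(A)
print(R)  # last column gives the solution (1, 0, -1)
```

The `Fraction` type avoids floating-point drift, so the output matches hand reduction exactly.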
A linear system is considered consistent if it has at least one solution.
A rectangular matrix is in echelon form if
- If there are any, all zero rows are at the bottom
- The first non-zero entry (leading entry) of a row is to the right of any leading entries in the row above it
- All elements below a leading entry are zero ![[Pasted image 20240821095448.png|450]]

For reduced row echelon form:
- All leading entries, if any, are equal to 1.
- Leading entries are the only nonzero entry in their respective column.
A pivot position in a matrix A is a location in A that corresponds to a leading 1 in the RREF of A. A pivot column is a column that contains a pivot position.
Free variables are the variables of the non-pivot columns. Any choice of the free variables leads to a solution of the system ^[If you have any free variables you do not have a unique solution] ![[Pasted image 20240821100548.png]]
A linear system is consistent iff the last column of the augmented matrix does not have a pivot. This is the same as saying that the RREF of the augmented matrix does not have a row of the form
$[0\ 0\ 0\ 0\ ...\ |\ 1]$ Moreover, if a linear system is consistent, then it has
1. a unique solution iff there are no free variables, or
2. infinitely many solutions, parameterized by the free variables
Let $\vec{v}_1, \dots, \vec{v}_k$ be vectors.
- The set of all linear combinations of the $\vec{v}$'s is called the span of the $\vec{v}$'s
	- e.g. $$ SPAN\left(\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}\right) = \mathbb{R}^2 $$
- Any 2 vectors in $\mathbb{R}^2$ that are not scalar multiples of each other span $\mathbb{R}^2$

Q: Is $\vec{b} \in SPAN(\vec{a}_1, \vec{a}_2)$? $$ \vec{b} = \begin{bmatrix} 7 \\ 4 \\ -3 \end{bmatrix}, \vec{a}_1 = \begin{bmatrix} 1 \\ -2 \\ -5 \end{bmatrix}, \vec{a}_2 = \begin{bmatrix} 2 \\ 5 \\ 6 \end{bmatrix} $$ The matrix below writes this as a system of equations where x and y scale columns 0 and 1, and column 2 holds the right-hand-side coefficients. Reducing this matrix to RREF systematically reveals the values of x and y.
$$ \begin{bmatrix} 1 & 2 & 7 \\ -2 & 5 & 4 \\ -5 & 6 & -3 \end{bmatrix}
\begin{bmatrix} 1 & 2 & 7 \\ 0 & 9 & 18 \\ -5 & 6 & -3 \end{bmatrix}
\begin{bmatrix} 1 & 2 & 7 \\ 0 & 1 & 2 \\ -5 & 6 & -3 \end{bmatrix}
\begin{bmatrix} 1 & 2 & 7 \\ 0 & 1 & 2 \\ 0 & 16 & 32 \end{bmatrix}
\begin{bmatrix} 1 & 2 & 7 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix} $$ Yes.
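A quick check of the weights x = 3, y = 2 read off the RREF. Note this takes the third entry of $\vec{b}$ as -3 (the arithmetic only closes with the negative sign):

```python
# Verify that b = 3*a1 + 2*a2, i.e. b lies in SPAN(a1, a2).
a1 = [1, -2, -5]
a2 = [2, 5, 6]
b  = [7, 4, -3]

combo = [3 * u + 2 * v for u, v in zip(a1, a2)]
print(combo == b)  # True
```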
![[Pasted image 20240826095038.png]] $$ \begin{bmatrix} 2 & 3 & 7 \\ 1 & -1 & 5 \end{bmatrix}
\begin{bmatrix} 5 & 0 & 22 \\ 1 & -1 & 5 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 22/5 \\ 0 & 1 & -3/5 \end{bmatrix} $$
Homogeneous:
You can parameterize the free variables and then write the solution as a vector sum.
The solution is
Because right side of augmented is nonzero:
$$
\begin{bmatrix}
1 & 3 & 1 & 9 \\
2 & -1 & -5 & 11 \\
0 & 1 & -2 & 6
\end{bmatrix} \xrightarrow{\text{RREF}} \begin{bmatrix}
1 & 0 & -2 & 6 \\
0 & 1 & 1 & 1 \\
0 & 0 & 0 & 0
\end{bmatrix}
$$
Let $x_3 = t$
$$
\vec{x} = \begin{bmatrix}
x_1 \\
x_2 \\
x_3
\end{bmatrix}
= \begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix} t
+ \begin{bmatrix} 6 \\ 1 \\ 0 \end{bmatrix} $$
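A quick check that the parametric form satisfies the RREF rows ($x_1 - 2x_3 = 6$ and $x_2 + x_3 = 1$) for any value of the free variable, taking the particular solution with a plus sign:

```python
# General solution: x = (6, 1, 0) + t*(2, -1, 1), with x3 = t free.
for t in [-2, 0, 1, 5]:
    x1, x2, x3 = 6 + 2 * t, 1 - t, t
    # Each RREF row must hold for every choice of the free variable.
    assert x1 - 2 * x3 == 6
    assert x2 + x3 == 1
print("ok")
```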
Given
Note: (This might just be wrong)
- Any of the vectors in the set is a linear combination of the others
- There is a free variable, so there are infinitely many solutions to the homogeneous equation
- If the columns of A are in $\mathbb{R}^n$ and the number of columns of A exceeds $n$, there must be free variables, which indicates linear dependence
- One or more of the columns of A is $\vec{0}$
If two vectors are linearly independent, they are not collinear. If 3, not coplanar. If 4, not "cospatial" (they do not lie in a common 3-dimensional subspace).
- Domain of T is $\mathbb{R}^n$ (where we start)
- Codomain or target of T is $\mathbb{R}^m$
- The vector $T(\vec{x})$ is the image of $\vec{x}$ under T
- The set of all possible images is called the range
- image $\in$ range $\subseteq$ codomain
- When the domain and codomain are both $\mathbb{R}$, you can represent T as a Cartesian graph in $\mathbb{R}^2$, as in a mapping $\mathbb{R} \rightarrow \mathbb{R}$
- If the y-axis is the codomain and the x-axis is the domain, the range is the set of all images f(x), which may be a proper subset of the codomain
$$
A = \begin{bmatrix}
1 & 1 \\
0 & 1 \\
1 & 1
\end{bmatrix}, \vec{u} = \begin{bmatrix}
3 \\
4
\end{bmatrix}, \vec{b} = \begin{bmatrix}
7 \\
5 \\
7
\end{bmatrix}
$$
$$
T: \mathbb{R}^2 \rightarrow \mathbb{R}^3
$$
Compute
Range of
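Using the A, $\vec{u}$, $\vec{b}$ above, a quick plain-Python computation of $T(\vec{u}) = A\vec{u}$ and a check that $\vec{b}$ is in the range:

```python
A = [[1, 1],
     [0, 1],
     [1, 1]]
u = [3, 4]

# T(u) = A u: each entry is the dot product of a row of A with u.
Tu = [sum(a * x for a, x in zip(row, u)) for row in A]
print(Tu)  # [7, 4, 7]

# Is b = (7, 5, 7) in the range? Solve A x = b: row 2 gives x2 = 5,
# then row 1 gives x1 = 7 - x2 = 2; row 3 repeats row 1, so x = (2, 5).
x = [2, 5]
Ax = [sum(a * v for a, v in zip(row, x)) for row in A]
print(Ax == [7, 5, 7])  # True
```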
A function $T$ is linear if:
- $T(\vec{u} + \vec{v}) = T(\vec{u}) + T(\vec{v})$
- $T(c\vec{v}) = cT(\vec{v})$
- "Principle of Superposition"
- If we know $T(\vec{e}_1)$, ..., $T(\vec{e}_n)$, then we know every $T(\vec{v})$
- Prove it is linear by proving the addition and multiplication rules
Standard vectors in
Theorem
Let $T: \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a linear transformation. Then there is a unique matrix $A$ such that $T(\vec{x}) = A\vec{x}$ for all $\vec{x} \in \mathbb{R}^n$. In fact, $A$ is $m \times n$ and its $j^{th}$ column is the vector $T(\vec{e_j})$: $A = [T(\vec{e_1}), T(\vec{e_2}), \dots, T(\vec{e_n})]$
Find the standard matrix A for $T(\vec{x}) = 3\vec{x}$ for $\vec{x}$ in
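A minimal sketch of building this standard matrix column-by-column, assuming the domain is $\mathbb{R}^2$ (the line above is truncated, so the dimension is my assumption):

```python
# Build the standard matrix column-by-column: A = [T(e1) T(e2)].
def T(v):
    return [3 * x for x in v]  # the dilation T(x) = 3x

e = [[1, 0], [0, 1]]  # standard basis of R^2
columns = [T(ej) for ej in e]
# Transpose the list of columns into rows of A.
A = [list(row) for row in zip(*columns)]
print(A)  # [[3, 0], [0, 3]]
```

The same recipe works for any linear T: apply it to each standard basis vector and use the results as columns.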
TLDR:
1-1
A linear transformation $T$ is one-to-one if:
- Each point in the codomain is mapped to by at most one point in the domain
- 1-1 iff the standard matrix has a pivot in every column
- e.g. $F(x) = x^2$ is not 1-1, because multiple x values map to a single y value

Onto: the matrix A has columns which span the codomain $\mathbb{R}^m$
The unique solution to
For a square matrix: if T is 1-1 then it is onto, and if onto then 1-1 (the two are equivalent).
The zero matrix is a matrix full of zeroes. The identity matrix is a square matrix full of zeroes except for the diagonal, which is all ones. Multiplying by the identity matrix always yields the same matrix.
Sums: same dimensions
Matrix multiplication:
Definition:
A is invertible
#star
Linearly dependent $\iff$ Singular
- Mnemonic: After the trial, Johnny Depp was Single

Linearly independent $\iff$ Invertible
- This is just the contrapositive of the above
also
- $A \in \mathbb{R}^{n \times n}$ is invertible $\iff \forall \vec{b}\ \exists ! \vec{x}\ (A \vec{x} = \vec{b})$
	- Basically means that A is 1-1 and onto, meaning that there is exactly one domain entry for every codomain entry
	- (1-1 is at most 1, onto is at least 1, together they make exactly 1)
- $det(A) \neq 0 \iff$ invertible
An elementary matrix, E, is one that differs from the identity matrix by a single elementary row operation.
Row reduce
Therefore, if
Let A be an n x n matrix. These statements are all equivalent
a) A is invertible.
b) A is row equivalent to $I_n$.
c) A has n pivotal columns. (All columns are pivotal.)
d) $A\vec{x} = \vec{0}$ has only the trivial solution.
e) The columns of A are linearly independent.
f) The linear transformation $\vec{x} \mapsto A\vec{x}$ is one-to-one.
g) The equation $A\vec{x} = \vec{b}$ has a solution for all $\vec{b}$ in $\mathbb{R}^n$.
h) The columns of A span $\mathbb{R}^n$.
i) The linear transformation $\vec{x} \mapsto A\vec{x}$ is onto.
j) There is an n x n matrix C so that $CA = I_n$. (A has a left inverse.)
k) There is an n x n matrix D so that $AD = I_n$. (A has a right inverse.)
l) $A^T$ is invertible.
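A tiny numeric sanity check of a few of these equivalences on a sample 2 x 2 matrix (the matrix [[2, 1], [1, 1]] is illustrative, not from the notes):

```python
# For A = [[2, 1], [1, 1]]: det = 1 != 0, so every statement holds.
a, b, c, d = 2, 1, 1, 1
det = a * d - b * c
assert det != 0

# (k) A right inverse via the 2x2 formula D = (1/det) [[d, -b], [-c, a]].
D = [[d / det, -b / det], [-c / det, a / det]]
A = [[a, b], [c, d]]
AD = [[sum(A[i][k] * D[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(AD)  # the 2x2 identity
```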
- 0 is not an eigenvalue of A
- $\det A \neq 0$
Noninvertible
A partitioned matrix is a matrix that you write as a matrix of matrices When doing multiplication with a block matrix, make sure the "receiving" matrix's entries go first, to respect the lack of commutativity in matrix multiplication. See HW 2.4 if this doesn't make sense.
Let A be m x n and B be n x p matrix. Then, the (i, j) entry of AB is row_i A · col_j B. This is the Row Column Method for matrix multiplication
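The Row Column Method in code, as a minimal library-free sketch:

```python
def matmul(A, B):
    """(i, j) entry of AB is (row i of A) . (column j of B)."""
    n, p = len(A), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(p)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```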
![[Pasted image 20240917100516.png]]
Upper triangular: nonzero entries only on and above the diagonal. Lower triangular: nonzero entries only on and below the diagonal.
If A is an m x n matrix that can be row reduced to echelon form without row exchanges, then A = LU . L is a lower triangular m x m matrix with 1’s on the diagonal, U is an echelon form of A.
Suppose A can be row reduced to echelon form U without interchanging rows. Then,
- Reduce A to an echelon form U by a sequence of row replacement operations, if possible.
- Place entries in L such that the same sequence of row operations reduces L to I.
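The two steps above can be sketched Doolittle-style: record each replacement multiplier in L while reducing to U (assuming no row exchanges are needed; the example matrix is illustrative):

```python
def lu(A):
    """LU factorization: record each replacement multiplier in L while
    reducing A to echelon form U (no row exchanges assumed)."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            m = U[i][j] / U[j][j]      # multiplier for R_i -> R_i - m R_j
            L[i][j] = m
            U[i] = [u - m * v for u, v in zip(U[i], U[j])]
    return L, U

A = [[2, 1, 1], [4, 3, 3], [8, 7, 9]]
L, U = lu(A)
# Check A = L U.
prod = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
print(prod)  # recovers A
```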
A subset of $\mathbb{R}^n$ is a subspace if it is closed under scalar multiplication and addition. That is, a subset H of $\mathbb{R}^n$ is a subspace iff, for all $c \in \mathbb{R}$ and $\vec{u}, \vec{v} \in H$:
- $c \vec{u} \in H$
- $\vec{u} + \vec{v} \in H$
- $\vec{0} \in H$
span of columns of A same as range of A
span of set of
A basis for a subspace is a set of linearly independent vectors that span the subspace.
There are many different possible choice of basis for a subspace. Our choice can give us dramatically different properties.
The standard basis vectors are i, j, k, but you can use other vectors to span the same space if you want.
- What is a determinant? Given a linear transformation T, let us focus on the magnitude of the cross product of the basis vectors. The determinant would be the scalar factor between the original and transformed areas? (Yes)
- If you are calculating some integral over a transformed space, is the jacobian just the determinant of the transformation, or is it related---possibly scaling the result to make sense given standard basis vectors? (Yes)
Dimension/Cardinality of a non-zero subspace H, dim H, is the number of vectors in the basis of H. We define dim{0} = 0.
Theorem
Any two choices of basis $\mathcal{B}_1$, $\mathcal{B}_2$ of a non-zero subspace H have the same dimension
Ex Problems
- dim $\mathbb{R}^n$ = n
- H = $\{(x_1, ..., x_n) : x_1 + ... + x_n = 0\}$ has dimension n - 1
	- use the idea of # 3
	- n variables; solve for $x_1$ in terms of everything else -> one pivot, everything else free vars. Therefore n - 1 free vars
- dim(Nul A) is the number of free vars
- dim(Col A) is the number of pivots
The rank of a matrix A is the dimension of its column space (= number of pivots).
dim(Nul A) = nullity (= number of free vars)
- Let $\mathcal{B} = \{\vec{b_1}, ..., \vec{b_n}\}$ be a basis for the subspace $H$ $$ \displaylines{ \vec{x} \in H \implies \text{coords of } \vec{x} \text{ relative to } \mathcal{B} \text{ are } c_1, \dots, c_n, \text{ where } \vec{x} = c_1 \vec{b_1} + ... + c_n \vec{b_n} \\ \text{coord vector of } \vec{x} \text{ relative to } \mathcal{B}: \quad [\vec{x}]_{\mathcal{B}} = \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} } $$ ![[Pasted image 20240923104520.png|400]]
If a matrix A has n columns, then $Rank(A) + Nullity(A) = n$, i.e. $dim(Col(A)) + dim(Nul(A)) = n$
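A small check of rank-nullity on an illustrative matrix (rank and nullity are read off by inspection here, since the rows are multiples of each other):

```python
# A has n = 3 columns; its second row is twice the first, so
# rank = 1 (one pivot) and nullity = 2 (two free variables).
A = [[1, 2, 3],
     [2, 4, 6]]
rank, nullity, n = 1, 2, 3
assert rank + nullity == n
# Two independent null-space vectors (the free-variable solutions):
for x in ([-2, 1, 0], [-3, 0, 1]):
    assert all(sum(r * v for r, v in zip(row, x)) == 0 for row in A)
print("rank-nullity holds")
```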
Any two bases for a subspace have the same dimension
Let A be a n x n matrix. These conditions are equivalent.
- A is invertible
- The columns of A are a basis for $\mathbb{R}^n$
- Col A = $\mathbb{R}^n$
- rank A = dim Col A = n
- Nul A = {0}
Imagine the area of parallelogram created by the basis of a standard vector space, like
You can also get the area of S by using the determinant of the matrix created by the vectors that span S, i.e.
- det(A) = 0 $\iff$ A is singular
- det(A) $\neq$ 0 $\iff$ A is invertible
- det(Triangular) = product of diagonal entries
- det A = det $A^T$
- det(AB) = det A · det B
- $det(A^{-1}) = \frac{1}{det(A)}$
- $det(kA) = k^n det(A)$
If A is square:
- adding a multiple of one row of A to another to get B: $\det A = \det B$
- swapping two rows of A to get B: $-\det A = \det B$
- scaling one row of A by k to get B: $k \cdot \det(A) = \det(B)$

Exactly the same holds for columns
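These row-operation effects can be spot-checked with 2 x 2 determinants:

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[3, 1], [4, 2]]           # det = 2
assert det2(A) == 2
# Replacement (add 5*R1 to R2): det unchanged.
assert det2([[3, 1], [4 + 5 * 3, 2 + 5 * 1]]) == 2
# Interchange (swap rows): det changes sign.
assert det2([[4, 2], [3, 1]]) == -2
# Scaling one row by k = 7: det scales by k.
assert det2([[3 * 7, 1 * 7], [4, 2]]) == 7 * 2
print("ok")
```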
What the diagonal 3x3 is shorthand for
The cofactor of an n x n matrix A is $C_{ij} = (-1)^{i+j} M_{ij}$, where $M_{ij}$ is the minor. The sign pattern $(-1)^{i+j}$ is
$$
\begin{bmatrix}
+ & - & + & \dots \\
- & + & - & \dots \\
+ & - & + & \dots \\
\dots & \dots & \dots & \dots
\end{bmatrix}
$$
det A = $a_{1j}C_{1j} + ... + a_{nj} C_{nj}$ (cofactor expansion down column j). For the +/- signs, use the pattern of the current (sub)matrix in question, not the original.
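Cofactor expansion as a recursive sketch, expanding along the first row (the example matrix is illustrative):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)  # sign (-1)^(1+j)
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det(A))  # -3
```

Each recursive call applies the sign pattern of the current submatrix, matching the note above.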
Given:
- A is square
- $A\vec{v}$ is defined, e.g. if $A \in \mathbb{R}^{n\times n}$ then $\vec{v} \in \mathbb{R}^n$
$$ A\vec{v} = \lambda\vec{v} $$
- $\vec{v}$ is an eigenvector for $A$
- $\lambda$ is the corresponding eigenvalue ($\lambda \in \mathbb{C}$)

An eigenvector is a nonzero vector solution $\vec{v}$ to the above equation, such that the linear transformation $A$ has the same result as scaling the vector by $\lambda$.
Furthermore:
Notes:
- $\lambda > 0 \implies A\vec{v}, \vec{v}$ point in the same direction
- $\lambda < 0 \implies A\vec{v}, \vec{v}$ point in opposite directions
- $\lambda$ can be complex even if nothing else in the equation is
- Eigenvalues cannot be determined from the reduced version of a matrix #star
	- i.e. row reductions change the eigenvalues of a matrix
- The diagonal elements of a triangular matrix are its eigenvalues.
- A invertible iff 0 is not an eigenvalue of A.
- Stochastic matrices have an eigenvalue equal to 1.
- If $\vec{v}_1, \vec{v}_2, \dots, \vec{v}_k$ are eigenvectors that correspond to distinct eigenvalues, then $\vec{v}_1, \vec{v}_2, \dots, \vec{v}_k$ are linearly independent
- Eigenspace: the span of the eigenvectors that correspond to a particular eigenvalue; equals $Nul(A-\lambda I)$
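A spot check that $A\vec{v} = \lambda\vec{v}$ for a sample symmetric matrix (the eigenpairs are stated by hand, then verified):

```python
# For A = [[2, 1], [1, 2]] the eigenpairs are (3, [1, 1]) and (1, [1, -1]).
A = [[2, 1], [1, 2]]
pairs = [(3, [1, 1]), (1, [1, -1])]
for lam, v in pairs:
    Av = [sum(a * x for a, x in zip(row, v)) for row in A]
    assert Av == [lam * x for x in v]   # A v = lambda v
print("eigenpairs verified")
```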
Trace of a matrix: the sum of its diagonal entries.
Algebraic multiplicity of an eigenvalue is how many times an eigenvalue repeatedly occurs as the root of the characteristic polynomial.
- Geometric multiplicity of an eigenvalue is the number of linearly independent eigenvectors associated with it: $dim(Nul(A-\lambda I))$, i.e. how many independent eigenvector solutions this eigenvalue has (the number of free variables in $A - \lambda I$)
- Square A, B are similar $\iff$ we can find P so that $A = PBP^{-1}$
- A, B similar $\implies$ same characteristic polynomial $\implies$ same eigenvalues
Stochastic matrix
- Matrix that uses the rates/probabilities
- Columns are probability vectors.
- Columns sum to 1 ![[Pasted image 20241002094128.png|500]]
Some vector
A stochastic matrix is a square matrix, P , whose columns are probability vectors. |det(P)| <= 1, only volume contracting or preserving
A Markov chain is a sequence of probability vectors, and a stochastic matrix P, such that: $$ \displaylines{ P^k \vec{x}_0 = \vec{x}_k \\ \vec{x}_{k+1} = P \vec{x}_k;\ k = 0, 1, 2, \dots } $$
- A stochastic matrix is regular if $\exists (k \geq 1)$ such that $P^k$ has strictly positive entries
- Regular $\iff$ unique steady-state vector
- Irregular: the sequence $\vec{x}_k$ need not converge, and the steady-state vector need not be unique
A steady-state vector for P is a vector
Ex:
Determine the steady state vector for
$$
P = \begin{bmatrix}
.8 & .3 \\
.2 & .7
\end{bmatrix}
$$
Goal: solve $P\vec{q} = \vec{q}$, i.e. $(P - I)\vec{q} = \vec{0}$
- If the transformation is regular, a single eigenvector
- For our regular stochastic matrices, this is what the steady state vector is.
- If the transformation is irregular, possibly multiple eigenvectors or none at all. If multiple, points will converge to the closest possible eigenspace.
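Worked numerically for the example P above: iterating $\vec{x}_{k+1} = P\vec{x}_k$ converges to $\vec{q} = (0.6, 0.4)$, the solution of $(P - I)\vec{q} = \vec{0}$ with entries summing to 1:

```python
P = [[0.8, 0.3],
     [0.2, 0.7]]
x = [0.5, 0.5]  # any probability vector works as x0
for _ in range(100):  # x_{k+1} = P x_k
    x = [sum(P[i][j] * x[j] for j in range(2)) for i in range(2)]

q = [0.6, 0.4]  # solves (P - I) q = 0 with entries summing to 1
assert all(abs(a - b) < 1e-9 for a, b in zip(x, q))
# q is genuinely steady: P q = q.
Pq = [sum(P[i][j] * q[j] for j in range(2)) for i in range(2)]
assert all(abs(a - b) < 1e-9 for a, b in zip(Pq, q))
print("converged to", [round(v, 6) for v in x])
```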
Theorem
If $P$ is a regular stochastic matrix, then $P$ has a unique steady-state vector $\vec{q}$, and $\vec{x}_{k+1} = P \vec{x}_k$ converges to $\vec{q}$ as $k \rightarrow \infty$; $P^k \vec{x}_0 \longrightarrow \vec{q}$ as $k \rightarrow \infty$, where $P\vec{q} = \vec{q}$
Conjugate
- Reflects across the $Re(z)$ axis

Magnitude (or "modulus"): $|a + bi| = \sqrt{a^2 + b^2} = \sqrt{(a+bi)(a-bi)}$

Polar: $a+ib = r(\cos\phi + i \sin\phi)$ where $r$ is the magnitude
If x and y are complex:
- $\overline{(x+y)} = \overline{x} + \overline{y}$
- $\overline{A\vec{v}} = A \overline{\vec{v}}$ (for real $A$)
- $Im(x\overline{x}) = 0$
- $\overline{(xy)} = \overline{x}\,\overline{y}$
Suppose
Theorem: Fundamental Theorem of Algebra
An $n^{th}$-degree polynomial has exactly $n$ roots in $\mathbb{C}$, counted with multiplicity.
Theorem
- If $\lambda \in \mathbb{C}$ is a root of a real polynomial, $\overline{\lambda}$ is also a root
	- Complex roots come in complex conjugate pairs
- If $\lambda$ is an eigenvalue of real matrix $A$, with eigenvector $\vec{v}$, then $\overline{\lambda}$ is an eigenvalue of A with eigenvector $\overline{\vec{v}}$
4 of the eigenvalues of a 7 x 7 matrix are -2, 4 + i, -4 - i, and i
- Because three of the given eigenvalues are complex and their conjugates are not listed, the remaining three eigenvalues must be those conjugates
- What is the characteristic polynomial?
$p(\lambda) = (\lambda + 2)(\lambda - (4+i))(\lambda - (-4-i))(\lambda - i) (\lambda-(4-i))(\lambda - (-4 + i))(\lambda + i)$
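A check that the conjugate-pair property forces real coefficients: multiplying out the 7 factors of $p(\lambda)$ with Python complex arithmetic, every imaginary part cancels:

```python
# Multiply out p(z) = prod (z - r) over the 7 eigenvalues; because the
# complex roots come in conjugate pairs, every coefficient is real.
roots = [-2, 4 + 1j, 4 - 1j, -4 - 1j, -4 + 1j, 1j, -1j]

coeffs = [1]  # polynomial coefficients, highest degree first
for r in roots:
    # Multiply the current polynomial by (z - r).
    times_z = coeffs + [0]
    times_negr = [0] + [-r * c for c in coeffs]
    coeffs = [a + b for a, b in zip(times_z, times_negr)]

assert all(abs(complex(c).imag) < 1e-9 for c in coeffs)
print([complex(c).real for c in coeffs])
```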
The matrix that rotates vectors by
- $\vec{u} \cdot \vec{v} = \vec{u}^{T} \vec{v}$
- $\vec{u} \cdot \vec{v} = \vec{v} \cdot \vec{u}$
- $(\vec{u} + \vec{v}) \cdot \vec{w} = \vec{u} \cdot \vec{w} + \vec{v} \cdot \vec{w}$
- $(c\vec{u}) \cdot \vec{v} = c(\vec{u} \cdot \vec{v})$
- $||c\vec{v}|| = |c|\ ||\vec{v}||$
- $\vec{u} \cdot \vec{u} \geq 0$
- $\vec{u} \cdot \vec{u} = ||\vec{u}||^2$
- $\vec{a} \cdot \vec{b} = ||\vec{a}||\,||\vec{b}||\cos\theta$
- $\vec{u} \cdot \vec{w} = 0 \iff \vec{u}, \vec{w}$ orthogonal $\iff ||\vec{u} + \vec{w}||^2 = ||\vec{u}||^2 + ||\vec{w}||^2$ (Pythagorean theorem)
- If $W$ is a subspace of $\mathbb{R}^n$ and $\vec{z} \in \mathbb{R}^n$, $\vec{z}$ is orthogonal to $W$ if it is orthogonal to every vector in $W$
- The set of all vectors orthogonal to a subspace is itself a subspace, called the [[Orthogonal Complement]] of $W$: $W^{\perp}$, "W perp". $W^{\perp} = \{\vec{z} \in \mathbb{R}^n \mid \forall (\vec{w} \in W)\ \vec{z} \cdot \vec{w} = 0\}$
$dim(Row\ A) = dim(Col\ A)$
$(Row\ A)^{\perp} = Nul\ A$
$(Col\ A)^{\perp} = Nul(A^T)$

For $A\vec{x} = \vec{0}$:
- $\vec{x}$ is orthogonal to each row of $A$
- $Row\ A$ is the orthogonal complement of $Nul\ A$
- $dim(Row\ A) + dim(Nul\ A) =$ number of columns
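A spot check that a null-space vector is orthogonal to every row (illustrative matrix):

```python
A = [[1, 2, 3],
     [4, 5, 6]]
# x = (1, -2, 1) solves A x = 0 (it spans Nul A here).
x = [1, -2, 1]
for row in A:
    # Each row dotted with x gives 0, so x is orthogonal to Row A.
    assert sum(r * v for r, v in zip(row, x)) == 0
print("x is orthogonal to Row A")
```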
A set of vectors is an orthogonal set if every vector in the set is orthogonal to every other vector in the set.
Linear Independence for Orthogonal Sets:
If there is an orthogonal set of vectors
Expansion in Orthogonal Basis
If