These notes are just a record of my study; they contain a summary of mathematical concepts, definitions, and my own understanding. I hope they are worth sharing.
Section 1: Row Reduction and Echelon Forms
- Echelon form: the matrix obtained after row elimination.
- Reduced echelon form: an echelon form in which every leading entry is 1 and is the only nonzero entry in its column.
- Echelon forms and the reduced echelon form are row equivalent to the original matrix.
- Span{v1, v2, ..., vp} is the collection of all vectors that can be written in the form c1*v1 + c2*v2 + ... + cp*vp with c1, ..., cp scalars.
- Ax = 0 has a nontrivial solution if and only if the equation has at least one free variable (i.e. A does not have full column rank).
- The solution set of Ax = b is a particular solution plus the solution set of Ax = 0.
- The procedure for solving a linear system: p. 54; a sketch follows this list.
- Linear independence means that no vector in the set can be written as a linear combination of the others.
- Ax = b means x1*a1 + x2*a2 + ... + xn*an = b, where a1, ..., an are the columns of A.
- Matrix transformations: T(x) = Ax is a linear transformation.
- The standard matrix of a linear transformation is assembled from the images of the standard basis vectors: A = [T(e1) T(e2) ... T(en)].
- A mapping T: R^n -> R^m is said to be onto R^m if each b in R^m is the image of at least one x in R^n (Ax = b is consistent for every b).
- A mapping T: R^n -> R^m is said to be one-to-one if each b in R^m is the image of at most one x in R^n.
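A minimal sketch of the solving procedure, using sympy's `Matrix.rref` (assuming sympy is available; the system and its numbers are made up for illustration, not taken from the book):

```python
from sympy import Matrix

# Augmented matrix [A | b] for a made-up system with one free variable.
M = Matrix([[1, 2, -1, 4],
            [2, 4,  0, 10]])
R, pivots = M.rref()    # reduced echelon form and the pivot columns
print(R)                # Matrix([[1, 2, 0, 5], [0, 0, 1, 1]])
print(pivots)           # (0, 2): x1 and x3 are basic variables, x2 is free
# General solution: x1 = 5 - 2*x2, x3 = 1, x2 free. Since a free variable
# exists, the homogeneous system Ax = 0 has nontrivial solutions.
```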
Section 2: Matrix Operations
- Each column of AB is a linear combination of the columns of A using weights from the corresponding column of B: AB = A[b1 b2 ... bp] = [Ab1 Ab2 ... Abp]. See the first sketch after this list.
- Each row of AB is a linear combination of the rows of B using weights from the corresponding row of A.
- Warning: in general AB != BA; AB = AC does not imply B = C; and AB = 0 does not imply A = 0 or B = 0.
- Definition of the inverse: A^-1 * A = A * A^-1 = I. From this one can deduce that A must be square; see Exercises 23-25, Section 2.1. A is invertible if and only if it has full rank (nonzero determinant).
- Row reducing [A I] yields [I A^-1].
- All the equivalent characterizations of a full-rank (invertible) matrix: p. 129, p. 179.
- LU decomposition: A = LU, where L is a square lower-triangular matrix with 1s on the diagonal and U is an m*n upper (echelon) matrix. L is the inverse of the product of the elementary row-operation matrices applied in order, and U is an echelon form of A. Computing L does not require forming each elementary matrix; see p. 146 and the second sketch after this list.
- Definitions of subspace, column space, and null space.
- If A is m*n, then rank(A) + dim(Nul A) = n (the Rank Theorem).
- The dimension of a nonzero subspace H, denoted by dim H, is the number of vectors in any basis for H. The dimension of the zero subspace {0} is defined to be zero.
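A quick numerical check of the column and row views of the product AB, on made-up matrices:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
AB = A @ B
# Column view: column j of AB is A times column j of B.
for j in range(B.shape[1]):
    assert np.allclose(AB[:, j], A @ B[:, j])
# Row view: row i of AB is row i of A times B.
for i in range(A.shape[0]):
    assert np.allclose(AB[i, :], A[i, :] @ B)
```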
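And a minimal LU sketch (Doolittle style, no pivoting, assuming every pivot encountered is nonzero; the test matrix is made up):

```python
import numpy as np

def lu_no_pivot(A):
    # L collects the row-elimination multipliers; U becomes the echelon form.
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]    # multiplier that clears entry (i, k)
            U[i, :] -= L[i, k] * U[k, :]   # the corresponding row replacement
    return L, U

A = np.array([[2., 1., 1.], [4., 3., 3.], [8., 7., 9.]])
L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)
```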
Section 3: Introduction to Determinants
- The definition of the determinant and how to compute it (cofactor expansion).
- Row replacement (adding a multiple of one row to another) does not change the determinant. Interchanging two rows flips its sign. Multiplying a row by k multiplies the determinant by k.
- The determinant of a triangular matrix is the product of its diagonal entries.
- det(AB) = det(A) * det(B).
- Cramer's rule: let A be an invertible n*n matrix. For any b in R^n, the unique solution x of Ax = b has entries given by xi = det Ai(b) / det A, where Ai(b) denotes A with its i-th column replaced by b.
- From Cramer's rule one can derive A^-1 = (1/det A) * adj A, where adj A = [(-1)^(i+j) * det(Aji)] is the adjugate (the transpose of the cofactor matrix). A sketch follows this list.
- Determinants and volume: the area (or volume) of the parallelogram (or parallelepiped) determined by the columns of A equals |det A|. Moreover, under the linear transformation x -> Ax, volumes scale by |det A|, since det(AP) = det(A) * det(P).
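A sketch verifying Cramer's rule and the 2x2 adjugate formula on a made-up system:

```python
import numpy as np

A = np.array([[2., 1.], [1., 3.]])
b = np.array([3., 5.])

# Cramer's rule: x_i = det(A_i(b)) / det(A), column i replaced by b.
x = np.empty(2)
for i in range(2):
    Ai = A.copy()
    Ai[:, i] = b
    x[i] = np.linalg.det(Ai) / np.linalg.det(A)
assert np.allclose(A @ x, b)

# Adjugate: for a 2x2 matrix [[a, b], [c, d]], adj(A) = [[d, -b], [-c, a]],
# and A^{-1} = adj(A) / det(A).
adjA = np.array([[A[1, 1], -A[0, 1]], [-A[1, 0], A[0, 0]]])
assert np.allclose(np.linalg.inv(A), adjA / np.linalg.det(A))
```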
Section 4: Vector Spaces
- An indexed set {v1, v2, ..., vp} of two or more vectors, with v1 != 0, is linearly dependent if and only if some vj (with j > 1) is a linear combination of the preceding vectors v1, ..., v(j-1).
- Elementary row operations on a matrix do not affect the linear dependence relations among its columns.
- Row operations can change the column space of a matrix.
- x = P_B [x]_B: we call P_B the change-of-coordinates matrix from B to the standard basis in R^n.
- Let B and C be bases of a vector space V. Then there is a unique n*n matrix P_{C<-B} such that [x]_C = P_{C<-B} [x]_B. The columns of P_{C<-B} are the C-coordinate vectors of the vectors in the basis B, that is, P_{C<-B} = [[b1]_C [b2]_C ... [bn]_C]. Row reduction computes it: [C B] ~ [I P_{C<-B}].
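A sketch of the change-of-basis computation: instead of row reducing [C B], solving C P = B column by column yields the same matrix (the bases here are made up):

```python
import numpy as np

B = np.array([[1., -2.], [-3., 4.]])    # columns are the basis vectors b1, b2
C = np.array([[-7., 5.], [9., -7.]])    # columns are the basis vectors c1, c2
P_CB = np.linalg.solve(C, B)            # columns are [b1]_C and [b2]_C

x_B = np.array([2., 1.])                # coordinates of some x relative to B
x_C = P_CB @ x_B                        # coordinates of the same x relative to C
assert np.allclose(B @ x_B, C @ x_C)    # both describe the same vector
```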
Section 5: Eigenvectors and Eigenvalues
- \(Ax = \lambda x\)
- Eigenvectors corresponding to distinct eigenvalues are linearly independent.
- The characteristic equation: det(A - λI) = 0, because (A - λI)x = 0 must have a nontrivial solution.
- A is similar to B if there is an invertible matrix P such that P^-1 A P = B. Similar matrices have the same eigenvalues.
- A matrix is diagonalizable if and only if it has n linearly independent eigenvectors (there are infinitely many eigenvectors, but at most n of them can be linearly independent).
- The dimension of each eigenspace is at most the multiplicity of the corresponding eigenvalue. When every eigenspace dimension equals the multiplicity of its eigenvalue, the matrix is diagonalizable. See the first sketch after this list.
- The matrix of the same linear transformation relative to bases of spaces of different dimensions: p. 328. The matrix of the same transformation relative to different bases of the same space: p. 329. These are really the same idea.
- Suppose A = PDP^-1, where D is a diagonal n*n matrix. If B is the basis for R^n formed from the columns of P, then D is the B-matrix for the transformation x -> Ax: after changing coordinates to the basis P, the matrix of the transformation becomes diagonal.
- The complex number system (complex eigenvalues).
- Computing eigenvalues and eigenvectors by iteration (the power method). First pick an estimate close to the dominant eigenvalue and a vector \(x_0\) whose largest entry is 1, then iterate; the procedure is on p. 365, and the second sketch after this list implements it. The iteration finds the largest eigenvalue because \((\lambda_1)^{-k}A^kx\rightarrow c_1v_1\), so for a generic \(x\), as k goes to infinity, \(A^kx\) lines up with the dominant eigenvector. Although \(\lambda_1\) and \(c_1v_1\) are unknown, \(Ax_k\) approaches \(\lambda_1 x_k\), so by rescaling each \(x_k\) so that its largest entry is 1, the scaling factor converges to \(\lambda_1\).
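First, a sketch of diagonalization and of D as the B-matrix of x -> Ax (the matrix is made up and has distinct eigenvalues, so it is diagonalizable):

```python
import numpy as np

A = np.array([[4., -3.], [2., -1.]])   # eigenvalues 2 and 1
eigvals, P = np.linalg.eig(A)          # columns of P are eigenvectors
D = np.diag(eigvals)
assert np.allclose(P @ D @ np.linalg.inv(P), A)

x = np.array([1., 5.])
x_B = np.linalg.solve(P, x)            # [x]_B: coordinates relative to the basis P
assert np.allclose(np.linalg.solve(P, A @ x), D @ x_B)   # [Ax]_B = D [x]_B
```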
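Second, a sketch of the power method with the rescaling described above (the matrix and starting vector are made up; this assumes a single dominant eigenvalue):

```python
import numpy as np

def power_method(A, x0, steps=50):
    # Rescale so the entry of largest magnitude is 1; the scaling
    # factor mu then converges to the dominant eigenvalue.
    x = x0.astype(float)
    mu = 0.0
    for _ in range(steps):
        y = A @ x
        mu = y[np.argmax(np.abs(y))]   # entry of largest magnitude
        x = y / mu                     # now the largest entry of x is 1
    return mu, x

A = np.array([[6., 5.], [1., 2.]])     # eigenvalues 7 and 1
mu, v = power_method(A, np.array([1., 0.]))
assert abs(mu - 7.0) < 1e-8            # dominant eigenvalue
assert np.allclose(A @ v, mu * v)      # v is the corresponding eigenvector
```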
Section 6: Inner Product, Length, and Orthogonality
- \((Row A)^{\bot} = Nul A\) and \((Col A)^{\bot} = Nul A^{\top}\). This is immediate; here \(W^{\bot}\) denotes the space of all vectors orthogonal to the subspace W.
- An orthogonal basis for a subspace W of \(R^n\) is a basis for W that is also an orthogonal set.
- The projection of a vector onto a line: \(\hat{y} = proj_L y = \frac{y\cdot u}{u\cdot u}u\).
- A set is an orthonormal set if it is an orthogonal set of unit vectors.
- An m*n matrix U has orthonormal columns if and only if \(U^\top U = I\).
- The projection of a vector onto a subspace W with orthogonal basis \(\{u_1, ..., u_p\}\): \(\hat{y} = proj_W y = \frac{y\cdot u_1}{u_1\cdot u_1}u_1 + \frac{y\cdot u_2}{u_2\cdot u_2}u_2 + ... + \frac{y{\cdot}u_p}{u_p\cdot u_p}u_p\).
- How to turn a set of vectors into an orthonormal set: the Gram-Schmidt process, which repeatedly applies the projection above, subtracting from each vector its projection onto the span of the previous ones, then normalizes. See the first sketch after this list.
- QR factorization: if A has linearly independent columns, it can be factored into Q (orthonormal columns, from Gram-Schmidt) and R (upper triangular: the coordinates of the original columns relative to the orthonormal basis). Since \(Q^{\top}Q = I\), we get \(Q^{\top}A = Q^{\top}(QR) = IR = R\).
- Least squares (the foundation of linear fitting in machine learning, in the non-Bayesian setting): from the normal equations \(A^{\top}(b-A\hat{x})=0\) we get \(\hat{x}=(A^\top A)^{-1}A^{\top}b\). If A is itself invertible, this simplifies to \(\hat{x}=A^{-1}b\). If a QR factorization is available, then \(\hat{x}=R^{-1}Q^{\top}b\). See the second sketch after this list.
- The notion of an inner product on function spaces.
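First, a Gram-Schmidt sketch that orthonormalizes the columns of a made-up matrix (assuming the columns are linearly independent):

```python
import numpy as np

def gram_schmidt(V):
    # Subtract from each column its projection onto the earlier
    # orthonormal vectors (the projection formula above), then normalize.
    Q = np.zeros_like(V, dtype=float)
    for j in range(V.shape[1]):
        v = V[:, j].astype(float)
        for i in range(j):
            v -= (Q[:, i] @ V[:, j]) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

V = np.array([[1., 1.], [1., 0.], [0., 1.]])
Q = gram_schmidt(V)
assert np.allclose(Q.T @ Q, np.eye(2))   # orthonormal columns
```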
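Second, a least-squares sketch on made-up data: the normal equations, the QR route, and numpy's own solver agree:

```python
import numpy as np

A = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])   # design matrix
b = np.array([1., 2.9, 5.1, 7.0])                        # observations

x_normal = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations
Q, R = np.linalg.qr(A)                         # reduced QR factorization
x_qr = np.linalg.solve(R, Q.T @ b)             # solve R x = Q^T b
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)  # library least-squares solver

assert np.allclose(x_normal, x_qr) and np.allclose(x_qr, x_ref)
```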
Section 7: Diagonalization of Symmetric Matrices
- If a matrix is symmetric, then the eigenspaces corresponding to any two distinct eigenvalues are orthogonal.
- A matrix is orthogonally diagonalizable if and only if it is symmetric.
- From \(A=PDP^{-1}\) (with P orthogonal, so \(A=PDP^{\top}\)) one obtains PCA (principal component analysis in machine learning: diagonalizing the symmetric covariance matrix).
- Converting a quadratic form into one with no cross-product terms: substitute x = Py with \(A = PDP^{-1}\), so that \(x^{\top}Ax = y^{\top}Dy\).
- For the quadratic form \(x^{\top}Ax\) subject to |x| = 1, the maximum value is the largest eigenvalue and the minimum is the smallest eigenvalue. If the direction of the largest eigenvalue is excluded (adding the constraint \(x^{\top}u_1 = 0\)), the maximum becomes the second-largest eigenvalue. See the first sketch after this list.
- Intuitively, the orthogonal matrix P picks a coordinate system in which the quadratic form is axis-aligned, and D gives the stretch factors along those axes.
- SVD (the last topic of the book, and one that ties together much of the above) factors a matrix in a form similar to PDP^-1, which not every matrix admits directly (it requires n linearly independent eigenvectors, and symmetry if P is to be orthogonal). We write \(A=U{\Sigma}V^{\top}\), where the diagonal entries of \({\Sigma}\) are the singular values of A (the square roots of the eigenvalues of \(A^{\top}A\)), the columns of V are the corresponding eigenvectors of \(A^{\top}A\), and the columns of U are the normalizations of the columns of AV. The columns of AV are orthogonal, and \(U{\Sigma}\) is just another way of writing AV. See the second sketch below.
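Two closing sketches on made-up matrices. The first checks orthogonal diagonalization of a symmetric matrix and the constrained extremes of its quadratic form on the unit circle:

```python
import numpy as np

A = np.array([[3., 1.], [1., 3.]])        # symmetric; eigenvalues 2 and 4
eigvals, P = np.linalg.eigh(A)            # for symmetric A, P is orthogonal
assert np.allclose(P.T @ P, np.eye(2))
assert np.allclose(P @ np.diag(eigvals) @ P.T, A)

# Sample x^T A x over unit vectors: it stays between the eigenvalues.
theta = np.linspace(0.0, 2 * np.pi, 1000)
xs = np.stack([np.cos(theta), np.sin(theta)])  # columns are unit vectors
q = np.einsum('in,ij,jn->n', xs, A, xs)        # x^T A x for each column
assert eigvals[0] - 1e-9 <= q.min() and q.max() <= eigvals[-1] + 1e-9
```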
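The second builds the reduced SVD exactly as described: V from the eigenvectors of \(A^{\top}A\), singular values as the square roots of its eigenvalues, and U by normalizing the columns of AV:

```python
import numpy as np

A = np.array([[4., 11., 14.], [8., 7., -2.]])
eigvals, V = np.linalg.eigh(A.T @ A)           # eigen-decomposition of A^T A
order = np.argsort(eigvals)[::-1]              # largest eigenvalue first
eigvals, V = eigvals[order], V[:, order]
sigma = np.sqrt(np.clip(eigvals, 0.0, None))   # singular values of A
r = int(np.sum(sigma > 1e-10))                 # rank of A
U = (A @ V[:, :r]) / sigma[:r]                 # normalize the columns of AV
assert np.allclose(U @ np.diag(sigma[:r]) @ V[:, :r].T, A)   # reduced SVD
```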