Variance and the Calculation and Derivation of the Variances of Common Distributions

1. Definition of Variance

  • Introduction

    We know that the mathematical expectation describes the average value of a random variable. For example, a batch of light bulbs may have an average lifetime of $E(X)=1000$ (hours), but this single figure alone cannot tell us how good the batch is: perhaps most lifetimes lie between $950$ and $1050$ hours, or perhaps about half the bulbs are of high quality with lifetimes around $1300$ hours while the rest are of poor quality and last only about $700$ hours. It is therefore necessary to study how far a random variable deviates from its mean. To measure this deviation, the quantity $E\{|X-E(X)|\}$ comes to mind first, but the absolute value is inconvenient to work with and is not everywhere differentiable, so instead we use $E\{[X-E(X)]^2\}$ to measure how far the random variable $X$ deviates from its mean $E(X)$.

  • Definition

    Let $X$ be a random variable. If $E\{[X-E(X)]^2\}$ exists, then $E\{[X-E(X)]^2\}$ is called the variance of $X$, written $D(X)$ or $\mathrm{Var}(X)$; that is, $D(X)=\mathrm{Var}(X)=E\{[X-E(X)]^2\}$.

    In applications one also introduces the quantity $\sqrt{D(X)}$, written $\sigma(X)$ and called the standard deviation (or root mean square deviation).

    Note that the variance and the mean are measured in different units: in the light-bulb example the expectation is measured in hours while the variance is measured in hours squared. The standard deviation restores agreement, since its unit is the same as that of the expectation.

  • By the definition, the variance is simply the mathematical expectation of the function $g(X)=[X-E(X)]^2$ of the random variable $X$. Therefore:

    • For a discrete random variable, $D(X)=\sum\limits_{k=1}^{\infty}[x_k-E(X)]^2p_k$, where $P\{X=x_k\}=p_k,\ k=1,2,\cdots$ is the distribution law of $X$.
    • For a continuous random variable, $D(X)=\int_{-\infty}^{+\infty}[x-E(X)]^2f(x)\,dx$, where $f(x)$ is the probability density of $X$.
  • In practice, the variance is usually computed via the formula $D(X)=E(X^2)-[E(X)]^2$.

    Proof:

    $$\begin{aligned} D(X)&=E\{[X-E(X)]^2\} = E\{X^2-2XE(X)+[E(X)]^2\} \\&= E(X^2)-2E(X)E(X)+[E(X)]^2\\&=E(X^2)-[E(X)]^2 \end{aligned}$$
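As a quick numerical illustration of the shortcut formula (not part of the original text), the following sketch computes the variance of a fair six-sided die both from the definition and from $D(X)=E(X^2)-[E(X)]^2$; the die example is a hypothetical choice.

```python
# Variance of a fair die computed two ways; both agree (35/12 ≈ 2.9167).
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

EX = sum(x * p for x, p in zip(values, probs))          # E(X)
EX2 = sum(x ** 2 * p for x, p in zip(values, probs))    # E(X^2)

var_by_definition = sum((x - EX) ** 2 * p for x, p in zip(values, probs))
var_by_shortcut = EX2 - EX ** 2

print(var_by_definition, var_by_shortcut)   # 2.9166..., 2.9166...
```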

  • Standardization

    Let the random variable $X$ have mathematical expectation $E(X)=\mu$ and variance $D(X)=\sigma^2\neq0$, and write $X^*=\frac{X-\mu}{\sigma}$. Its expectation is $$E(X^*) = E\Big(\frac{X-\mu}{\sigma}\Big) = \frac{E(X)-\mu}{\sigma} = \frac{\mu-\mu}{\sigma} = 0,$$ and its variance is $$\begin{aligned} D(X^*) &= E\Big[\Big(\frac{X-\mu}{\sigma}\Big)^2\Big] = \frac{E[(X-\mu)^2]}{\sigma^2} = \frac{E(X^2)-2\mu E(X)+\mu^2}{\sigma^2} \\&= \frac{E(X^2)-\mu^2}{\sigma^2} = \frac{E(X^2)-[E(X)]^2}{\sigma^2} \\&=\frac{D(X)}{\sigma^2} = \frac{\sigma^2}{\sigma^2} = 1.\end{aligned}$$

    Thus $X^*=\frac{X-\mu}{\sigma}$ has mathematical expectation $0$ and variance $1$; $X^*$ is called the standardized variable of $X$.
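A small simulation sketch (assuming NumPy is available; the distribution and parameters are chosen arbitrarily) showing that the standardized variable has mean approximately $0$ and variance approximately $1$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0
x = rng.normal(mu, sigma, size=100_000)   # samples of X with E(X)=mu, D(X)=sigma^2

x_star = (x - mu) / sigma                 # X* = (X - mu) / sigma
print(x_star.mean(), x_star.var())        # close to 0 and 1 (finite-sample noise)
```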

2. Properties of Variance

  • If $C$ is a constant, then $D(C)=0$.

    Proof:

    $D(C)=E\{[C-E(C)]^2\} = E(C^2)-[E(C)]^2 = C^2-C^2 = 0$

    By the definition, the variance measures how much a random variable deviates from its expectation. A random variable that is identically equal to a constant shows no deviation at all, so $D(C)=0$.

  • If $X$ is a random variable and $C$ is a constant, then
    $1^\circ \quad D(CX)=C^2D(X)$

    Proof:

    $D(CX)=E(C^2X^2)-[E(CX)]^2 = C^2E(X^2)-C^2[E(X)]^2 = C^2\{E(X^2)-[E(X)]^2\}=C^2D(X)$

    $2^\circ \quad D(X+C) = D(X)$

    Proof:

    $$\begin{aligned} D(X+C)&=E[(C+X)^2]-[E(C+X)]^2 = E[C^2+2CX+X^2]-[E(C)+E(X)]^2 \\&= E(C^2)+2E(C)E(X)+E(X^2)-[E(C)]^2-2E(C)E(X)-[E(X)]^2\\&=E(X^2)-[E(X)]^2\\&= D(X) \end{aligned}$$
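These constant-related properties are easy to check numerically. The sketch below is a hypothetical Monte Carlo check with NumPy (an arbitrarily chosen exponential sample) illustrating $D(C)=0$, $D(CX)=C^2D(X)$ and $D(X+C)=D(X)$.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.exponential(scale=2.0, size=200_000)
C = 3.0

print(np.var(np.full(1000, C)))            # D(C) = 0 exactly
print(np.var(C * X), C ** 2 * np.var(X))   # D(CX) ≈ C^2 D(X)
print(np.var(X + C), np.var(X))            # D(X+C) ≈ D(X)
```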

  • If $X$ and $Y$ are two random variables, then $$D(X+Y)=D(X)+D(Y)+2E\{[X-E(X)][Y-E(Y)]\}.$$

    Proof:

    $$\begin{aligned} D(X+Y) &= E[(X+Y)-E(X+Y)]^2 = E\{[X-E(X)]+[Y-E(Y)]\}^2 \\&=E[X-E(X)]^2+E[Y-E(Y)]^2+2E\{[X-E(X)][Y-E(Y)]\} \\&=D(X)+D(Y)+2E\{[X-E(X)][Y-E(Y)]\} \end{aligned}$$

    In particular, if $X$ and $Y$ are independent, then $D(X+Y)=D(X)+D(Y)$.

    Proof:

    $$\begin{aligned} D(X+Y) &= E[(X+Y)^2]-[E(X+Y)]^2 = E(X^2+Y^2+2XY) - [E(X)+E(Y)]^2 \\&=E(X^2)+E(Y^2)+2E(XY)-[E(X)]^2-[E(Y)]^2-2E(X)E(Y) \\&=D(X)+D(Y) + 2[E(XY)-E(X)E(Y)] \\&=D(X)+D(Y) \qquad (\text{since } X \text{ and } Y \text{ are independent, } E(XY)=E(X)E(Y)) \end{aligned}$$

    This property extends to the sum of any finite number of mutually independent random variables.
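As an illustrative sketch (assuming NumPy; the two distributions are chosen arbitrarily), the following simulation checks that the variance of a sum of independently generated samples is close to the sum of the variances:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.poisson(lam=4.0, size=500_000)    # D(X) = 4
Y = rng.uniform(0.0, 3.0, size=500_000)   # D(Y) = 9/12 = 0.75, generated independently of X

print(np.var(X + Y))                       # approximately 4.75
print(np.var(X) + np.var(Y))               # approximately 4.75
```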

  • $D(X)=0$ if and only if $X$ takes the value $E(X)$ with probability $1$, that is, $P\{X=E(X)\}=1$.

    Proof:

    Sufficiency: given $P\{X=E(X)\}=1$, we also have $P\{X^2=[E(X)]^2\}=1$, so $E(X^2) = [E(X)]^2$, and therefore $D(X) = E(X^2)-[E(X)]^2 = 0$.

    Note that $P\{X=E(X)\}=1$ does not allow us to conclude that $X$ is literally a constant $C$: for a discrete random variable this conclusion does follow, but in general $X$ may differ from $E(X)$ on a set of probability zero (for instance, at finitely many points of a continuous distribution), and on that exceptional set its values need not equal the constant $C$.


    Necessity: given $D(X)=0$, we prove $P\{X=E(X)\}=1$ by contradiction, as follows.

    Suppose $P\{X=E(X)\}<1$. Then there is some $\epsilon>0$ for which $P\{|X-E(X)|\geq\epsilon\}>0$. But by Chebyshev's inequality, for every $\epsilon>0$, $$P\{|X-E(X)|\geq\epsilon\}\leq \frac{D(X)}{\epsilon^2}=0,$$ which contradicts the assumption. Therefore $P\{X=E(X)\}=1$.

3. Variances of Common Distributions

3.1 The (0-1) Distribution

  • Let the random variable $X$ follow the $(0-1)$ distribution, with distribution law $P\{X=k\} = p^k(1-p)^{1-k},\ k=0,1$.

    Then $D(X)=p(1-p)$.

    Proof:

    $E(X)=p$

    $E(X^2)=\sum\limits_{k=0}^{1}k^2\,P\{X=k\} = 0^2\cdot p^0(1-p)^{1-0}+1^2\cdot p^1(1-p)^{1-1} = p$

    $\therefore D(X) = E(X^2)-[E(X)]^2 = p-p^2=p(1-p)$
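The computation above is easy to mirror in code. The helper below is a hypothetical utility (not from the text) that computes $E(X^2)-[E(X)]^2$ from a finite distribution law and reproduces $p(1-p)$ for the $(0-1)$ distribution.

```python
def variance_from_pmf(pmf):
    """Compute D(X) = E(X^2) - [E(X)]^2 from a dict {value: probability}."""
    ex = sum(x * p for x, p in pmf.items())
    ex2 = sum(x ** 2 * p for x, p in pmf.items())
    return ex2 - ex ** 2

p = 0.3
print(variance_from_pmf({0: 1 - p, 1: p}), p * (1 - p))   # both 0.21
```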

3.2 The Binomial Distribution

  • Let $X\sim b(n,p)$, with distribution law $P\{X=k\} = \binom{n}{k}p^kq^{n-k},\ k=0,1,2,\cdots,n$, where $q=1-p$. Then $D(X)=np(1-p)$.

    Proof:

    $E(X) = np$

    $$\begin{aligned}E(X^2) &= \sum\limits_{k=0}^{n}k^2\binom{n}{k}p^kq^{n-k} =\sum\limits_{k=0}^{n}k^2\frac{n!}{k!(n-k)!}p^kq^{n-k} = \sum\limits_{k=1}^{n}k\frac{n!}{(k-1)!(n-k)!}p^kq^{n-k}\\&= \sum\limits_{k=1}^{n}(k-1)\frac{n!}{(k-1)!(n-k)!}p^kq^{n-k}+\sum\limits_{k=1}^{n}\frac{n!}{(k-1)!(n-k)!}p^kq^{n-k} \\& = \sum\limits_{k=2}^{n}\frac{n!}{(k-2)!(n-k)!}p^kq^{n-k}+np\sum\limits_{k=1}^{n}\frac{(n-1)!}{(k-1)!(n-k)!}p^{k-1}q^{n-k} \\&= n(n-1)p^2\sum\limits_{k=2}^{n}\frac{(n-2)!}{(k-2)!(n-k)!}p^{k-2}q^{n-k}+np(p+q)^{n-1} \\&=(n^2p^2-np^2)(p+q)^{n-2}+np \\&=n^2p^2-np^2+np\end{aligned}$$

    $\therefore D(X) = E(X^2)-[E(X)]^2 = (n^2p^2-np^2+np)-(np)^2 = np-np^2 = np(1-p)$
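A direct numerical check of $D(X)=np(1-p)$ by summing the distribution law (a sketch using only the Python standard library; $n$ and $p$ are arbitrary):

```python
from math import comb

n, p = 10, 0.3
q = 1 - p

EX = sum(k * comb(n, k) * p ** k * q ** (n - k) for k in range(n + 1))
EX2 = sum(k ** 2 * comb(n, k) * p ** k * q ** (n - k) for k in range(n + 1))

print(EX2 - EX ** 2, n * p * (1 - p))   # both 2.1 up to floating-point rounding
```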

3.3 The Poisson Distribution

  • Let $X\sim \pi(\lambda)$, with distribution law $P\{X=k\} = \frac{\lambda^k}{k!}e^{-\lambda},\ k=0,1,2,\cdots$. Then $D(X)=\lambda$.

    Proof:

    $E(X) = \lambda$

    $$\begin{aligned}E(X^2) &= \sum\limits_{k=0}^{\infty}k^2\frac{\lambda^k}{k!}e^{-\lambda} = \sum\limits_{k=1}^{\infty}k\frac{\lambda^k}{(k-1)!}e^{-\lambda}\\&=\sum\limits_{k=1}^{\infty}(k-1)\frac{\lambda^k}{(k-1)!}e^{-\lambda}+\sum\limits_{k=1}^{\infty}\frac{\lambda^k}{(k-1)!}e^{-\lambda}\\&= \lambda^2\sum\limits_{k=2}^{\infty}\frac{\lambda^{k-2}}{(k-2)!}e^{-\lambda} + \lambda\sum\limits_{k=1}^{\infty}\frac{\lambda^{k-1}}{(k-1)!}e^{-\lambda} \\&= \lambda^2+\lambda \end{aligned}$$

    $\therefore D(X) = E(X^2)-[E(X)]^2 = (\lambda^2+\lambda)-\lambda^2 = \lambda$
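A quick numerical check of $D(X)=\lambda$ using a truncated series (a sketch; terms beyond $k=200$ are negligible for the $\lambda$ chosen here):

```python
from math import exp, factorial

lam = 4.0
pmf = [lam ** k / factorial(k) * exp(-lam) for k in range(200)]

EX = sum(k * pk for k, pk in enumerate(pmf))
EX2 = sum(k ** 2 * pk for k, pk in enumerate(pmf))

print(EX2 - EX ** 2)   # approximately 4.0
```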

3.4 The Geometric Distribution

  • Let $X\sim G(p)$, with distribution law $P\{X=k\} = (1-p)^{k-1}p,\ k = 1,2,3,\cdots$. Then $D(X)=\frac{1-p}{p^2}$.

    Proof:

    $E(X) = \frac{1}{p}$

    $E(X^2) = \sum\limits_{k=1}^{\infty}k^2(1-p)^{k-1}p = p\sum\limits_{k=1}^{\infty}k^2(1-p)^{k-1}$

    When computing the mathematical expectation of the geometric distribution we introduced a term-by-term differentiation trick, namely:

    For $0<x<1$, $$\sum\limits_{k=1}^{\infty}kx^{k-1} =\Big(\sum\limits_{k=1}^{\infty}x^k\Big)' =\Big(\frac{x}{1-x}\Big)' = \frac{1}{(1-x)^2}.$$ For our purposes $x=1-p$ is a constant, so multiplying both sides by $x$ gives $$\sum\limits_{k=1}^{\infty}kx^{k} = \frac{x}{(1-x)^2},$$ and differentiating once more, $$\sum\limits_{k=1}^{\infty}k^2x^{k-1} = \Big(\sum\limits_{k=1}^{\infty}kx^{k}\Big)' = \bigg[\frac{x}{(1-x)^2}\bigg]' = \frac{1+x}{(1-x)^3}.$$

    $\therefore E(X^2) = p\sum\limits_{k=1}^{\infty}k^2(1-p)^{k-1} = p\,\frac{1+(1-p)}{[1-(1-p)]^3} = \frac{2-p}{p^2}$

    $\therefore D(X) = E(X^2)-[E(X)]^2 = \frac{2-p}{p^2}-\Big(\frac{1}{p}\Big)^2 = \frac{1-p}{p^2}.$
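The differentiation trick and the final algebra can be verified symbolically; the sketch below assumes SymPy is available.

```python
import sympy as sp

x, p = sp.symbols('x p', positive=True)

# d/dx [ x/(1-x)^2 ] should equal (1+x)/(1-x)^3, the identity used above
lhs = sp.diff(x / (1 - x) ** 2, x)
print(sp.simplify(lhs - (1 + x) / (1 - x) ** 3))   # 0

# D(X) = E(X^2) - [E(X)]^2 with E(X^2) = (2-p)/p^2 and E(X) = 1/p
var = sp.simplify((2 - p) / p ** 2 - (1 / p) ** 2)
print(sp.simplify(var - (1 - p) / p ** 2))         # 0
```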

3.5 The Hypergeometric Distribution

  • Let $X\sim H(n,M,N)$, with distribution law $P\{X=k\} = \frac{\binom{M}{k}\binom{N-M}{n-k}}{\binom{N}{n}},\ k= 0,1,\cdots,\min\{n,M\}$. Then $D(X)=n\frac{M}{N}\Big(1-\frac{M}{N}\Big)\Big(\frac{N-n}{N-1}\Big)$.

    Proof:

    $E(X) = n\frac{M}{N}$

    $$\begin{aligned} E(X^2) &= \sum\limits_{k=0}^{\min\{n,M\}}k^2\frac{\binom{M}{k}\binom{N-M}{n-k}}{\binom{N}{n}} =\sum\limits_{k=0}^{\min\{n,M\}}k^2\frac{M!}{k!(M-k)!}\binom{N-M}{n-k}\frac{n!(N-n)!}{N!} \\&=\sum\limits_{k=1}^{\min\{n,M\}}k\frac{M!}{(k-1)!(M-k)!}\binom{N-M}{n-k}\frac{n!(N-n)!}{N!}\\ &=\sum\limits_{k=1}^{\min\{n,M\}}(k-1)\frac{M!}{(k-1)!(M-k)!}\binom{N-M}{n-k}\frac{n!(N-n)!}{N!}+\sum\limits_{k=1}^{\min\{n,M\}}\frac{M!}{(k-1)!(M-k)!}\binom{N-M}{n-k}\frac{n!(N-n)!}{N!}\\&=\sum\limits_{k=2}^{\min\{n,M\}}\frac{M(M-1)(M-2)!}{(k-2)!(M-k)!}\binom{N-M}{n-k}\frac{n!(N-n)!}{N!}+\sum\limits_{k=0}^{\min\{n,M\}}k\frac{M!}{k!(M-k)!}\binom{N-M}{n-k}\frac{n!(N-n)!}{N!} \\&=M(M-1)\frac{n!(N-n)!}{N!}\sum\limits_{k=2}^{\min\{n,M\}}\binom{M-2}{k-2}\binom{N-M}{n-k}+E(X)\\&=M(M-1)\frac{n!(N-n)!}{N!}\binom{N-2}{n-2}+n\frac{M}{N} \qquad \Big(\text{Vandermonde's identity: } \binom{m+n}{k} = \sum\limits_{i=0}^{k}\binom{m}{i}\binom{n}{k-i}\Big)\\&=M(M-1)\frac{n(n-1)}{N(N-1)}+n\frac{M}{N} \\&=n\frac{M}{N}\bigg[\frac{(M-1)(n-1)}{N-1}+1\bigg] \end{aligned}$$

    $$\begin{aligned} \therefore D(X) &= E(X^2)-[E(X)]^2 = n\frac{M}{N}\bigg[\frac{(M-1)(n-1)}{N-1}+1\bigg]-\Big(n\frac{M}{N}\Big)^2 = n\frac{M}{N}\Big(\frac{Mn-M-n+1+N-1}{N-1}-n\frac{M}{N}\Big)\\ &= n\frac{M}{N}\bigg[\frac{MNn-MN-Nn+N^2-MNn+Mn}{N(N-1)}\bigg] = n\frac{M}{N}\bigg[\frac{N(N-M)-n(N-M)}{N(N-1)}\bigg]\\&= n\frac{M}{N}\bigg[\frac{(N-M)(N-n)}{N(N-1)}\bigg] = n\frac{M}{N}\Big(1-\frac{M}{N}\Big)\Big(\frac{N-n}{N-1}\Big). \end{aligned}$$
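This closed form can be compared against SciPy's hypergeometric distribution (a sketch; note that `scipy.stats.hypergeom(M, n, N)` takes the population size, the number of marked items, and the number of draws, which correspond to $N$, $M$, $n$ in the notation used here).

```python
from scipy.stats import hypergeom

N_pop, M_marked, n_draws = 50, 20, 10   # N, M, n in the text's notation

closed_form = n_draws * (M_marked / N_pop) * (1 - M_marked / N_pop) \
              * (N_pop - n_draws) / (N_pop - 1)
print(hypergeom(N_pop, M_marked, n_draws).var(), closed_form)   # both ≈ 1.9592
```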

3.6 The Uniform Distribution

  • Let $X\sim U(a,b)$, with probability density $f(x)=\begin{cases} \frac{1}{b-a}, & a<x<b, \\ 0, & \text{otherwise}. \end{cases}$ Then $D(X)=\frac{(b-a)^2}{12}$.

    Proof:

    $E(X) = \frac{a+b}{2}$

    $$\begin{aligned} E(X^2) &= \int_{-\infty}^{+\infty}x^2f(x)\,dx = \int_{-\infty}^{a}x^2\cdot0\,dx+\int_{a}^{b}\frac{x^2}{b-a}\,dx+\int_{b}^{+\infty}x^2\cdot0\,dx\\&=0+\bigg(\frac{1}{3}\cdot\frac{x^3}{b-a}\bigg)\bigg|_a^b+0 =\frac{b^3-a^3}{3(b-a)} = \frac{a^2+ab+b^2}{3}\end{aligned}$$

    $\therefore D(X) = E(X^2)-[E(X)]^2 = \frac{a^2+ab+b^2}{3}-\Big(\frac{a+b}{2}\Big)^2 = \frac{(b-a)^2}{12}.$
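A symbolic verification of the uniform-distribution result (a SymPy sketch; the integration reproduces $E(X^2)$ and hence $D(X)$):

```python
import sympy as sp

x, a, b = sp.symbols('x a b', real=True)

EX2 = sp.integrate(x ** 2 / (b - a), (x, a, b))   # E(X^2) over (a, b)
var = sp.simplify(EX2 - ((a + b) / 2) ** 2)       # subtract [E(X)]^2
print(sp.simplify(var - (b - a) ** 2 / 12))       # 0
```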

3.7 The Exponential Distribution

  • Let $X\sim E(\theta)$, with probability density $f(x)=\begin{cases} \frac{1}{\theta}e^{-x/\theta}, & x>0, \\ 0, & \text{otherwise} \end{cases}\quad (\theta>0)$. Then $D(X)=\theta^2$.

    Proof:

    $E(X) = \theta$

    $$\begin{aligned} E(X^2) &= \int_{-\infty}^{+\infty}x^2f(x)\,dx = \int_{-\infty}^{0}x^2\cdot0\,dx+\int_{0}^{+\infty}\frac{x^2}{\theta}e^{-x/\theta}\,dx\\&=0+\big(-x^2e^{-x/\theta}\big)\Big|_0^{+\infty} -\int_{0}^{+\infty}-2xe^{-x/\theta}\,dx =2\int_{0}^{+\infty}xe^{-x/\theta}\,dx\quad (\text{integration by parts})\\&=2\big(-x\theta e^{-x/\theta}\big)\Big|_0^{+\infty}-2\int_{0}^{+\infty}-\theta e^{-x/\theta}\,dx = 2\theta^2\int_{0}^{+\infty}\frac{1}{\theta} e^{-x/\theta}\,dx \\&= 2\theta^2 \qquad \bigg(\text{since } \int_{0}^{+\infty}\frac{1}{\theta} e^{-x/\theta}\,dx = F(+\infty)-F(0) = 1\bigg)\end{aligned}$$

    $\therefore D(X) = E(X^2)-[E(X)]^2 = 2\theta^2 -\theta^2 = \theta^2.$
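A symbolic check of $E(X^2)=2\theta^2$, and hence $D(X)=\theta^2$ (a SymPy sketch, assuming $\theta>0$):

```python
import sympy as sp

x = sp.symbols('x', nonnegative=True)
theta = sp.symbols('theta', positive=True)

EX2 = sp.integrate(x ** 2 / theta * sp.exp(-x / theta), (x, 0, sp.oo))
print(sp.simplify(EX2 - 2 * theta ** 2))   # 0
print(sp.simplify(EX2 - theta ** 2))       # theta**2, i.e. D(X)
```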

3.8 The Normal Distribution

  • Let $X\sim N(\mu,\sigma^2)$, with probability density $f(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}},\ -\infty<x<+\infty$. Then $D(X)=\sigma^2$.

Proof:

$E(X) = \mu$

$$E(X^2) = \int_{-\infty}^{+\infty}x^2f(x)\,dx = \int_{-\infty}^{+\infty}x^2\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx$$

Let $t = \frac{x-\mu}{\sigma}$, so that $x = t\sigma+\mu$. We also know that $\int_{-\infty}^{+\infty}e^{-\frac{t^2}{2}}\,dt=\sqrt{2\pi}$ (proved in the post on continuous random variables and their common distribution functions and probability densities, so the proof is not repeated here). Then

$$\begin{aligned} E(X^2) & =\int_{-\infty}^{+\infty}(t\sigma+\mu)^2\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{t^2}{2}}\sigma\, dt \\&=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}t^2\sigma^2e^{-\frac{t^2}{2}}\,dt+\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}2t\sigma\mu e^{-\frac{t^2}{2}}\,dt+\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\mu^2e^{-\frac{t^2}{2}}\,dt \\&=\frac{\sigma^2}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}t^2e^{-\frac{t^2}{2}}\,dt+\frac{2\sigma\mu}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}te^{-\frac{t^2}{2}}\,dt+\frac{\mu^2}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}e^{-\frac{t^2}{2}}\,dt \\&= \frac{\sigma^2}{\sqrt{2\pi}}\bigg[\big(-te^{-\frac{t^2}{2}}\big)\Big|_{-\infty}^{+\infty}-\int_{-\infty}^{+\infty}-e^{-\frac{t^2}{2}}\,dt\bigg] + \frac{2\sigma\mu}{\sqrt{2\pi}}\Big[-e^{-\frac{t^2}{2}}\Big]\Big|_{-\infty}^{+\infty}+ \frac{\mu^2}{\sqrt{2\pi}}\sqrt{2\pi}\\&=\frac{\sigma^2}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}e^{-\frac{t^2}{2}}\,dt + 0 +\mu^2\\&=\frac{\sigma^2}{\sqrt{2\pi}}\sqrt{2\pi}+\mu^2\\&=\sigma^2+\mu^2\end{aligned}$$

$\therefore D(X) = E(X^2)-[E(X)]^2 = \sigma^2+\mu^2 -\mu^2 = \sigma^2.$

Proof (method 2):

Standardize the random variable $X$: let $Z = \frac{X-\mu}{\sigma}$, so that $Z$ has probability density $f(z)=\frac{1}{\sqrt{2\pi}}e^{-\frac{z^2}{2}}$. Then

$E(Z)=0$

$$\begin{aligned} E(Z^2) &= \int_{-\infty}^{+\infty}z^2f(z)\,dz = \int_{-\infty}^{+\infty}z^2\frac{1}{\sqrt{2\pi}}e^{-\frac{z^2}{2}}\,dz \\&= \frac{1}{\sqrt{2\pi}}\bigg[\big(-ze^{-\frac{z^2}{2}}\big)\Big|_{-\infty}^{+\infty}-\int_{-\infty}^{+\infty}-e^{-\frac{z^2}{2}}\,dz\bigg]\\&=\frac{1}{\sqrt{2\pi}}\sqrt{2\pi} \\&= 1 \end{aligned}$$

$$\begin{aligned}&\therefore D(Z) = E(Z^2)-[E(Z)]^2 = 1 - 0^2 = 1, \text{ that is,}\\&D\Big(\frac{X-\mu}{\sigma}\Big) = \frac{1}{\sigma^2}D(X)=1\\&\therefore D(X)= \sigma^2\end{aligned}$$
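Both proofs can be cross-checked symbolically; the SymPy sketch below integrates $x^2 f(x)$ directly and recovers $E(X^2)=\sigma^2+\mu^2$, so that $D(X)=\sigma^2$.

```python
import sympy as sp

x, mu = sp.symbols('x mu', real=True)
sigma = sp.symbols('sigma', positive=True)

pdf = 1 / (sp.sqrt(2 * sp.pi) * sigma) * sp.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
EX2 = sp.integrate(x ** 2 * pdf, (x, -sp.oo, sp.oo))

print(sp.simplify(EX2 - (sigma ** 2 + mu ** 2)))   # 0, hence D(X) = sigma^2
```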

3.9 Summary Table of Expectations and Variances of Common Distributions

| Distribution | Parameters | Distribution law or probability density | Expectation | Variance |
| --- | --- | --- | --- | --- |
| $(0-1)$ distribution | $0<p<1$ | $P\{X=k\} = p^k(1-p)^{1-k},\ k=0,1$ | $p$ | $p(1-p)$ |
| Binomial $b(n,p)$ | $n\geq1,\ 0<p<1$ | $P\{X=k\} = \binom{n}{k}p^k(1-p)^{n-k},\ k=0,1,\cdots,n$ | $np$ | $np(1-p)$ |
| Poisson $\pi(\lambda)$ | $\lambda>0$ | $P\{X=k\} = \frac{\lambda^k}{k!}e^{-\lambda},\ k=0,1,2,\cdots$ | $\lambda$ | $\lambda$ |
| Geometric $G(p)$ | $0<p<1$ | $P\{X=k\} = (1-p)^{k-1}p,\ k=1,2,\cdots$ | $\frac{1}{p}$ | $\frac{1-p}{p^2}$ |
| Hypergeometric $H(n,M,N)$ | $n,M,N$ with $M\leq N,\ n\leq N$ | $P\{X=k\} = \frac{\binom{M}{k}\binom{N-M}{n-k}}{\binom{N}{n}},\ k=0,1,\cdots,\min\{n,M\}$ | $n\frac{M}{N}$ | $n\frac{M}{N}\big(1-\frac{M}{N}\big)\frac{N-n}{N-1}$ |
| Uniform $U(a,b)$ | $a<b$ | $f(x)=\frac{1}{b-a},\ a<x<b$ | $\frac{a+b}{2}$ | $\frac{(b-a)^2}{12}$ |
| Exponential $E(\theta)$ | $\theta>0$ | $f(x)=\frac{1}{\theta}e^{-x/\theta},\ x>0$ | $\theta$ | $\theta^2$ |
| Normal $N(\mu,\sigma^2)$ | $\mu,\ \sigma>0$ | $f(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}},\ -\infty<x<+\infty$ | $\mu$ | $\sigma^2$ |