Author: 桂
Date: 2017-03-20 06:20:54
Link: http://www.cnblogs.com/xingshansi/p/6584555.html
Preface
This post is part of a series on curve fitting and distribution fitting. It summarizes the Gaussian Mixture Model (GMM), which is built on the EM algorithm (derived in an earlier post), and covers:
1) GMM background;
2) GMM theoretical derivation;
3) GMM code implementation.
Much of the content draws on the work of others; the references are listed at the end.
1. GMM Background
A - Gaussian Model 1
Given a single random signal drawn from a Gaussian distribution with mean -2 and variance 9, the distribution parameters can be estimated by maximum likelihood (MLE):
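A minimal MATLAB sketch of this step (the sample size here is an assumed choice for illustration): the MLE of a Gaussian is simply the sample mean and the $1/N$-normalized sample variance.

% Minimal sketch: ML estimation of a single Gaussian signal (mean -2, variance 9).
% The sample size N is an assumed choice for illustration.
N = 1e5;
y = sqrt(9)*randn(N,1) - 2;     % samples from a Gaussian with mean -2, variance 9
mu_hat   = mean(y);             % ML estimate of the mean
sig2_hat = var(y,1);            % ML estimate of the variance (1/N normalization)
fprintf('mu = %.3f, sigma^2 = %.3f\n', mu_hat, sig2_hat);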
B - Gaussian Model 2
For a second Gaussian model (mean 3, variance 1), MLE can likewise be used to estimate the parameters:
C - Gaussian Model 3
Now consider a random signal in which each sample comes from model 1 with probability 0.5 and from model 2 with probability 0.5; its statistics look like this:
You may already have noticed: why not simply split the signal into a front and a back segment and fit a Gaussian to each with MLE? Doing so quietly uses one property: the probability that a data point comes from model 1 or model 2 is 0.5. Once that property (i.e., which points belong to which model) is settled, the mixture model is just an ordinary MLE problem. But reality is rarely so regular: which model a given point comes from can hardly be determined by inspection. Observation $Y_1$ may come from model 1, $Y_2$ from model 2 ... the samples are interleaved.
Still tempted to split it into two segments? Fitting directly with MLE runs into the same awkwardness as the third coin toss analyzed in the earlier EM post. First, a look at the result obtained with EM:
In fact, the third coin toss is itself a mixture model: each observation is generated by coin A with probability $\pi$ and by coin B with probability $1 - \pi$, giving the mixture:
$P\left( {{Y_j}|\theta } \right) = {w_1}{P_A} + {w_2}{P_B} = \pi {P_A} + \left( {1 - \pi } \right){P_B}$
Similarly, for a mixture of two Gaussians (a continuous distribution, so probability densities are used instead of a probability mass function):
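Written out with the notation above (both components are taken to be univariate Gaussian densities, consistent with models 1 and 2):
$P\left( {{Y_j}|\Theta } \right) = {w_1}N\left( {{Y_j}|{\mu _1},\sigma _1^2} \right) + {w_2}N\left( {{Y_j}|{\mu _2},\sigma _2^2} \right),\quad {w_1} + {w_2} = 1$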
More generally, for a mixture of K Gaussians:
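That is, with $f_k(\cdot|\theta_k)$ denoting the density of the $k$-th Gaussian component:
$P\left( {{Y_j}|\Theta } \right) = \sum\limits_{k = 1}^K {{w_k}{f_k}\left( {{Y_j}|{\theta _k}} \right)} ,\quad \sum\limits_{k = 1}^K {{w_k}} = 1$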
2. GMM Theoretical Derivation
As shown above, GMM and the coin-tossing problem belong to the same class, so the EM algorithm is used, following the approach of the earlier post on the EM algorithm (模式識別(2)——EM算法).
Notation: the observed data are $Y$={$Y_1,Y_2,...,Y_N$}, and the corresponding latent variables are $Z$={$Z_1,Z_2,...,Z_N$}.
Write out the expression for the Q function of the EM algorithm:
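In its generic form this is the expected complete-data log-likelihood, taken under the posterior of the latent variables at the current estimate $\Theta^{(i)}$:
$Q\left( {\Theta ,{\Theta ^{\left( i \right)}}} \right) = \sum\limits_{j = 1}^N {\sum\limits_{{Z_j}} {\log \left( {P\left( {{Y_j},{Z_j}|\Theta } \right)} \right)P\left( {{Z_j}|{Y_j},{\Theta ^{\left( i \right)}}} \right)} }$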
E-Step:
1) Convert the incomplete data into complete data
The key quantity is $P\left( {{Z_j}|{Y_j},{\Theta ^{\left( i \right)}}} \right)$. The reasoning is the same as for the third coin toss in the EM post: it suffices to compute $P\left( {{Z_j} \in {\Upsilon _k}|{Y_j},{\Theta ^{\left( i \right)}}} \right)$, where ${{Z_j} \in {\Upsilon _k}}$ means that the $j$-th observation comes from the $k$-th component. Exactly as in the coin case, the law of total probability easily yields:
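With the current estimates $w_k^{(i)}$ and $\theta_k^{(i)}$, this is the familiar responsibility:
$P\left( {{Z_j} \in {\Upsilon _k}|{Y_j},{\Theta ^{\left( i \right)}}} \right) = \frac{{w_k^{\left( i \right)}{f_k}\left( {{Y_j}|\theta _k^{\left( i \right)}} \right)}}{{\sum\limits_{l = 1}^K {w_l^{\left( i \right)}{f_l}\left( {{Y_j}|\theta _l^{\left( i \right)}} \right)} }}$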
To keep the derivation compact, this posterior of the latent variable is left in its original form during the M-Step rather than expanded.
2) Construct the criterion function Q
Based on the Q function above, the criterion function for the mixture model is:
$Q\left( {\Theta ,{\Theta ^{\left( i \right)}}} \right) = \sum\limits_{j = 1}^N {\sum\limits_{k = 1}^K {\log \left( {{w_k}} \right)P\left( {{Z_j} \in {\Upsilon _k}|{Y_j},{\Theta ^{\left( i \right)}}} \right)} } + \sum\limits_{j = 1}^N {\sum\limits_{k = 1}^K {\log \left( {{f_k}\left( {{Y_j}|{Z_j} \in {\Upsilon _k},{\theta _k}} \right)} \right)} } P\left( {{Z_j} \in {\Upsilon _k}|{Y_j},{\Theta ^{\left( i \right)}}} \right)$
where ${{\theta _k}} = [\mu_k,\sigma_k]$ are the parameters of component $k$, $\Theta$ = {$\theta _1$,$\theta _2$,...,$\theta _K$} is the full parameter set, $N$ is the number of samples, and $K$ is the number of mixture components.
With $Q$ in hand, the parameters can be estimated by MLE on the complete data. Note that the component probabilities (the weights $w$) and the component parameters appear in separate terms, so they can be optimized separately. Since the expression is written in a generic form, this property holds for any mixture model: to handle a different mixture, simply plug the specific component density into Q.
M-Step:
1) MLE for the parameters
Since $\sum\limits_{k = 1}^K {{w_k}} = 1$, use a Lagrange multiplier:
${J_w} = \sum\limits_{j = 1}^N {\sum\limits_{k = 1}^K {\left[ {\log \left( {{w_k}} \right)P\left( {\left. {{Z_j} \in {\Upsilon _k}} \right|{Y_j},{{\bf{\Theta }}^{\left( i \right)}}} \right)} \right]} } + \lambda \left[ {\sum\limits_{k = 1}^K {{w_k}} - 1} \right]$
Taking the partial derivative with respect to $w_k$:
$\frac{{\partial {J_w}}}{{\partial {w_k}}} = \sum\limits_{j = 1}^N {\left[ {\frac{1}{{{w_k}}}P\left( {{Z_j} \in {\Upsilon _k}|{Y_j},{{\bf{\Theta }}^{\left( i \right)}}} \right)} \right] + } \lambda = 0$
which gives:
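Summing the stationarity condition over $k$ and using $\sum\nolimits_k {{w_k}} = 1$ gives $\lambda = -N$, hence the weight update:
$w_k^{\left( {i + 1} \right)} = \frac{1}{N}\sum\limits_{j = 1}^N {P\left( {{Z_j} \in {\Upsilon _k}|{Y_j},{\Theta ^{\left( i \right)}}} \right)}$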
For the component parameters, the criterion function is:
${J_\Theta } = \sum\limits_{j = 1}^N {\sum\limits_{k = 1}^K {\log \left( {{f_k}\left( {{Y_j}|{Z_j} \in {\Upsilon _k},{\theta _k}} \right)} \right)} } P\left( {{Z_j} \in {\Upsilon _k}|{Y_j},{\Theta ^{\left( i \right)}}} \right)$
For high-dimensional data, where $Y_j$ is a vector (or matrix), the Gaussian component density is:
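i.e., for a $d$-dimensional Gaussian with mean vector $\mu_k$ and covariance matrix ${\bf{\Sigma }}_k$:
${f_k}\left( {{Y_j}|{Z_j} \in {\Upsilon _k},{\theta _k}} \right) = \frac{1}{{{{\left( {2\pi } \right)}^{d/2}}{{\left| {{{\bf{\Sigma }}_k}} \right|}^{1/2}}}}\exp \left( { - \frac{1}{2}{{\left( {{Y_j} - {\mu _k}} \right)}^T}{\bf{\Sigma }}_k^{ - 1}\left( {{Y_j} - {\mu _k}} \right)} \right)$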
The parameters $\theta_k$ then follow from MLE. When differentiating with respect to ${{{\bf{\Sigma }}_k}}$, it is more convenient to differentiate with respect to ${{{\bf{\Sigma }}^{-1}_k}}$; the results are:
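Writing $\gamma _{jk} = P\left( {{Z_j} \in {\Upsilon _k}|{Y_j},{\Theta ^{\left( i \right)}}} \right)$ for brevity, these are the responsibility-weighted sample mean and covariance:
$\mu _k^{\left( {i + 1} \right)} = \frac{{\sum\limits_{j = 1}^N {{\gamma _{jk}}{Y_j}} }}{{\sum\limits_{j = 1}^N {{\gamma _{jk}}} }},\qquad {\bf{\Sigma }}_k^{\left( {i + 1} \right)} = \frac{{\sum\limits_{j = 1}^N {{\gamma _{jk}}\left( {{Y_j} - \mu _k^{\left( {i + 1} \right)}} \right){{\left( {{Y_j} - \mu _k^{\left( {i + 1} \right)}} \right)}^T}} }}{{\sum\limits_{j = 1}^N {{\gamma _{jk}}} }}$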
This completes the parameter derivation. As noted, the first half of the derivation is valid for any component distribution; only the final parameter estimates depend on its specific form.
To summarize GMM:
E-Step:
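Compute the responsibilities with the current parameters:
$\gamma _{jk}^{\left( i \right)} = \frac{{w_k^{\left( i \right)}{f_k}\left( {{Y_j}|\theta _k^{\left( i \right)}} \right)}}{{\sum\limits_{l = 1}^K {w_l^{\left( i \right)}{f_l}\left( {{Y_j}|\theta _l^{\left( i \right)}} \right)} }}$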
M-Step:
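With the responsibilities fixed, re-estimate the weights, means, and (co)variances as derived above, then iterate the two steps until convergence:
$w_k^{\left( {i + 1} \right)} = \frac{1}{N}\sum\limits_{j = 1}^N {\gamma _{jk}^{\left( i \right)}} ,\quad \mu _k^{\left( {i + 1} \right)} = \frac{{\sum\limits_j {\gamma _{jk}^{\left( i \right)}{Y_j}} }}{{\sum\limits_j {\gamma _{jk}^{\left( i \right)}} }},\quad {\bf{\Sigma }}_k^{\left( {i + 1} \right)} = \frac{{\sum\limits_j {\gamma _{jk}^{\left( i \right)}\left( {{Y_j} - \mu _k^{\left( {i + 1} \right)}} \right){{\left( {{Y_j} - \mu _k^{\left( {i + 1} \right)}} \right)}^T}} }}{{\sum\limits_j {\gamma _{jk}^{\left( i \right)}} }}$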
3. GMM Code Implementation
Subroutine code:
function [u,sig,t,iter] = fit_mix_gaussian( X,M )
%
% fit_mix_gaussian - fit parameters for a mixed-gaussian distribution using EM algorithm
%
% format:  [u,sig,t,iter] = fit_mix_gaussian( X,M )
%
% input:   X    - input samples, Nx1 vector
%          M    - number of gaussians which are assumed to compose the distribution
%
% output:  u    - fitted mean for each gaussian
%          sig  - fitted standard deviation for each gaussian
%          t    - probability of each gaussian in the complete distribution
%          iter - number of iterations done by the function
%

% initialize and initial guesses
N         = length( X );
Z         = ones(N,M) * 1/M;              % indicators vector
P         = zeros(N,M);                   % probabilities vector for each sample and each model
t         = ones(1,M) * 1/M;              % distribution of the gaussian models in the samples
u         = linspace(min(X),max(X),M);    % mean vector
sig2      = ones(1,M) * var(X) / sqrt(M); % variance vector
C         = 1/sqrt(2*pi);                 % just a constant
Ic        = ones(N,1);                    % - enable a row replication by the * operator
Ir        = ones(1,M);                    % - enable a column replication by the * operator
Q         = zeros(N,M);                   % user variable to determine when we have converged to a steady solution
thresh    = 1e-3;
step      = N;
last_step = inf;
iter      = 0;
min_iter  = 10;

% main convergence loop, assume gaussians are 1D
while ((( abs((step/last_step)-1) > thresh) & (step>(N*eps)) ) | (iter<min_iter) )

    % E step
    % ========
    Q = Z;
    P = C ./ (Ic*sqrt(sig2)) .* exp( -((X*Ir - Ic*u).^2)./(2*Ic*sig2) );
    for m = 1:M
        Z(:,m) = (P(:,m)*t(m))./(P*t(:));
    end

    % estimate convergence step size and update iteration number
    prog_text = sprintf(repmat( '\b',1,(iter>0)*12+ceil(log10(iter+1)) ));
    iter      = iter + 1;
    last_step = step * (1 + eps) + eps;
    step      = sum(sum(abs(Q-Z)));
    fprintf( '%s%d iterations\n',prog_text,iter );

    % M step
    % ========
    Zm              = sum(Z);   % sum each column
    Zm(find(Zm==0)) = eps;      % avoid division by zero
    u               = (X')*Z ./ Zm;
    sig2            = sum(((X*Ir - Ic*u).^2).*Z) ./ Zm;
    t               = Zm/N;

end

sig = sqrt( sig2 );
An example of how to use it:
clc;clear all;close all;
set(0,'defaultfigurecolor','w')
x = [1*randn(100000,1)+3;3*randn(100000,1)-5];
%fitting
x = x(:);          % should be column vectors !
N = length(x);
[u,sig,t,iter] = fit_mix_gaussian( x,2 );
sig = sig.^2;
%Plot
figure;
%Bar
subplot 221
plot(x(randperm(N)),'k');grid on;
xlim([0,N]);
subplot 222
numter = [-15:.2:10];
[histFreq, histXout] = hist(x, numter);
binWidth = histXout(2)-histXout(1);
bar(histXout, histFreq/binWidth/sum(histFreq)); hold on;grid on;
%Fitting plot
subplot 223
y = t(2)*1/sqrt(2*pi*sig(2))*exp(-(numter-u(2)).^2/2/sig(2));
plot(numter,y,'r','linewidth',2);grid on; hold on;
y = t(1)*1/sqrt(2*pi*sig(1))*exp(-(numter-u(1)).^2/2/sig(1));
plot(numter,y,'g','linewidth',2);grid on;
%Fitting result
subplot 224
bar(histXout, histFreq/binWidth/sum(histFreq)); hold on;grid on;
y = t(2)*1/sqrt(2*pi*sig(2))*exp(-(numter-u(2)).^2/2/sig(2));
plot(numter,y,'r','linewidth',2);grid on; hold on;
y = t(1)*1/sqrt(2*pi*sig(1))*exp(-(numter-u(1)).^2/2/sig(1));
plot(numter,y,'g','linewidth',2);grid on;
The result is the figure shown in the GMM background section above.
Similarly, one can refer to fitting a mixture of Laplacian distributions (LMM); the corresponding result:
References:
Li Hang, 統計學習方法 (Statistical Learning Methods).