Feature Selection | Logistic Regression: Further Screening Variables by Coefficient Sign and VIF (R Version)

      This article is a sequel to the previous post (Feature Selection | Logistic Regression: Further Screening Variables by Coefficient Sign and VIF) and presents the R implementation.

1. Stepwise selection: a slightly adjusted R implementation of stepwise variable selection for logistic regression:

logit_stepwise <- function(data, label, slentry, slstay){
    # author: 小石頭 (bigdata_0819@163.com)
    # data: data frame containing the independent variables and the dependent variable
    # label: name of the dependent variable
    # slentry: significance-level threshold for a variable to enter the model
    # slstay: significance-level threshold for a variable to stay in the model
    # return: lst:
    #       SelectionProcess: record of the stepwise selection process
    #       models: model coefficients at every step
    #       model_optimal: coefficients of the final model
    #       result_optimal: glm fit of the final model
    
    Selected <- c()
    Remaining <- names(data)[which(names(data)!=label)]
    NumberInModel <- 0
    SelectionProcess_list <- list()
    model_list <- list()
    result_list <- list()
    
    ### Step 0: intercept-only model
    Step <- 0
    formula <- paste0(label, ' ~ 1')
    result <- glm(formula, data=data, family=binomial)
    summ <- summary(result)
    model <- data.frame(summ$coefficients)
    model <- tibble::rownames_to_column(model, "Variable")
    rownames(model) <- NULL
    model <- cbind(Step, model)
    colnames(model) <- c('Step', 'Variable', 'Estimate', 'StdErr', 'Zvalue', 'Pvalue')
    model_list <- c(model_list, list(model))
    
    ### Stepwise selection loop
    while(length(Remaining)>0){
        ## Compute the significance of each remaining variable when added to the current model
        Remaining_Sl_list <- list()
        for(var in Remaining){
            formula <- paste0(label, ' ~ ', paste(c(Selected, var), collapse = "+"))
            result <- glm(formula, data=data, family=binomial)
            summ <- summary(result)
            coef <- data.frame(summ$coefficients)
            coef <- tibble::rownames_to_column(coef, "Variable")
            coef <- coef[coef$Variable == var, ]
            Remaining_Sl_list <- c(Remaining_Sl_list, list(coef))
        }
        Remaining_Sl <- dplyr::bind_rows(Remaining_Sl_list)
        colnames(Remaining_Sl) <- c('Variable', 'Estimate', 'StdErr', 'Zvalue', 'Pvalue')
        Entered_var <- Remaining_Sl$Variable[which.min(Remaining_Sl$Pvalue)]
        z <- Remaining_Sl$Zvalue[which.min(Remaining_Sl$Pvalue)]
        p <- min(Remaining_Sl$Pvalue)
        
        if(p>slentry){
            break
        }else{
            Step <- Step+1
            NumberInModel <- NumberInModel+1
            Selected <- c(Selected, Entered_var)
            Remaining <- Remaining[Remaining!=Entered_var]
            SelectionProcess <- data.frame(Step=Step,
                                           Entered=Entered_var,
                                           Removed='-',
                                           NumberInModel=NumberInModel,
                                           Zvalue=z,
                                           Pvalue=p,
                                           stringsAsFactors=FALSE)
            SelectionProcess_list <- c(SelectionProcess_list, list(SelectionProcess))
            
            formula <- paste0(label, ' ~ ', paste(Selected, collapse = "+"))
            result <- glm(formula, data=data, family=binomial)
            summ <- summary(result)
            model <- data.frame(summ$coefficients)
            model <- tibble::rownames_to_column(model, "Variable")
            rownames(model) <- NULL
            model <- cbind(Step, model)
            colnames(model) <- c('Step', 'Variable', 'Estimate', 'StdErr', 'Zvalue', 'Pvalue')
            model_list <- c(model_list, list(model))
            result_list <- c(result_list, list(result))
            
            ## Remove the least significant selected variable if it fails the slstay threshold
            var_Pvalue <- model[model$Variable!='(Intercept)', ]
            if(max(var_Pvalue$Pvalue)>slstay){
                Step <- Step+1
                NumberInModel <- NumberInModel-1
                Removed_var <- var_Pvalue$Variable[which.max(var_Pvalue$Pvalue)]
                z <- var_Pvalue$Zvalue[which.max(var_Pvalue$Pvalue)]
                p <- max(var_Pvalue$Pvalue)
                Selected <- Selected[Selected!=Removed_var]
                Remaining <- c(Remaining, Removed_var)
                
                SelectionProcess <- data.frame(Step=Step,
                                               Entered='-',
                                               Removed=Removed_var,
                                               NumberInModel=NumberInModel,
                                               Zvalue=z,
                                               Pvalue=p,
                                               stringsAsFactors=FALSE)
                SelectionProcess_list <- c(SelectionProcess_list, list(SelectionProcess))
                
                formula <- paste0(label, ' ~ ', paste(Selected, collapse = "+"))
                result <- glm(formula, data=data, family=binomial)
                summ <- summary(result)
                model <- data.frame(summ$coefficients)
                model <- tibble::rownames_to_column(model, "Variable")
                rownames(model) <- NULL
                model <- cbind(Step, model)
                colnames(model) <- c('Step', 'Variable', 'Estimate', 'StdErr', 'Zvalue', 'Pvalue')
                model_list <- c(model_list, list(model))
                result_list <- c(result_list, list(result))
            }
        }
    }
    
    SelectionProcess <- dplyr::bind_rows(SelectionProcess_list)
    models <- dplyr::bind_rows(model_list)
    model_optimal <- model_list[[length(model_list)]]
    result_optimal <- result_list[[length(result_list)]]
    
    model_optimal$significance[model_optimal$Pvalue<=0.001] <- '***'
    model_optimal$significance[model_optimal$Pvalue>0.001 & model_optimal$Pvalue<=0.01] <- '**'
    model_optimal$significance[model_optimal$Pvalue>0.01 & model_optimal$Pvalue<=0.05] <- '*'
    model_optimal$significance[model_optimal$Pvalue>0.05] <- ''
    
    lst <- list(SelectionProcess=SelectionProcess, models=models, model_optimal=model_optimal, result_optimal=result_optimal)
    return(lst)
}
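
A minimal sketch of calling the function on simulated data (the variable names x1, x2, x3 and the data-generating process below are illustrative assumptions, not from the original post; the tibble and dplyr packages used inside the function must be installed):

# Simulate a binary target that depends on x1 and x2 only; x3 is pure noise
set.seed(1)
n  <- 500
df <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
df$target <- rbinom(n, 1, plogis(0.5 * df$x1 + 0.8 * df$x2))

lst <- logit_stepwise(data = df, label = 'target', slentry = 0.05, slstay = 0.05)
lst$SelectionProcess    # which variable entered or left at each step
lst$model_optimal       # final coefficients with significance stars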

2. Iteratively remove variables with negative coefficients and variables with large VIF:

model_logit <- function(data, label, slentry, slstay){
    # author: 小石頭 (bigdata_0819@163.com)
    # data: data frame containing the independent variables and the dependent variable
    # label: name of the dependent variable
    # slentry: significance-level threshold for a variable to enter the model
    # slstay: significance-level threshold for a variable to stay in the model
    # return: lst:
    #       SelectionProcess: record of the stepwise selection process
    #       models: model coefficients at every step
    #       model_optimal: coefficients of the final model
    #       result_optimal: glm fit of the final model
    
    while(TRUE){
        ### Remove variables whose coefficients are negative
        while(TRUE){
            lst <- logit_stepwise(data=data, label=label, slentry=slentry, slstay=slstay)
            SelectionProcess <- lst$SelectionProcess
            models <- lst$models
            model_optimal <- lst$model_optimal
            result_optimal <- lst$result_optimal
            vars_minus <- model_optimal$Variable[model_optimal$Estimate<0]
            vars_minus <- SelectionProcess$Entered[SelectionProcess$Entered %in% vars_minus]
            if(length(vars_minus)>0){
                data <- dplyr::select(data, -dplyr::all_of(vars_minus[length(vars_minus)]))
            }else{
                break
            }
        }
        
        ### Remove variables with VIF > 10
        model_vars <- model_optimal$Variable[model_optimal$Variable!='(Intercept)']
        formula = paste0(label, ' ~ ', paste(model_vars, collapse = "+"))
        la <- lm(formula, data)
        VIF <- car::vif(la)
        if(max(VIF)>10){
            data <- dplyr::select(data, -dplyr::all_of(names(VIF)[which.max(VIF)]))
        }else{
            break
        }
    }
    
    VIF_df <- data.frame(Variable=names(VIF), VIF=VIF)
    model_optimal <- merge(model_optimal, VIF_df, by='Variable', all.x=TRUE, all.y=FALSE)
    model_optimal <- dplyr::select(model_optimal, -c('Step'))
    
    Step_df <- SelectionProcess[!duplicated(SelectionProcess$Entered, fromLast=TRUE), c('Step', 'Entered')]
    colnames(Step_df) <- c('Step', 'Variable')
    model_optimal <- merge(Step_df, model_optimal, by='Variable', all.x=FALSE, all.y=TRUE)
    model_optimal[model_optimal$Variable=='(Intercept)', 'Step'] <- 0
    model_optimal <- dplyr::arrange(model_optimal, Step)
    model_optimal$Step <- seq_len(nrow(model_optimal)) - 1
    
    lst <- list(SelectionProcess=SelectionProcess, models=models, model_optimal=model_optimal, result_optimal=result_optimal)
    return(lst)
}
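
Note the order of the two checks: when several selected variables have negative coefficients, only the most recently entered one (the last element of vars_minus after sorting by entry order) is dropped before the stepwise selection is rerun from scratch; only once no negative coefficient remains is VIF checked, and the single variable with the largest VIF is removed before the outer loop starts over. The outer loop therefore terminates only when the final model has all-positive coefficients and every VIF is at most 10.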

      Training on the same dataset, we can see that the final result is consistent with that of the Python version:

lst <- model_logit(data=data, label='target', slentry=0.05, slstay=0.05)
model_optimal <- lst$model_optimal
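
For each retained variable, model_optimal now lists its entry step, coefficient estimate, standard error, z value, p value, significance stars, and VIF.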

      The VIF values, however, differ slightly between the two versions. This is mainly because the R method used here includes an intercept in the auxiliary linear regression when computing VIF, whereas the variance_inflation_factor function in the Python module statsmodels.stats.outliers_influence fits that regression without an intercept (see the source code below). Whether or not the intercept is included has little effect on the VIF values and does not change the ranking of the variables by VIF, so it does not affect the final result.
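
The difference can be reproduced directly in R; the sketch below is illustrative only, using three columns of the built-in mtcars dataset as a stand-in for the modeling data:

# car::vif style: auxiliary regression of one predictor on the others, with an intercept
X <- mtcars[, c('disp', 'hp', 'wt')]
r2_with  <- summary(lm(disp ~ hp + wt, data = X))$r.squared
vif_with <- 1 / (1 - r2_with)

# statsmodels style: the same regression without an intercept
# (R then reports an uncentered R-squared, as statsmodels does for models without a constant)
r2_without  <- summary(lm(disp ~ hp + wt - 1, data = X))$r.squared
vif_without <- 1 / (1 - r2_without)

c(with_intercept = vif_with, without_intercept = vif_without)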

Source code of variance_inflation_factor:

##### variance_inflation_factor source code #####
"""Influence and Outlier Measures

Created on Sun Jan 29 11:16:09 2012

Author: Josef Perktold
License: BSD-3
"""

import numpy as np  # imported at module level in statsmodels; needed here for np.arange

from statsmodels.regression.linear_model import OLS
def variance_inflation_factor(exog, exog_idx):
    """variance inflation factor, VIF, for one exogenous variable

    The variance inflation factor is a measure for the increase of the
    variance of the parameter estimates if an additional variable, given by
    exog_idx is added to the linear regression. It is a measure for
    multicollinearity of the design matrix, exog.

    One recommendation is that if VIF is greater than 5, then the explanatory
    variable given by exog_idx is highly collinear with the other explanatory
    variables, and the parameter estimates will have large standard errors
    because of this.

    Parameters
    ----------
    exog : ndarray
        design matrix with all explanatory variables, as for example used in
        regression
    exog_idx : int
        index of the exogenous variable in the columns of exog

    Returns
    -------
    vif : float
        variance inflation factor

    Notes
    -----
    This function does not save the auxiliary regression.

    See Also
    --------
    xxx : class for regression diagnostics TODO: does not exist yet

    References
    ----------
    https://en.wikipedia.org/wiki/Variance_inflation_factor
    """

    k_vars = exog.shape[1]
    x_i = exog[:, exog_idx]
    mask = np.arange(k_vars) != exog_idx
    x_noti = exog[:, mask]
    r_squared_i = OLS(x_i, x_noti).fit().rsquared
    vif = 1. / (1. - r_squared_i)
    return vif


Previous articles


Feature Selection | Logistic Regression: Further Screening Variables by Coefficient Sign and VIF

WOE (Weight of Evidence): why is it calculated this way?

Converting vintage loss rates to annualized loss rates (nominal rate / internal rate of return / annualized rate and related concepts)

This article is shared from the WeChat official account 大數據建模的一點一滴 (bigdatamodeling).