Spark Data Mining: Network Traffic Anomaly Detection with K-means Clustering (1): Data Exploration and a First Model

1 Introduction

Classification and regression are powerful, easy-to-learn machine learning techniques. Note, however, that to predict unknown values for new samples they must first learn from a large number of samples whose target values are already known; such techniques are collectively called supervised learning. This post instead focuses on an unsupervised learning algorithm: K-means clustering. In practice you will often face the situation where the data at hand has no known target values at all. For example: segment the users of an e-commerce site based on data about their habits and tastes. The input is features of their purchases, clicks, profile pictures, browsing, orders and so on; the output is a group label for each user. Perhaps one group represents fashion-conscious users, while another prefers cheap goods. Don't worry: problems like this can be solved with unsupervised learning. These techniques do not predict a target value by learning from known target values, because there are none. Instead they learn the structure of the data, discovering which samples are similar to one another, or which inputs are likely to occur and which are not. The problem above will be solved step by step, hands-on, with Spark.

2 Anomaly Detection

Anomaly detection, as the name suggests, aims to find unusual things. If you already have a data set labeled as anomalous, plenty of supervised learning algorithms can detect anomalies easily: they learn from data labeled "anomalous" and "normal" how to tell the two apart. In the real world, however, anomalies are precisely the things people do not yet know about. Put differently, once you have observed or understood what an anomaly looks like, it is no longer an anomaly.
Important applications of anomaly detection include fraud detection, network attack detection, and mechanical failures in servers or other sensor-equipped equipment. In these examples, a key problem is that new kinds of anomalies have never appeared before: new fraud, new attacks, new causes of service failure.
Unsupervised learning helps in these cases because it can learn from the structure of the data what normal data looks like, so when the model encounters data unlike the normal data seen before, it flags an anomaly.

3 K-means Clustering

Algorithm name: K-means clustering (K-means Clustering, KMC)
Algorithm description: K-means is one of the best-known unsupervised algorithms. Its goal is to partition the data into groups, maximizing the distance between clusters while minimizing the distance within each cluster. The key parameter K, the number of clusters, matters greatly and is hard to choose. The other choice is the distance measure; Spark currently supports only Euclidean distance.
Algorithm procedure: 1. Randomly pick K points as the initial cluster centers. 2. Compute each point's distance to every center and assign the point to the nearest cluster. 3. Recompute each cluster's center. 4. Repeat steps 2 and 3 until the iteration count reaches the given limit or the centers move less than a given threshold, then stop.
Use case: unsupervised clustering
Pros and cons: Pros: 1. easy to implement. Cons: 1. may converge to a local minimum; 2. slow to converge on large data sets; 3. suited only to numeric data (categorical variables need encoding and an appropriate distance measure); 4. results are hard to evaluate.
References: 1. Algorithm: Machine Learning in Action. 2. Implementation: MLlib - K-means Clustering.
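The four steps above can be sketched in plain Scala (a minimal illustrative implementation with Euclidean distance, not the MLlib code used later in this post):

```scala
import scala.util.Random

// squared Euclidean distance between two points
def sqDist(a: Array[Double], b: Array[Double]): Double =
  a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

// Lloyd's algorithm: steps 1-4 from the description above
def kmeans(points: Seq[Array[Double]], k: Int, maxIter: Int,
           seed: Long = 42L): Seq[Array[Double]] = {
  // 1. randomly pick K points as the initial centers
  var centers: Seq[Array[Double]] = new Random(seed).shuffle(points).take(k)
  for (_ <- 1 to maxIter) {
    // 2. assign every point to the index of its nearest center
    val assign = points.groupBy(p => centers.indices.minBy(i => sqDist(p, centers(i))))
    // 3. recompute each center as the mean of its assigned points
    centers = centers.indices.map { i =>
      assign.get(i).map { ps =>
        ps.reduce((u, v) => u.zip(v).map { case (x, y) => x + y }).map(_ / ps.size)
      }.getOrElse(centers(i))
    }
    // 4. stop after maxIter iterations (center-movement check omitted for brevity)
  }
  centers
}
```

With k = 2 on two well-separated groups of 2-D points, the centers converge to the two group means after a few iterations.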

4 Network Attacks

Network attacks are everywhere these days. Some try to flood a server with so much traffic that it can no longer handle legitimate requests. Others try to exploit vulnerabilities in network software to gain access to the server. The first kind is fairly obvious and easy to detect and handle, but spotting an exploit among all those requests is like finding a needle in a haystack.
Some exploit attempts follow fixed patterns. For example, probing every port on a machine, something normal software generally never does, is typically an attacker's first step: its purpose is to discover which applications on the server might be vulnerable.
If you count how many different ports the same source accesses within a short time, that count makes an excellent feature for detecting port-scan attacks. Likewise, other features can be used to detect other kinds of attacks: bytes sent and received, TCP error types, and so on.
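The port-count feature just described could be computed like this (a hypothetical sketch: the record fields and the window length are assumptions for illustration, not part of the KDD data set):

```scala
// hypothetical connection record: timestamp in seconds, source host, destination port
case class Conn(ts: Long, src: String, port: Int)

// For each source host, the maximum number of distinct destination ports
// it touched inside any window of `windowSec` seconds.
// A large value is a strong port-scan signal.
def maxDistinctPorts(conns: Seq[Conn], windowSec: Long): Map[String, Int] =
  conns.groupBy(_.src).map { case (src, cs) =>
    val sorted = cs.sortBy(_.ts)
    // for each connection, count distinct ports in the window starting there
    val best = sorted.indices.map { i =>
      val t0 = sorted(i).ts
      sorted.drop(i).takeWhile(_.ts < t0 + windowSec).map(_.port).distinct.size
    }.max
    src -> best
  }
```

A scanner hitting ports 22, 80 and 443 within a few seconds scores 3, while a normal client repeatedly using one port scores 1.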
But what do you do when the attack you face has never been seen before? What if you had never encountered a port scan and had never thought of the port-count feature? The greatest threat is exactly those attacks that have never been discovered and classified.
K-means can be used to detect such unknown attacks, but only if fairly general features are defined first, which is something of a paradox: how do you come up with the right features before you know the attack exists? This is where the human brain comes in: experience, insight, logical reasoning. There is no standard recipe, and that complex question is beyond the scope of this post. Assume the features have all been worked out and the data collected; on to the next step.

5 A First Model on the KDD Cup 1999 Data Set

KDD Cup is an annual data mining competition run by a special interest group of the ACM. Each year the organizers pose a machine learning problem together with a data set, and the researchers who solve it best are invited to publish detailed papers. In 1999 the problem posed was network intrusion detection, and the data set is still available. Below, Spark is used to learn from this data set how to detect network anomalies. Note: this really is only an example; more than a decade on, the situation has long since changed.
Fortunately, the organizers have already processed the raw packet captures into structured records, one per connection, saved in CSV format: 708 MB in total, 4,898,431 connection records, 41 features each, with the last column identifying the attack type or a normal connection. Note: even though the last column carries attack labels, the model will not try to learn from the known attack types, because the goal this time is to discover unknown attacks; for now, pretend no attack type is known.
First download the data set from http://bit.ly/1ALCuZN and unpack it with tar or 7-zip to obtain the main file for this study:
kddcup.data.corrected

5.1 A Quick Look at the Data

Code for the data exploration:

/**
   * Simple exploration of kddcup.data.corrected
   */
  def dataExplore(data: RDD[String]) = {
    val splitData = data.map{
      line => line.split(",")
    }
    // print the first 10 records
    splitData.take(10).foreach{
      line => println(line.mkString(","))
    }
    // count records by the last column (attack type / normal), descending
    val labelCounts = splitData.map(_.last).countByValue().toSeq.sortBy(_._2).reverse
    labelCounts.foreach(println)
  }

Results of the exploration:

# the first 10 sample records
0,tcp,http,SF,215,45076,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,0,0,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,normal.
0,tcp,http,SF,162,4528,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,1,1,1.00,0.00,1.00,0.00,0.00,0.00,0.00,0.00,normal.
0,tcp,http,SF,236,1228,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,1,0.00,0.00,0.00,0.00,1.00,0.00,0.00,2,2,1.00,0.00,0.50,0.00,0.00,0.00,0.00,0.00,normal.
0,tcp,http,SF,233,2032,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,2,2,0.00,0.00,0.00,0.00,1.00,0.00,0.00,3,3,1.00,0.00,0.33,0.00,0.00,0.00,0.00,0.00,normal.
0,tcp,http,SF,239,486,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,3,3,0.00,0.00,0.00,0.00,1.00,0.00,0.00,4,4,1.00,0.00,0.25,0.00,0.00,0.00,0.00,0.00,normal.
0,tcp,http,SF,238,1282,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,4,4,0.00,0.00,0.00,0.00,1.00,0.00,0.00,5,5,1.00,0.00,0.20,0.00,0.00,0.00,0.00,0.00,normal.
0,tcp,http,SF,235,1337,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,5,5,0.00,0.00,0.00,0.00,1.00,0.00,0.00,6,6,1.00,0.00,0.17,0.00,0.00,0.00,0.00,0.00,normal.
0,tcp,http,SF,234,1364,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,6,6,0.00,0.00,0.00,0.00,1.00,0.00,0.00,7,7,1.00,0.00,0.14,0.00,0.00,0.00,0.00,0.00,normal.
0,tcp,http,SF,239,1295,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,7,7,0.00,0.00,0.00,0.00,1.00,0.00,0.00,8,8,1.00,0.00,0.12,0.00,0.00,0.00,0.00,0.00,normal.
0,tcp,http,SF,181,5450,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,8,8,0.00,0.00,0.00,0.00,1.00,0.00,0.00,9,9,1.00,0.00,0.11,0.00,0.00,0.00,0.00,0.00,normal.
# record count for each attack type
(smurf.,2807886)
(neptune.,1072017)
(normal.,972781)
(satan.,15892)
(ipsweep.,12481)
(portsweep.,10413)
(nmap.,2316)
(back.,2203)
(warezclient.,1020)
(teardrop.,979)
(pod.,264)
(guess_passwd.,53)
(buffer_overflow.,30)
(land.,21)
(warezmaster.,20)
(imap.,12)
(rootkit.,10)
(loadmodule.,9)
(ftp_write.,8)
(multihop.,7)
(phf.,4)
(perl.,3)
(spy.,2)

The meaning and type of each column in the sample data are listed below (continuous = numeric, symbolic = nominal):

duration: continuous.
protocol_type: symbolic.
service: symbolic.
flag: symbolic.
src_bytes: continuous.
dst_bytes: continuous.
land: symbolic.
wrong_fragment: continuous.
urgent: continuous.
hot: continuous.
num_failed_logins: continuous.
logged_in: symbolic.
num_compromised: continuous.
root_shell: continuous.
su_attempted: continuous.
num_root: continuous.
num_file_creations: continuous.
num_shells: continuous.
num_access_files: continuous.
num_outbound_cmds: continuous.
is_host_login: symbolic.
is_guest_login: symbolic.
count: continuous.
srv_count: continuous.
serror_rate: continuous.
srv_serror_rate: continuous.
rerror_rate: continuous.
srv_rerror_rate: continuous.
same_srv_rate: continuous.
diff_srv_rate: continuous.
srv_diff_host_rate: continuous.
dst_host_count: continuous.
dst_host_srv_count: continuous.
dst_host_same_srv_rate: continuous.
dst_host_diff_srv_rate: continuous.
dst_host_same_src_port_rate: continuous.
dst_host_srv_diff_host_rate: continuous.
dst_host_serror_rate: continuous.
dst_host_srv_serror_rate: continuous.
dst_host_rerror_rate: continuous.
dst_host_srv_rerror_rate: continuous.
back,buffer_overflow,ftp_write,guess_passwd,imap,ipsweep,land,loadmodule,multihop,neptune,nmap,normal,perl,phf,pod,portsweep,rootkit,satan,smurf,spy,teardrop,warezclient,warezmaster.

5.2 A First Clustering Run

Notice first that the data contains non-numeric features, that is, categorical variables with no ordering (for example the second column only takes the values tcp, udp or icmp), but K-means requires numeric variables. For now, simply drop those columns: columns 2, 3, 4 and the last one. The data preparation code:

/**
   * Drop the categorical variables from the raw data and turn each
   * record into a labeled vector of doubles
   * @param data the raw data set
   * @return RDD of (label, feature vector) pairs
   */
  def dataPrepare(data: RDD[String]) = {
      val labelsAndData = data.map {
        line =>
          val buffer = line.split(",").toBuffer
          // remove three fields starting at index 1 (i.e. columns 2, 3 and 4)
          buffer.remove(1, 3)
          // the last field is the label
          val label = buffer.remove(buffer.length - 1)
          val vector = Vectors.dense(buffer.map(_.toDouble).toArray)
          (label, vector)
      }
    labelsAndData
  }
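As an aside, the dropped categorical columns need not be discarded forever: they could be one-hot encoded into 0/1 columns instead (a sketch assuming the category values are known up front, e.g. protocol_type takes tcp, udp and icmp):

```scala
// map a categorical value to a 0/1 indicator vector over the known categories
def oneHot(value: String, categories: Seq[String]): Array[Double] =
  categories.map(c => if (c == value) 1.0 else 0.0).toArray
```

In Spark the category list could be collected first, for example with data.map(_.split(",")(1)).distinct().collect(), and then applied inside dataPrepare; this post keeps things simple and just drops the columns.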

Note that only the values of the Tuple2 pairs produced above are used to train the KMeansModel. Also note: no attention is paid to parameters or model quality yet; the point for now is simply to get the whole training run end to end.

//local test
  def main(args: Array[String]) {
    val rootDir = "please set your path"
    val conf = new SparkConf().setAppName("SparkInAction").setMaster("local[4]")
    val sc = new SparkContext(conf)
    val kddcupData = sc.textFile(rootDir + "/kddcup.data.corrected")
    //initial data exploration
    dataExplore(kddcupData)
    val data = dataPrepare(kddcupData).values
    data.cache()
    //parameters chosen arbitrarily: two clusters, 50 iterations
    val numClusters = 2
    val numIterations = 50
    val clusters = KMeans.train(data, numClusters, numIterations)
    //print each cluster center found by the model
    clusters.clusterCenters.foreach(println)
    //evaluate the model with the within-set sum of squared errors
    val wssse = clusters.computeCost(data)
    println("Within Set Sum of Squared Errors = " + wssse)
  }

As shown below, k-means fit two clusters; the cluster centers and the sum of squared errors are printed:

[48.34019491959669,1834.6215497618625,826.2031900016945,5.7161172049003456E-6,6.487793027561892E-4,7.961734678254053E-6,0.012437658596734055,3.205108575604837E-5,0.14352904910348827,0.00808830584493399,6.818511237273984E-5,3.6746467745787934E-5,0.012934960793560386,0.0011887482315762398,7.430952366370449E-5,0.0010211435092468404,0.0,4.082940860643104E-7,8.351655530445469E-4,334.9735084506668,295.26714620807076,0.17797031701994304,0.17803698940272675,0.05766489875327384,0.05772990937912762,0.7898841322627527,0.021179610609915762,0.02826081009629794,232.98107822302248,189.21428335201279,0.753713389800417,0.030710978823818437,0.6050519309247937,0.006464107887632785,0.1780911843182427,0.17788589813471198,0.05792761150001037,0.05765922142400437]

[10999.0,0.0,1.309937401E9,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,0.0,0.0,1.0,1.0,1.0,0.0,0.0,255.0,1.0,0.0,0.65,1.0,0.0,0.0,0.0,1.0,1.0]

# the error
Within Set Sum of Squared Errors = 4.6634585670252554E18
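For reference, the quantity computeCost returns is the within-set sum of squared errors: for every point, the squared Euclidean distance to its nearest center, summed over all points. A plain-Scala sketch of the same quantity:

```scala
// sum over all points of the squared Euclidean distance to the nearest center
def wssse(points: Seq[Array[Double]], centers: Seq[Array[Double]]): Double =
  points.map { p =>
    centers.map(c => p.zip(c).map { case (x, y) => (x - y) * (x - y) }.sum).min
  }.sum
```

A smaller value means the points sit closer to their centers, which is why this metric is used to compare candidate values of K in later experiments.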

As expected, the result is bound to be poor: the data set has at least 23 known classes, yet only two clusters were fit. Still, it is instructive to look directly at how the true classes are distributed within each cluster the model produced (for example, within the records the model assigned to cluster 1, what does the original class distribution look like? Ideally each cluster would be as pure as possible: if everything the model puts into one cluster belongs to a single original class, or at least only to attack classes, that would be perfect). The inspection code:

// a small trick: tally over two columns at once by counting (cluster, label) pairs
  val clusterLabelCount = labelsAndData.map {
    case (label, data) =>
      val predLabel = clusters.predict(data)
      (predLabel, label)
  }.countByValue()
  clusterLabelCount.toSeq.sorted.foreach {
    case ((cluster, label), count) =>
      println(f"$cluster%1s$label%18s$count%8s")
  }

The result is, of course, far from ideal:

0             back.    2203
0  buffer_overflow.      30
0        ftp_write.       8
0     guess_passwd.      53
0             imap.      12
0          ipsweep.   12481
0             land.      21
0       loadmodule.       9
0         multihop.       7
0          neptune. 1072017
0             nmap.    2316
0           normal.  972781
0             perl.       3
0              phf.       4
0              pod.     264
0        portsweep.   10412
0          rootkit.      10
0            satan.   15892
0            smurf. 2807886
0              spy.       2
0         teardrop.     979
0      warezclient.    1020
0      warezmaster.      20
1        portsweep.       1

