I needed to analyze some nginx logs, and I didn't feel like digging into open-source tools such as Logstash, so I just wrote a script tailored to the requirement.
First, the log format. Ours is a bit different from everyone else's, so there was no way around writing something custom:
12.195.166.35 [10/May/2015:14:38:09 +0800] "list.xxxx.com" "GET /new/10:00/9.html?cat=0,0&sort=price_asc HTTP/1.0" 200 42164 "http://list.zhonghuasuan.com/new/10:00/8.html?cat=0,0&sort=price_asc" "Mozilla/5.0 (Linux; U; Android 4.4.2; zh-CN; H60-L02 Build/HDH60-L02) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/10.4.0.558 U3/0.8.0 Mobile Safari/534.30"
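As a side note, a line in this format can be captured with named groups. Here is a minimal, simplified sketch; the pattern and group names are my own choices for illustration, not the full pattern used in the script below:

```python
import re

# Sample line in the format above (user agent shortened for brevity).
LINE = (
    '12.195.166.35 [10/May/2015:14:38:09 +0800] "list.xxxx.com" '
    '"GET /new/10:00/9.html?cat=0,0&sort=price_asc HTTP/1.0" 200 42164 '
    '"http://list.zhonghuasuan.com/new/10:00/8.html?cat=0,0&sort=price_asc" '
    '"Mozilla/5.0 ..."'
)

# Simplified pattern: one named group per field we care about.
pattern = re.compile(
    r'(?P<ip>[\d.]+) \[(?P<time>[^\]]+)\] "(?P<site>[^"]+)" '
    r'"(?P<method>\S+) (?P<url>\S+) (?P<proto>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

m = pattern.match(LINE)
print(m.group('ip'), m.group('status'), m.group('site'))
```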
That is my log format.
The script:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Author: xiaoluo
# QQ: 942729042
# date: 2015:05:12
import re
import sys

log = sys.argv[1]

# Named-group fragments; each gets wrapped in (...) when the pattern is built.
ip = r"?P<ip>[\d.]*"
date = r"?P<date>\d+"
month = r"?P<month>\w+"
year = r"?P<year>\d+"
log_time = r"?P<time>\S+"
timezone = r"""?P<timezone> [^\"]* """
name = r"""?P<name>\" [^\"]*\" """
method = r"?P<method>\S+"
request = r"?P<request>\S+"
protocol = r"?P<protocol>\S+"
status = r"?P<status>\d+"
bodyBytesSent = r"?P<bodyBytesSent>\d+"
refer = r"""?P<refer>\" [^\"]*\" """
userAgent = r"""?P<userAgent> .* """

p = re.compile(
    r"(%s)\ \[(%s)/(%s)/(%s)\:(%s)\ (%s)\ (%s)\ (%s)\ (%s)\ (%s)\ (%s)\ (%s)\ (%s)\ (%s)"
    % (ip, date, month, year, log_time, timezone, name, method, request,
       protocol, status, bodyBytesSent, refer, userAgent),
    re.VERBOSE)

def getcode():
    """Tally HTTP status codes."""
    codedic = {}
    # 'with' closes the file; the original called f.close() after the
    # return statement, so it never actually ran.
    with open(log, 'r') as f:
        for logline in f:
            matchs = p.match(logline)
            if matchs is not None:
                status = matchs.groups()[10]
                codedic[status] = codedic.get(status, 0) + 1
    return codedic

def getIP():
    """Top 20 client IPs by hit count."""
    IPdic = {}
    with open(log, 'r') as f:
        for logline in f:
            matchs = p.match(logline)
            if matchs is not None:
                IP = matchs.groups()[0]
                IPdic[IP] = IPdic.get(IP, 0) + 1
    IPdic = sorted(IPdic.iteritems(), key=lambda c: c[1], reverse=True)
    return IPdic[0:20]    # the original sliced [0:21:1], i.e. 21 entries

def getURL():
    """Top 20 site names (the quoted host field) by hit count."""
    URLdic = {}
    with open(log, 'r') as f:
        for logline in f:
            matchs = p.match(logline)
            if matchs is not None:
                urlname = matchs.groups()[6]
                URLdic[urlname] = URLdic.get(urlname, 0) + 1
    URLdic = sorted(URLdic.iteritems(), key=lambda c: c[1], reverse=True)
    return URLdic[0:20]

def getpv():
    """Top 20 minutes by request count (truncates HH:MM:SS to HH:MM)."""
    pvdic = {}
    with open(log, 'r') as f:
        for logline in f:
            matchs = p.match(logline)
            if matchs is not None:
                hms = matchs.groups()[4]    # e.g. "14:38:09"
                time = hms.split(':')
                minute = time[0] + ":" + time[1]
                pvdic[minute] = pvdic.get(minute, 0) + 1
    pvdic = sorted(pvdic.iteritems(), key=lambda c: c[1], reverse=True)
    return pvdic[0:20]

if __name__ == '__main__':
    print "Status-code breakdown"
    print getcode()
    print "Top 20 IPs by request count"
    print getIP()
    print "Top 20 site names by request count"
    print getURL()
    print "Top 20 minutes by request count"
    print getpv()
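As an aside, in Python 3 the manual dict-get / sorted / slice pattern used above can be replaced with collections.Counter. A minimal sketch, with hypothetical (ip, status) records standing in for the regex match results:

```python
from collections import Counter

# Hypothetical parsed records (ip, status); in the real script these
# would come from the regex match groups.
records = [
    ('12.195.166.35', '200'),
    ('12.195.166.35', '200'),
    ('10.0.0.1', '404'),
]

ip_counts = Counter(ip for ip, _ in records)
status_counts = Counter(status for _, status in records)

# most_common(20) returns the top-20 (key, hits) pairs, sorted descending,
# replacing sorted(...)[0:20].
print(ip_counts.most_common(20))
print(status_counts)
```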
One thing worth pointing out: I originally wrapped the regex matching in a single function, so the functions below wouldn't each have to reopen the file. But that function could only return its results as a list, and the list was so large it exhausted my memory: 32 GB of RAM against a 15 GB log.
The result:
The last function counts the number of requests per minute.
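The per-minute tally amounts to truncating each timestamp to HH:MM and counting the buckets. A small sketch, with made-up time values in place of the captured time field:

```python
from collections import Counter

# Hypothetical time fields as captured from the log (HH:MM:SS).
times = ['14:38:09', '14:38:41', '14:39:02']

# rsplit(':', 1)[0] drops the seconds, leaving the "HH:MM" bucket.
per_minute = Counter(t.rsplit(':', 1)[0] for t in times)
print(per_minute)
```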