First, a word on why I'm writing this. I never studied Python systematically; school only taught me a bit of C. I picked up some Python a long time ago, but only a little, which is about the same as never having learned it.
Looking honestly at my programming skills, they are really weak, and I need to focus on mastering one language. So I'm collecting the scattered bits I learned before in one place, for easy reference.
Note: all code below targets Python 3.7.
In Python, we usually create a socket with the socket() function. The syntax is as follows:
socket.socket([family[, type[, proto]]])
Parameters:

- family: the address family, either AF_UNIX or AF_INET
- type: the socket type, SOCK_STREAM for connection-oriented sockets or SOCK_DGRAM for connectionless ones
- proto: the protocol number, usually omitted and defaulting to 0
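To make the family/type distinction concrete, here is a minimal sketch (standard library only, nothing assumed beyond the parameters above) creating one socket of each common type:

import socket

# connection-oriented TCP socket (also the default: AF_INET + SOCK_STREAM)
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# connectionless UDP socket
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

print(tcp_sock.family, tcp_sock.type)
tcp_sock.close()
udp_sock.close()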
Usage example: probing a fixed IP and port
import socket

s = socket.socket()  # create a TCP socket
try:
    s.connect(('43.225.100.88', 22))
    message = 'hello world!\n'.encode()
    s.send(message)
    banner = s.recv(1024)  # read the service banner
    print(banner)
except socket.error:
    pass
finally:
    s.close()
Scanning a port range on a host given on the command line:

import socket
import sys

ip = sys.argv[1]
message = 'hello world!\n'.encode()

for port in range(20, 40):
    # a socket cannot be reused after connect(), so create a fresh one per port
    s = socket.socket()
    s.settimeout(2)  # avoid long hangs on filtered ports
    try:
        print("[+] Attempting to connect to: " + ip + ":" + str(port) + "...")
        s.connect((ip, port))
        s.send(message)
        banner = s.recv(1024)
        if banner:
            print("[-] Port " + str(port) + " is open: ", end="")
            print(banner)
    except socket.error:
        pass
    finally:
        s.close()
Scanning a fixed list of hosts and ports:

import socket

hosts = ['127.0.0.1', '192.168.1.5', '10.0.0.1']
ports = [22, 445, 80, 443, 3389]
message = 'hello world\n'.encode()

for host in hosts:
    for port in ports:
        # again, one socket per connection attempt
        s = socket.socket()
        s.settimeout(2)
        try:
            print("[+] Connecting to " + host + ":" + str(port))
            s.connect((host, port))
            s.send(message)
            banner = s.recv(1024)
            if banner:
                print("[-] Port " + str(port) + " is open: ", end="")
                print(banner)
        except socket.error:
            pass
        finally:
            s.close()
Scanning part of a /24 range and dispatching to weak-password checks when a service is open (port 21 is added to the list so the FTP branch can actually trigger):

import socket
import re

host = '192.168.83.130/24'
# ports to scan
ports = ['21', '22', '1433', '3306']
# 2-second connect timeout
socket.setdefaulttimeout(2)

def scan(host):
    for port in ports:
        try:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.connect((host, int(port)))
            print("[*] %s port %s is open" % (host, port))
            s.close()
            if port == '21':
                # if FTP is open, try weak FTP credentials
                print("[*] Try to crack FTP pass")
                __import__('ftp_check').check(host)
            elif port == '3306':
                # if MySQL is open, try weak MySQL credentials
                print("[*] Try to crack MySQL pass")
                __import__('mysql_check').check(host)
        except Exception:
            continue

# If the input contains /24, scan (part of) the C segment; otherwise scan a single IP
if '/24' in host:
    print(host)
    for x in range(130, 140):  # not the full C segment; only 10 IPs to save time
        ip = re.sub(r'\.\d+/24', '.' + str(x), host)  # substitute the last octet
        print(ip)
        scan(ip)
else:
    scan(host)
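The ftp_check and mysql_check modules are the author's own helpers and are not shown in the post. Purely as an illustration, a minimal ftp_check.py could be sketched on top of the standard library's ftplib like this (the credential list is invented for the example):

import ftplib

# hypothetical weak-credential list, for illustration only
CREDS = [('anonymous', ''), ('admin', 'admin'), ('ftp', 'ftp')]

def check(host):
    for user, password in CREDS:
        try:
            ftp = ftplib.FTP(host, timeout=2)
            ftp.login(user, password)
            print("[+] FTP weak credentials found: %s/%s" % (user, password))
            ftp.quit()
            return True
        except ftplib.all_errors:
            continue
    return False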
The __import__() function:

__import__('module')

is roughly equivalent to import module, except that it takes the module name as a string and returns the module object instead of binding a name. That is why the scanner above can decide at runtime which checker module to load.
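A quick illustration with a standard-library module:

math_mod = __import__('math')  # same module object that `import math` would bind
print(math_mod.sqrt(16))       # 4.0

import math                    # the equivalent static spelling
print(math.sqrt(16))           # 4.0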
Moving on to urllib: urlopen() issues a request and returns an HTTPResponse object whose methods and attributes expose the response:

import urllib.request

response = urllib.request.urlopen('https://www.cnblogs.com')  # request the site; returns an HTTPResponse object
print(response.read().decode('utf-8'))  # the page body
print(response.getheader('server'))     # value of the Server response header
print(response.getheaders())            # response headers as a list of (name, value) tuples
print(response.fileno())                # the underlying file descriptor
print(response.version)                 # HTTP version
print(response.status)                  # status code: 200 OK, 404 page not found, etc.
print(response.debuglevel)              # debug level
print(response.closed)                  # whether the object is closed (bool)
print(response.geturl())                # the URL that was retrieved
print(response.info())                  # the page's header info
print(response.getcode())               # the response's HTTP status code
print(response.msg)                     # 'OK' on success
print(response.reason)                  # the status reason phrase
Parameters of urlopen():

- url: the address to request, a str (or a Request object)
- data: optional; must be bytes. If data is supplied, urlopen sends the request as a POST
- timeout: timeout in seconds; if no response arrives within it, an exception is raised. Works for HTTP, HTTPS and FTP requests
- context: must be an ssl.SSLContext instance specifying SSL settings; the related cafile and capath parameters name a CA certificate file and its directory, used for HTTPS connections
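A small sketch exercising the data and timeout parameters against httpbin.org, the echo service used again further below (passing data implies a POST):

import urllib.error
import urllib.parse
import urllib.request

data = urllib.parse.urlencode({'name': 'germey'}).encode('utf-8')
try:
    # data switches urlopen to POST; timeout is in seconds
    response = urllib.request.urlopen('http://httpbin.org/post', data=data, timeout=5)
    print(response.read().decode('utf-8'))
except urllib.error.URLError as e:
    print('request failed:', e.reason)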
Parameters of Request():

- url: the URL to request; the only required parameter
- data: the payload to upload; must be bytes. If you start from a dict, encode it first with urlencode() from urllib.parse
- headers: a dict of request headers; pass them here, or add them later with the request instance's add_header() method
- origin_req_host: the host name or IP address of the requesting side
- unverifiable: whether the request is unverifiable, False by default; e.g. when requesting an image we have no permission to fetch, the value is True
- method: a string naming the HTTP method to use, such as GET, POST or PUT
Constructing a Request object to set headers, data and method:

from urllib import request, parse

url = 'http://httpbin.org/post'
headers = {
    'User-Agent': 'Mozilla/5.0 (compatible; MSIE 5.5; Windows NT)',
    'Host': 'httpbin.org'
}  # request headers
dict_data = {'name': 'germey'}
# data must be bytes: urlencode() turns the dict into a query string, bytes() encodes it
data = bytes(parse.urlencode(dict_data), encoding='utf-8')
req = request.Request(url=url, data=data, headers=headers, method='POST')
# headers can also be added via the request instance:
# req.add_header('User-Agent', 'Mozilla/5.0 (compatible; MSIE 8.4; Windows NT)')
response = request.urlopen(req)
print(response.read())
A GET request, building the query string by hand:

from urllib import request, parse

values = {"id": "2"}
params = "?"  # the query string for the GET request
for key in values:
    # note: works for a single parameter; multiple keys would need '&' separators
    params = params + key + "=" + values[key]
url = "http://43.247.91.228:84/Less-1/index.php"
headers = {
    # copied straight from the Request Headers panel in Chrome's dev tools
    # 'Accept': 'application/json, text/plain, */*',
    # 'Accept-Encoding': 'gzip, deflate',
    # 'Accept-Language': 'zh-CN,zh;q=0.8',
    # 'Connection': 'keep-alive',
    # 'Content-Type': 'application/x-www-form-urlencoded',
    'Referer': url,
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36'
}
data = bytes(parse.urlencode(values), encoding='utf-8')
# req = request.Request(url=url, headers=headers, data=data)  # POST version
req = request.Request(url + params)  # GET version
response = request.urlopen(req)
print(response.read().decode('utf-8'))
Handling HTTP errors with urllib.error:

from urllib import parse, request
import urllib.error

url = 'http://43.247.91.228:80/Less-1/index.php'
headers = {
    'User-Agent': 'Mozilla/5.0 (compatible; MSIE 5.5; Windows NT)',
    'Host': '43.247.91.228:80'
}  # request headers
data = {'id': '1'}
params = '?'
for key in data:
    params = params + key + "=" + data[key]
data = bytes(parse.urlencode(data), encoding='utf-8')
req = urllib.request.Request(url + params)
try:
    response = urllib.request.urlopen(req)
    print(response.read().decode('utf-8'))
except urllib.error.HTTPError as e:
    print(e.code)
    print(e.read().decode("utf-8"))
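HTTPError only covers the case where the server answered with an error status. Connection-level failures (DNS errors, refused connections, timeouts) raise its parent class URLError, so catch the more specific HTTPError first. A short sketch handling both:

import urllib.error
import urllib.request

try:
    response = urllib.request.urlopen('http://43.247.91.228:80/Less-1/index.php?id=1', timeout=5)
    print(response.status)
except urllib.error.HTTPError as e:   # the server replied with an error status
    print('HTTP error:', e.code)
except urllib.error.URLError as e:    # no usable response at all
    print('connection failed:', e.reason)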
Sending requests through an HTTP proxy:

import urllib.request

proxy_support = urllib.request.ProxyHandler({"http": "39.80.118.178:8060"})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)  # every urlopen() call now goes through the proxy
a = urllib.request.urlopen("http://43.247.91.228:84/Less-1/?id=1").read().decode("utf-8")
print(a)
The requests library covers the same ground with less code. A basic GET and the common response attributes:

import requests

r = requests.get('http://43.247.91.228:84/Less-1/?id=1')
print(r.headers)      # response headers
print(r.status_code)  # status code
print(r.url)          # final URL
print(r.text)         # body decoded to str
print(r.content)      # raw body bytes
Passing GET parameters with params:

import requests

payload = {'id': 1}
r = requests.get('http://43.247.91.228:84/Less-1/', params=payload)
print(r.url)
print(r.content.decode("utf-8"))
Sending a POST with data:

import requests

payload = {'id': 1}
r = requests.post('http://43.247.91.228:84/Less-1/', data=payload)
print(r.url)
print(r.content.decode("utf-8"))
Setting custom request headers:

import requests

url = 'http://43.247.91.228:84/Less-1/?id=1'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:55.0) Gecko/20100101 Firefox/55.0'}
r = requests.get(url, headers=headers)
print(r.text)
Sending cookies parsed from a raw Cookie header string:

import requests

raw_cookies = "PHPSESSID=d7kkojg82otnh9c53ao1m87pq3; security=low"
cookies = {}
for line in raw_cookies.split(';'):
    key, value = line.strip().split('=', 1)  # strip the space left over after splitting on ';'
    cookies[key] = value
testurl = 'http://43.247.91.228:81/'
s = requests.get(testurl, cookies=cookies)
print(s.text)
POSTing a login form:

import requests

data = {'username': 'admin', 'password': 'password', 'Login': 'Login'}
r = requests.post('http://43.247.91.228:81/login.php', data=data)
print(r.url)
print(r.content.decode("utf-8"))
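Instead of logging in and then re-sending cookies by hand as above, requests.Session persists cookies across requests automatically. A minimal sketch against the same target (same URL and credentials as the login example):

import requests

s = requests.Session()  # a Session keeps cookies between requests
data = {'username': 'admin', 'password': 'password', 'Login': 'Login'}
s.post('http://43.247.91.228:81/login.php', data=data)
# the session cookie set by login.php is sent automatically on later requests
r = s.get('http://43.247.91.228:81/')
print(r.text)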
result = requests.get('https://www.v2ex.com', verify=False)

verify=False tells requests to skip SSL certificate verification; without it, the request fails with an error when the certificate cannot be validated.
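Note that with verify=False requests still emits an InsecureRequestWarning on every call; it can be silenced through urllib3, which requests uses underneath:

import requests
import urllib3

# suppress the InsecureRequestWarning triggered by verify=False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

result = requests.get('https://www.v2ex.com', verify=False)
print(result.status_code)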