Preface:
After a holiday spent learning about web vulnerabilities, I wanted to write a Python script of my own.
I'm still too much of a beginner to build anything as impressive as what the experts produce,
and everything I tried ended in failure.
There is another reason: during school hours I can't check the security news in time, and I'm also fairly lazy, so an email can remind me.
Modules used:
requests
BeautifulSoup (bs4)
smtplib (standard library)
email (standard library)
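Assuming an ordinary Python 3 setup, only the two third-party packages need installing (smtplib and email ship with Python):
pip install requests beautifulsoup4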
Main part:
The idea behind this script is really simple: grab the <a> tags from the freebuf front page,
extract the link from each href and the headline from each title, write them to a txt file, then read that file and send it by email.
After sending, empty the txt file, sleep for 12 hours, and wrap the whole thing in a loop. The core extraction step looks roughly like the sketch below.
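A minimal sketch of that extraction step, assuming the front-page markup still uses <a href="..." target="_blank" title="..."> for article links (the news column is used here only as an example):

import re
import requests
from bs4 import BeautifulSoup

# Fetch the front page and pull out the raw <a> tags of one column
html = requests.get("http://www.freebuf.com",
                    headers={'User-Agent': 'Mozilla/5.0'}).content.decode('utf-8')
tags = re.findall(r'<a href="http://www.freebuf.com/news/.*?\.html" '
                  r'target="_blank" title=".*?">', html)

# Let BeautifulSoup parse the matched tags and read href/title from each one
for a in BeautifulSoup(str(tags), 'html.parser').find_all('a'):
    print(a.get('href'), "Title:", a.get('title'))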
Code:
import re
import time
import smtplib

import requests
from bs4 import BeautifulSoup
from email.mime.text import MIMEText
from email.header import Header

# Columns on freebuf.com whose article links will be collected
CATEGORIES = ['sectool', 'articles/network', 'jobs', 'news', 'vuls', 'articles/system']


def freebuf():
    # Fetch the front page and append each column's links and titles to freebuf.txt
    url = "http://www.freebuf.com"
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
                             '(KHTML, like Gecko) Chrome/49.0.2623.221 Safari/537.36 '
                             'SE 2.X MetaSr 1.0'}
    res = requests.get(url, headers=headers)   # headers must be passed by keyword
    bw = res.content.decode('utf-8')

    with open('freebuf.txt', 'a', encoding='utf-8') as f:
        for category in CATEGORIES:
            # Grab the raw <a> tags of this column, then let BeautifulSoup parse them
            pattern = (r'<a href="http://www.freebuf.com/' + category +
                       r'/.*?\.html" target="_blank" title=".*?">')
            tags = re.findall(pattern, bw)
            soup = BeautifulSoup(str(tags), 'html.parser')
            for a in soup.find_all('a'):
                print(a.get('href'), "Title:", a.get('title'), file=f)


def Email():
    # Read freebuf.txt, mail its contents, then truncate the file
    pg = ''
    try:
        with open('freebuf.txt', 'r', encoding='utf-8') as lk:
            pg = lk.read()
    except Exception as g:
        print('[-] Read failed', g)

    sender = "sender address"
    recivs = "recipient address"
    message = MIMEText('freebuf news briefing\n{}\n'.format(pg), 'plain', 'utf-8')
    message['From'] = Header('sender name')
    message['Subject'] = Header("freebuf security briefing", 'utf-8')

    try:
        smtp = smtplib.SMTP()
        smtp.connect("your smtp server", 25)
        smtp.login("your email address", "your email password")
        smtp.sendmail(sender, recivs, message.as_string())
        print('[+] Sent successfully')
    except Exception as l:
        print('[-] Sending failed', l)

    # Empty the file so the next round only mails newly collected items
    open('freebuf.txt', 'w', encoding='utf-8').close()


while True:
    freebuf()
    Email()
    time.sleep(43200)   # wait 12 hours before the next round
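One note on the sending step: the plain port-25 connection above works with a generic SMTP relay, but many providers, QQ Mail included, expect an SSL connection on port 465 and an SMTP authorization code rather than the account password. A sketch of that variant (host, port, and credentials are placeholders for your own account):

import smtplib
from email.mime.text import MIMEText
from email.header import Header

sender = "you@example.com"          # placeholder address
recipient = "you@example.com"       # placeholder address

message = MIMEText('freebuf news briefing\n', 'plain', 'utf-8')
message['From'] = sender
message['To'] = recipient
message['Subject'] = Header("freebuf security briefing", 'utf-8')

# SMTP over SSL; QQ Mail's server is smtp.qq.com on port 465
with smtplib.SMTP_SSL("smtp.qq.com", 465) as smtp:
    smtp.login(sender, "your SMTP authorization code")
    smtp.sendmail(sender, recipient, message.as_string())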
Screenshot of the script running:
What my QQ mailbox received:
You can adapt what gets scraped to your own taste; with the columns gathered in a single CATEGORIES list, following another one only takes one change, as in the example below.
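For example, to follow an extra column, append its URL path to the CATEGORIES list defined in the script (the path here is just a hypothetical placeholder; replace it with a column that actually exists on the site):

# Hypothetical example: also collect a wireless column
CATEGORIES.append('articles/wireless')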