First you need to get the font file: search the page or its CSS for @font-face or font-family.
Refreshing the page a few times shows that the rendered digits change, so the font file is generated dynamically.
The goal is to build the mapping between real characters and the glyphs of this dynamic font.
A font records characters in table-like structures. For example, the cmap table maps unicode code points to glyph names; the table this anti-scraping trick relies on is the glyf table, which stores the actual outline (stroke) data of each glyph.
The glyf table stores only the glyph data and does not reference other tables. A separate loca table records, in order, the offset of each glyph inside the glyf table, and the font looks up a specific glyph through loca.
So the glyph outline data can be used to link the custom font's unicode code points to the real characters.
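As a quick sanity check, these tables can be inspected with fontTools. A minimal sketch, assuming one of the Maoyan woff files has already been saved locally as base.woff (the file name is just an example):

from fontTools.ttLib import TTFont

font = TTFont("base.woff")            # example path: any downloaded Maoyan woff
print(font.getGlyphOrder())           # glyph names, e.g. ['glyph00000', 'x', 'uniE893', ...]
print(font.getBestCmap())             # cmap table: unicode code point -> glyph name
# glyf table: outline data of one glyph (points, contour end indices, flags)
glyph = font['glyf'][font.getGlyphOrder()[2]]
coords, ends, flags = glyph.getCoordinates(font['glyf'])
print(list(coords)[:5], ends)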
Notes on the font file format are collected at: http://www.javashuo.com/article/p-nfmrkysh-by.html
The idea for finding the mapping:
1. Download one font from Maoyan as the baseline and build the mapping between that baseline font's unicode code points and the characters.
2. After refreshing the page, download the new font (site font 2) and, by comparing the glyph outlines of site font 1 and site font 2, find the link between the old and new unicode code points.
3. Through the matching glyphs, link the characters to the changing font's unicode code points, and finally replace the new unicode entities in the page with the characters. The script below walks through these steps.
headers={"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36"}
r=requests.get("https://maoyan.com/board/1",headers=headers) font1_url="http:"+re.findall("url\(\'(\/\/.*?woff)\'\)",r.text,re.M)[0]
#建立font目錄保存基準字體 if not os.path.exists("font"): font1=requests.get(font1_url,headers=headers) os.mkdir("font") with open("./font/base.woff","wb")as f: f.write(font1.content)
base_font = TTFont('./font/base.woff')
# Build the baseline mapping by hand: for each glyph, type in the digit it renders as
base_dict = []
for i in range(len(base_font.getGlyphOrder()[2:])):
    print(f"Digit for glyph {i+1}:")
    w = input()
    base_dict.append({"code": base_font.getGlyphOrder()[2:][i], "num": w})
new_font_url="http:"+re.findall("url\(\'(\/\/.*?woff)\'\)",r.text,re.M)[0]
font=requests.get(new_font_url,headers=headers) with open("new_font.woff","wb")as f: f.write(font.content) new_font = TTFont('new_font.woff') new_font_code_list=new_font.getGlyphOrder()[2:]
# Two glyphs with identical outline data stand for the same digit
replace_dic = []
for i in range(10):
    news = new_font['glyf'][new_font_code_list[i]]
    for j in range(10):
        bases = base_font['glyf'][base_dict[j]["code"]]
        if news == bases:
            unicode = new_font_code_list[i].lower().replace("uni", "&#x") + ";"
            num = base_dict[j]["num"]
            replace_dic.append({"code": unicode, "num": num})
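If comparing the Glyph objects directly with == ever proves unreliable, the outline data can be compared explicitly instead. A small sketch (the helper name same_glyph is only an illustration), assuming simple, non-composite glyphs:

def same_glyph(g1, g2, glyf1, glyf2):
    # Compare points, contour end indices and flags of two glyphs
    c1, e1, f1 = g1.getCoordinates(glyf1)
    c2, e2, f2 = g2.getCoordinates(glyf2)
    return list(c1) == list(c2) and e1 == e2 and list(f1) == list(f2)

# e.g. same_glyph(news, bases, new_font['glyf'], base_font['glyf'])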
# Replace the obfuscated unicode entities in the page source with real digits
new_data = r.text
for i in range(len(replace_dic)):
    new_data = new_data.replace(replace_dic[i]["code"], replace_dic[i]["num"])
tree = etree.HTML(new_data)
dds = tree.xpath('//dl[@class="board-wrapper"]/dd')
info = []
for dd in dds:
    title = dd.xpath('.//p[@class="name"]/a/@title')[0]
    star = dd.xpath('.//p[@class="star"]/text()')[0].replace("主演:", "")
    time = dd.xpath('.//p[@class="releasetime"]/text()')[0].replace("上映時間:", "")
    realticket = dd.xpath('.//p[@class="realtime"]//text()')[1] + dd.xpath('.//p[@class="realtime"]//text()')[2].strip()
    totalticket = dd.xpath('.//p[@class="total-boxoffice"]//text()')[1] + dd.xpath('.//p[@class="total-boxoffice"]//text()')[2].strip()
    info.append({"標題": title, "主演": star, "上映時間": time, "實時票房": realticket, "總票房": totalticket})
import csv
csv_file = open("1325.csv", 'w', newline='')
writer = csv.writer(csv_file)
keys = info[0].keys()
writer.writerow(keys)
for dic in info:
    for key in keys:
        if key not in dic:
            dic[key] = ''
    writer.writerow(dic.values())
csv_file.close()
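If the CSV is opened in Excel and the Chinese headers come out garbled, writing the file with a BOM-prefixed UTF-8 encoding usually helps; a possible variant of the open call:

csv_file = open("1325.csv", 'w', newline='', encoding='utf-8-sig')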