Previous article: Java Web Crawler in Practice (2)
This article introduces some practical uses of the pipeline pattern in the NetDiscovery framework.
The pipeline is a common algorithmic pattern. For a continuously looping, time-consuming task, waiting for one whole iteration to finish before moving on to the next task wastes time.
So the task is split into stages: as soon as one stage completes, the same stage of the next task can start immediately, instead of waiting for the entire time-consuming task to finish first.
In my view, this fits very well for processing the data a crawler fetches.
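The staged idea above can be sketched in a few lines. This is only an illustration of the pattern, not NetDiscovery's API; the stage names and `runStages` helper are made up for the example:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class PipelineSketch {

    // Run an item through each stage in order, the way a spider
    // hands its result to each registered pipeline in turn.
    static String runStages(String item, List<Function<String, String>> stages) {
        for (Function<String, String> stage : stages) {
            item = stage.apply(item);
        }
        return item;
    }

    public static void main(String[] args) {
        List<Function<String, String>> stages = Arrays.asList(
                s -> s + " -> downloaded",  // stage 1: e.g. download the image
                s -> s + " -> saved"        // stage 2: e.g. persist its info
        );
        System.out.println(runStages("logo.png", stages));
        // prints: logo.png -> downloaded -> saved
    }
}
```

The point is that each stage only needs the output of the previous one, so the next item can enter stage 1 while the current item is still in stage 2.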
The framework's schematic shows the role the pipeline object plays:
Task steps:
Pipeline class: DownloadImage
package com.sinkinka.pipeline;

import com.cv4j.netdiscovery.core.domain.ResultItems;
import com.cv4j.netdiscovery.core.pipeline.Pipeline;

import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import java.net.URLConnection;
import java.util.Map;

public class DownloadImage implements Pipeline {

    @Override
    public void process(ResultItems resultItems) {
        Map<String, Object> map = resultItems.getAll();
        for (String key : map.keySet()) {
            String filePath = "./temp/" + key + ".png";
            saveRemoteImage(map.get(key).toString(), filePath);
        }
    }

    private boolean saveRemoteImage(String imgUrl, String filePath) {
        // try-with-resources closes both streams even on failure, avoiding the
        // NullPointerException a manual finally block can hit when the
        // connection fails before the streams are opened.
        try {
            URL url = new URL(imgUrl);
            URLConnection connection = url.openConnection();
            connection.setConnectTimeout(5000);
            try (InputStream in = connection.getInputStream();
                 OutputStream out = new FileOutputStream(filePath)) {
                byte[] buffer = new byte[1024];
                int len;
                while ((len = in.read(buffer)) != -1) {
                    out.write(buffer, 0, len);
                }
            }
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}
Pipeline class: SaveImage
package com.sinkinka.pipeline;

import com.cv4j.netdiscovery.core.domain.ResultItems;
import com.cv4j.netdiscovery.core.pipeline.Pipeline;
import com.safframework.tony.common.utils.Preconditions;

import java.sql.*;
import java.util.Map;

public class SaveImage implements Pipeline {

    @Override
    public void process(ResultItems resultItems) {
        Map<String, Object> map = resultItems.getAll();
        for (String key : map.keySet()) {
            saveCompanyInfo(key, map.get(key).toString());
        }
    }

    private boolean saveCompanyInfo(String shortName, String logoUrl) {
        int insertCount = 0;
        Connection conn = getMySqlConnection();
        if (Preconditions.isNotBlank(conn)) {
            // Note: concatenating values into SQL like this is vulnerable to
            // SQL injection; it is acceptable only for a local demo.
            String insertSQL = "INSERT INTO company(shortname, logourl) VALUES('" + shortName + "', '" + logoUrl + "')";
            try (Statement statement = conn.createStatement()) {
                insertCount = statement.executeUpdate(insertSQL);
            } catch (SQLException e) {
                return false;
            } finally {
                try {
                    conn.close();
                } catch (SQLException e) {
                    // ignore close failure
                }
            }
        }
        return insertCount > 0;
    }

    // Demo code; not recommended for production use.
    private Connection getMySqlConnection() {
        // Uses MySQL Connector/J 5.
        // Database: test  Account/password: root/123456
        final String JDBC_DRIVER = "com.mysql.jdbc.Driver";
        final String DB_URL = "jdbc:mysql://localhost:3306/test";
        final String USER = "root";
        final String PASS = "123456";

        try {
            Class.forName(JDBC_DRIVER);
            return DriverManager.getConnection(DB_URL, USER, PASS);
        } catch (Exception e) {
            return null;
        }
    }
}
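The string-concatenated INSERT in SaveImage is open to SQL injection. A minimal sketch of a parameterized alternative, assuming the same `company` table and columns as the demo above (the `SaveImageSafe` class name is made up for illustration):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class SaveImageSafe {

    // Parameterized insert: values are bound as parameters, so quotes in
    // shortName or logoUrl cannot break out of the SQL statement.
    static boolean saveCompanyInfo(Connection conn, String shortName, String logoUrl) {
        if (conn == null) {
            return false;
        }
        String sql = "INSERT INTO company(shortname, logourl) VALUES(?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, shortName);
            ps.setString(2, logoUrl);
            return ps.executeUpdate() > 0;
        } catch (SQLException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Requires a live database connection; with none available,
        // the method simply reports failure.
        System.out.println(saveCompanyInfo(null, "demo", "http://example.com/logo.png"));
        // prints: false
    }
}
```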
Main class
package com.sinkinka;

import com.cv4j.netdiscovery.core.Spider;
import com.sinkinka.parser.LagouParser;
import com.sinkinka.pipeline.DownloadImage;
import com.sinkinka.pipeline.SaveImage;

public class PipelineSpider {

    public static void main(String[] args) {
        String url = "https://xiaoyuan.lagou.com/";

        Spider.create()
              .name("lagou")
              .url(url)
              .parser(new LagouParser())
              .pipeline(new DownloadImage())  // 1. First, download the images to a local directory
              .pipeline(new SaveImage())      // 2. Then, store the image info in the database
              .run();
    }
}
Parser class
package com.sinkinka.parser;

import com.cv4j.netdiscovery.core.domain.Page;
import com.cv4j.netdiscovery.core.domain.ResultItems;
import com.cv4j.netdiscovery.core.parser.Parser;
import com.cv4j.netdiscovery.core.parser.selector.Selectable;

import java.util.List;

public class LagouParser implements Parser {

    @Override
    public void process(Page page) {
        ResultItems resultItems = page.getResultItems();

        List<Selectable> liList = page.getHtml().xpath("//li[@class='nav-logo']").nodes();
        for (Selectable li : liList) {
            String logoUrl = li.xpath("//img/@src").get();
            String companyShortName = li.xpath("//div[@class='company-short-name']/text()").get();
            resultItems.put(companyShortName, logoUrl);
        }
    }
}
Image files saved locally by DownloadImage:
Data stored in the database by SaveImage:
The code above is a simple demonstration of the pipeline pattern. Keep one thing in mind: pipelines execute in order. Its advantages really show in production environments with large volumes of data processed at high frequency.
Next article: Java Web Crawler in Practice (4)