HtmlExtractor is a template-based component, implemented in Java, for precise extraction of structured information from web pages. It does not include a crawler itself, but it can be called by a crawler or any other program to extract structured information from web pages more precisely.
HtmlExtractor consists of two sub-projects: html-extractor and html-extractor-web. html-extractor implements the data extraction logic and acts as the slave node; html-extractor-web provides a web interface for maintaining the extraction rules and acts as the master node. html-extractor is a jar package and can be referenced via Maven:
<dependency>
    <groupId>org.apdplat</groupId>
    <artifactId>html-extractor</artifactId>
    <version>1.0</version>
</dependency>
html-extractor-web is a war package and needs to be deployed to a Servlet/JSP container.
When html-extractor is used standalone, the extraction rules are built in code, for example:

//1. Build the extraction rules
List<UrlPattern> urlPatterns = new ArrayList<>();
//1.1 Build a URL pattern
UrlPattern urlPattern = new UrlPattern();
urlPattern.setUrlPattern("http://money.163.com/\\d{2}/\\d{4}/\\d{2}/[0-9A-Z]{16}.html");
//1.2 Build an HTML template
HtmlTemplate htmlTemplate = new HtmlTemplate();
htmlTemplate.setTemplateName("NetEase finance channel");
htmlTemplate.setTableName("finance");
//1.3 Associate the HTML template with the URL pattern
urlPattern.addHtmlTemplate(htmlTemplate);
//1.4 Build a CSS path
CssPath cssPath = new CssPath();
cssPath.setCssPath("h1");
cssPath.setFieldName("title");
cssPath.setFieldDescription("Title");
//1.5 Associate the CSS path with the template
htmlTemplate.addCssPath(cssPath);
//1.6 Build another CSS path
cssPath = new CssPath();
cssPath.setCssPath("div#endText");
cssPath.setFieldName("content");
cssPath.setFieldDescription("Body text");
//1.7 Associate the CSS path with the template
htmlTemplate.addCssPath(cssPath);
//More URL patterns can be built in the same way
urlPatterns.add(urlPattern);
//2. Obtain the extraction rule object
ExtractRegular extractRegular = ExtractRegular.getInstance(urlPatterns);
//Note: the extraction rules can be changed dynamically via the following 3 methods
//extractRegular.addUrlPatterns(urlPatterns);
//extractRegular.addUrlPattern(urlPattern);
//extractRegular.removeUrlPattern(urlPattern.getUrlPattern());
//3. Obtain the HTML extractor
HtmlExtractor htmlExtractor = HtmlExtractor.getInstance(extractRegular);
//4. Extract a web page
String url = "http://money.163.com/08/1219/16/4THR2TMP002533QK.html";
List<ExtractResult> extractResults = htmlExtractor.extract(url, "gb2312");
//5. Print the results
int i = 1;
for (ExtractResult extractResult : extractResults) {
    System.out.println((i++) + ". Extraction results for page " + extractResult.getUrl());
    for (ExtractResultItem extractResultItem : extractResult.getExtractResultItems()) {
        System.out.print("\t" + extractResultItem.getField() + " = " + extractResultItem.getValue());
    }
    System.out.println("\tdescription = " + extractResult.getDescription());
    System.out.println("\tkeywords = " + extractResult.getKeywords());
}
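Because HtmlExtractor contains no crawler of its own, a crawler or any other program would typically reuse a single configured instance across many pages. The following is a minimal sketch under that assumption: the candidateUrls list is hypothetical and would normally be supplied by the caller, and only pages matching a configured URL pattern yield results.

// Minimal sketch, assuming the htmlExtractor instance and rules built above
// (requires java.util.Arrays and java.util.List).
// candidateUrls is a hypothetical stand-in for URLs supplied by a crawler.
List<String> candidateUrls = Arrays.asList(
        "http://money.163.com/08/1219/16/4THR2TMP002533QK.html");
for (String pageUrl : candidateUrls) {
    // "gb2312" is the page encoding used in the example above.
    List<ExtractResult> pageResults = htmlExtractor.extract(pageUrl, "gb2312");
    for (ExtractResult result : pageResults) {
        for (ExtractResultItem item : result.getExtractResultItems()) {
            System.out.println(result.getUrl() + " -> " + item.getField() + " = " + item.getValue());
        }
    }
}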
To use HtmlExtractor in distributed (master/slave) mode:
1. Run the master node, which maintains the extraction rules: build the html-extractor-web sub-project into a war package and deploy it to Tomcat.
2. Obtain an HtmlExtractor instance (the slave node); sample code is as follows:
String allExtractRegularUrl = "http://localhost:8080/HtmlExtractorServer/api/all_extract_regular.jsp";
String redisHost = "localhost";
int redisPort = 6379;
HtmlExtractor htmlExtractor = HtmlExtractor.getInstance(allExtractRegularUrl, redisHost, redisPort);
3. Extract information; sample code is as follows:
String url = "http://money.163.com/08/1219/16/4THR2TMP002533QK.html";
List<ExtractResult> extractResults = htmlExtractor.extract(url, "gb2312");
int i = 1;
for (ExtractResult extractResult : extractResults) {
    System.out.println((i++) + ". Extraction results for page " + extractResult.getUrl());
    for (ExtractResultItem extractResultItem : extractResult.getExtractResultItems()) {
        System.out.print("\t" + extractResultItem.getField() + " = " + extractResultItem.getValue());
    }
    System.out.println("\tdescription = " + extractResult.getDescription());
    System.out.println("\tkeywords = " + extractResult.getKeywords());
}
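As a small follow-up, not part of the library's API but a sketch built only on the getters shown above, each ExtractResult can be flattened into a field-to-value map before being handed to downstream storage such as the table named by the matching template:

// Sketch: collect each result's items into a simple field -> value map
// (requires java.util.Map and java.util.LinkedHashMap).
// Only getExtractResultItems(), getField(), getValue(), getDescription()
// and getKeywords() from the examples above are assumed; getField() is
// assumed to return a String.
for (ExtractResult extractResult : extractResults) {
    Map<String, Object> record = new LinkedHashMap<>();
    for (ExtractResultItem item : extractResult.getExtractResultItems()) {
        record.put(item.getField(), item.getValue());
    }
    record.put("description", extractResult.getDescription());
    record.put("keywords", extractResult.getKeywords());
    // 'record' could now be persisted, e.g. into the table named by the
    // matching template's tableName.
    System.out.println(record);
}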