Scraping Web Page Data with Puppeteer

Motivation

I'd like to be able to grab the questions I'm interested in from the major technical forums.

Technologies

  • puppeteer
  • dotenv

Detailed Design

  • Core functionality: use puppeteer to simulate a user's behavior in the browser and manipulate the DOM to extract the data
  • Quick start: use dotenv to move the per-forum differences (such as element selectors) into environment variables, then configure start commands in package.json so each forum can be launched quickly (a small sketch follows this list)
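
A minimal sketch of that dotenv approach, assuming the env/csdn.env file listed in the "Full code" section below:

const { resolve } = require("path");

// Load the forum-specific env file; each forum gets its own file under env/
require("dotenv").config({ path: resolve(__dirname, "env", "csdn.env") });

// The per-forum differences are now plain environment variables
console.log(process.env.LIST_URL);        // https://bbs.csdn.net/forums/{tag}?page={pageIndex}
console.log(process.env.SELECTOR_TITLES); // .forums_topic .forums_title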

1. Directory structure

|-- env
     |-- csdn.env
     |-- segmentfault.env
|-- index.js

2. Code design

The code logic is quite simple; the main flow is as follows:

  1. Open the browser
  2. Create a new page
  3. Navigate to the target website
  4. Extract the data
  5. Print the data to the console (or write it to a database)
  6. Close the browser

Because puppeteer makes it easy to simulate a user's behavior in the browser, the core of this demo is how to extract the data, in other words how to implement the get_news function.

const puppeteer = require("puppeteer");

// Target list page; in the full code below this comes from the env variables
// (here it is an example built from the CSDN config at the end of the post)
const url = "https://bbs.csdn.net/forums/CSharp?page=1";

(async () => {
  // 1. Open browser
  const browser = await puppeteer.launch({});
  // 2. Create a new page
  const page = await browser.newPage();
  // 3. Go to the target website
  await page.goto(url, { waitUntil: "networkidle2" });
  // 4. Get data
  let data = await page.evaluate(get_news);
  // 5. Print out the data in the console
  console.log(data);
  // 6. Close browser
  await browser.close();
})();

// Getting news
function get_news() {
  // To do something to get news
}

Taking CSDN as an example, in the page we can easily locate elements with native DOM operations such as document.querySelector(selector), or with jQuery DOM operations such as $(selector), and read the page information from them. Inside puppeteer's page.evaluate, native DOM operations always work, and jQuery operations work as well as long as the page itself loads jQuery, so extracting the page data becomes straightforward. The code is shown below.

function get_news() {
  let result = [];
  // jQuery collections of the title links and the post dates
  let titles = $(".forums_title");
  let dates = $(".forums_author em");
  // jQuery's map passes (index, DOM element) to the callback
  titles.map((i, title) => {
    result.push({
      title: title.text,   // anchor text
      link: title.href,    // anchor URL
      date: dates[i].textContent
    });
  });
  return result;
}
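
If the target page does not load jQuery, the same extraction can be written with native DOM APIs only. This is a minimal sketch of an equivalent get_news, assuming (as the jQuery version above does) that .forums_title matches anchor elements:

function get_news() {
  // NodeLists of the title links and the post dates
  const titles = document.querySelectorAll(".forums_title");
  const dates = document.querySelectorAll(".forums_author em");
  return Array.from(titles).map((title, i) => ({
    title: title.textContent.trim(),
    link: title.href, // assumes .forums_title is an <a> element
    date: dates[i] ? dates[i].textContent : ""
  }));
}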

Running the code now prints the scraped page data to the console. You could just as well write the data to a database.
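
As a sketch of persisting the data, the result of page.evaluate could simply be dumped to a local JSON file with Node's built-in fs module (a database client would slot into the same place); the file name news.json here is just an example:

const fs = require("fs").promises;

// Persist the scraped items to a local JSON file
async function save_news(data) {
  await fs.writeFile("news.json", JSON.stringify(data, null, 2), "utf8");
}

// Inside the main flow, right after step 4:
// let data = await page.evaluate(get_news);
// await save_news(data);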

3. More details

There are quite a few more implementation details in the code. Since they are all covered by comments, I won't walk through them one by one; if you're interested, read the full code at the end of the post. The main technical details are:

  • How to configure environment variables with dotenv
  • How to debug with console inside puppeteer's page.evaluate (see the sketch after this list)
  • Checking whether a question falls within the valid time window and matches the keywords
  • How to configure quick-start commands in package.json
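
As an illustration of the console-debugging point: console.log inside page.evaluate runs in the headless browser, so its output only shows up in the Node terminal if the page's console events are relayed, which is what the full code below does with page.on("console"). A stripped-down, self-contained sketch (the URL is just an example target):

const puppeteer = require("puppeteer");

(async () => {
  const browser = await puppeteer.launch({});
  const page = await browser.newPage();

  // Relay the headless page's console messages to the Node process
  page.on("console", msg => {
    for (let i = 0; i < msg.args().length; ++i) {
      console.log(`${i}: ${msg.args()[i]}`);
    }
  });

  await page.goto("https://segmentfault.com", { waitUntil: "networkidle2" });
  await page.evaluate(() => console.log(document.title)); // now visible in the Node console

  await browser.close();
})();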


Postscript

I'm still a beginner at front-end development, so the code quality may not be great. If you spot any problems, please point them out in the comments and help this beginner grow. Many thanks!

Full code

1. index.js

const puppeteer = require("puppeteer");
const {
  resolve
} = require("path");
 
(async (path_name, start_time) => {
  // 1. Analytical path of environmental variables
  let dotenvPath = resolve(__dirname, "env", path_name);
  require("dotenv").config({
    path: dotenvPath
  });
 
  // 2. Open browser
  const browser = await puppeteer.launch({});
 
  // 3. Create a new page
  const page = await browser.newPage();
 
  // Catch headless navigator's console event
  page.on("console", msg => {
    for (let i = 0; i < msg.args().length; ++i) {
      console.log(`${i}: ${msg.args()[i]}`);
    }
  });
 
  // 4. Getting env variables
  let tags = JSON.parse(process.env.TAGS);
  let titles = process.env.SELECTOR_TITLES;
  let dates = process.env.SELECTOR_DATES;
  let keywords = JSON.parse(process.env.KEYWORDS);
  let time_interval = process.env.TIME_INTERVAL;
  let para = { path_name, start_time, time_interval, titles, dates, keywords};
 
  // Get page url based on label and page index
  const get_news_url = (tag, pageIndex) => process.env.LIST_URL.replace("{tag}", tag).replace("{pageIndex}", pageIndex);
 
  // 5. Traverse through all tags to get data
  // All tags share the same page object, so they are processed one after
  // another; running them in parallel would make the goto calls race
  // against each other on a single tab.
  for (const tag of tags) {
    let i = 0;
    while (true) {
      // 1) Go to the specified page
      await page.goto(get_news_url(tag, ++i), { waitUntil: "networkidle2" });
      // 2) Get data by function get_news
      let _titles = await page.evaluate(get_news, para);
      // 3) Stop the loop if it can't find the required data
      if (_titles.length === 0) break;
      // 4) Output captured data in console
      console.log(i, get_news_url(tag, i));
      console.log(_titles);
    }
  }
 
  // 6. Close browser
  await browser.close();
})(process.env.PATH_NAME, process.env.START_TIME);
 
// Getting news
async function get_news(para) {
  // Get release time of issue
  const get_release_time = dom => {
    if (path_name === "csdn.env") return dom.textContent;
    if (path_name === "segmentfault.env") return new Date(dom.dataset.created * 1000);
  }
 
  // Check whether the issue release time is within the valid time interval
  const validate_time = (time, start_time) => {
    let time_diff = (new Date(time)) - (new Date(start_time));
    return (time_diff > 0) && (time_diff < time_interval);
  }
 
  // Check to see if the keyword is included
  const validate_keyword = (keywords, title) => !!keywords.find(keyword => (new RegExp(keyword)).test(title));
 
  // 1. Waiting for callback data
  let { path_name, start_time, time_interval, titles, dates, keywords } = await Promise.resolve(para);
 
  // 2. Traverse the page data to find the required data
  let result = [];
  $(titles).map((i, title) => {
    // 1) Verify that the data is valid in time
    let check_time = validate_time(get_release_time($(dates)[i]), start_time);
    if (!check_time) return;
    // 2) Verify that the data contains the specified keywords
    let check_keyword = validate_keyword(keywords, title.text);
    if (!check_keyword) return;
    result.push({
      title: title.text,
      link: title.href,
      date: get_release_time($(dates)[i]).toString()
    });
  });
  return result;
}

2. csdn.env

LIST_URL=https://bbs.csdn.net/forums/{tag}?page={pageIndex}
TAGS=["CSharp","DotNET"]
KEYWORDS=[".net","C#","c#"]
SELECTOR_TITLES=.forums_topic .forums_title
SELECTOR_DATES=.forums_author em

3. segmentfault.env

LIST_URL=https://segmentfault.com/questions/unanswered?page={pageIndex}
TAGS=[""]
KEYWORDS=["js","mysql","vue","html","javascript"]
SELECTOR_TITLES=.title a
SELECTOR_DATES=.askDate

4. package.json

{
  "name": "fetch-question",
  "version": "1.0.0",
  "description": "fetch questions from internet",
  "main": "index.js",
  "dependencies": {
  "cross-env": "^5.2.0",
  "dotenv": "^7.0.0",
  "puppeteer": "^1.13.0"
},
  "devDependencies": {},
  "scripts": {
    "csdn:list": "cross-env PATH_NAME=csdn.env START_TIME=2019/3/18 TIME_INTERVAL=172800000 node index.js",
    "segmentfault:list": "cross-env PATH_NAME=segmentfault.env START_TIME=2019/3/18 TIME_INTERVAL=172800000 node index.js",
},
  "author": "linli",
  "license": "ISC"
}
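
With these scripts in place, each forum can be scraped with a single command:

npm run csdn:list
npm run segmentfault:list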