Concurrent Web Crawler in Go

Problem: Exercise: Web Crawler

This solution follows the reference implementation at https://github.com/golang/tour/blob/master/solutions/webcrawler.go, except that the reference code uses a chan bool to signal when each child goroutine has finished, while the version here uses a sync.WaitGroup to let the parent goroutine wait for its children.
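For comparison, the channel-based completion signalling works roughly like this. The sketch below is self-contained and shows only the idiom, not the tour code itself (doWork and ids are placeholder names): each goroutine sends one value on a done channel, and the parent receives exactly one value per goroutine before proceeding.

package main

import "fmt"

// doWork stands in for the recursive Crawl call.
func doWork(id int) {
    fmt.Println("worker", id, "done")
}

func main() {
    ids := []int{1, 2, 3}

    // One send per goroutine; the buffer lets workers
    // finish even if the receiver is momentarily slow.
    done := make(chan bool, len(ids))
    for _, id := range ids {
        go func(id int) {
            doWork(id)
            done <- true
        }(id)
    }

    // Receive exactly one value per spawned goroutine.
    for range ids {
        <-done
    }
    fmt.Println("all workers finished")
}

A WaitGroup expresses the same fan-out/fan-in more directly: Add before spawning, Done in each goroutine, Wait in the parent.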

For the complete code, see https://github.com/sxpujs/go-example/blob/master/crawl/web-crawler.go

The code added relative to the original program is as follows:

// fetched records the result for every URL that has been
// (or is being) crawled; the embedded mutex guards the map.
var fetched = struct {
    m map[string]error
    sync.Mutex
}{m: map[string]error{}}

// Crawl uses fetcher to recursively crawl
// pages starting with url, to a maximum of depth.
func Crawl(url string, depth int, fetcher Fetcher) {
    if depth <= 0 {
        return
    }

    // The check and the mark must happen under the lock, so that
    // two goroutines cannot both decide to fetch the same URL.
    fetched.Lock()
    if _, ok := fetched.m[url]; ok {
        fetched.Unlock()
        return
    }
    fetched.m[url] = nil // placeholder: marks url as in flight
    fetched.Unlock()

    body, urls, err := fetcher.Fetch(url)

    // Record the real result.
    fetched.Lock()
    fetched.m[url] = err
    fetched.Unlock()

    if err != nil {
        return
    }
    fmt.Printf("Found: %s %q\n", url, body)

    // Crawl the discovered URLs concurrently and wait for
    // all child goroutines to finish before returning.
    var wg sync.WaitGroup
    for _, u := range urls {
        wg.Add(1)
        go func(url string) {
            defer wg.Done()
            Crawl(url, depth-1, fetcher)
        }(u)
    }
    wg.Wait()
}
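To run the snippet end to end, it also needs the Fetcher interface and a stub fetcher. The harness below is a trimmed sketch modeled on the tour exercise's scaffold (fakeFetcher, fakeResult, and the canned golang.org entries come from the exercise, abridged here); pasted into the same file as the code above, it forms a complete program.

package main

import (
    "fmt"  // used here and by Crawl's Printf
    "sync" // used by Crawl's WaitGroup and the fetched mutex
)

// Fetcher is the interface from the tour exercise.
type Fetcher interface {
    // Fetch returns the body of url and a slice of URLs found on that page.
    Fetch(url string) (body string, urls []string, err error)
}

// fakeFetcher is a Fetcher that returns canned results.
type fakeFetcher map[string]*fakeResult

type fakeResult struct {
    body string
    urls []string
}

func (f fakeFetcher) Fetch(url string) (string, []string, error) {
    if res, ok := f[url]; ok {
        return res.body, res.urls, nil
    }
    return "", nil, fmt.Errorf("not found: %s", url)
}

// fetcher is a populated fakeFetcher (abridged from the exercise).
var fetcher = fakeFetcher{
    "https://golang.org/": &fakeResult{
        "The Go Programming Language",
        []string{"https://golang.org/pkg/", "https://golang.org/cmd/"},
    },
    "https://golang.org/pkg/": &fakeResult{
        "Packages",
        []string{"https://golang.org/", "https://golang.org/pkg/fmt/"},
    },
}

func main() {
    Crawl("https://golang.org/", 4, fetcher)
}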