Optimizing concurrent curl usage

The classic curl concurrency workflow: push all of the URLs into the multi queue up front, run the concurrent transfers, and only after every request has been received begin parsing the data and doing the rest of the post-processing.
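
For contrast, here is a minimal sketch of that classic pattern (the function name classicMultiCurl and its parameters are illustrative, not taken from the original post): every handle is added first, the transfers are driven until all of them finish, and only then is curl_multi_getcontent called for each handle.

function classicMultiCurl(array $urls) {
    $queue = curl_multi_init();
    $handles = array();

    // Push every URL into the multi queue up front.
    foreach ($urls as $u) {
        $ch = curl_init($u);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_TIMEOUT, 3);
        curl_multi_add_handle($queue, $ch);
        $handles[] = $ch;
    }

    // Drive all transfers to completion; nothing is processed yet.
    do {
        while (curl_multi_exec($queue, $active) == CURLM_CALL_MULTI_PERFORM);
        if ($active > 0) {
            curl_multi_select($queue, 0.5);
        }
    } while ($active);

    // Only now are the responses collected and post-processed,
    // so everything waits on the slowest transfer.
    $responses = array();
    foreach ($handles as $ch) {
        $responses[] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($queue, $ch);
        curl_close($ch);
    }
    curl_multi_close($queue);
    return $responses;
}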

In practice, because of network conditions, some URLs return their content earlier than others, but the classic approach cannot start processing until the slowest URL has returned, and waiting means idle, wasted CPU. If the URL queue is short, that idleness stays within an acceptable range; if the queue is long, the waiting and waste become unacceptable.

The optimization is to process each URL as soon as its request completes, handling finished responses while still waiting for the others to return, instead of starting only after the slowest request comes back. This keeps the CPU from sitting idle. The concrete implementation follows:

/**
 * POST each payload in $log to $url concurrently and collect every
 * response as soon as its transfer completes.
 */
function multiCurl($url, $log) {
    $queue = curl_multi_init();

    // Add one handle per payload; every request POSTs to the same $url.
    foreach ($log as $info) {
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_TIMEOUT, 3);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $info);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_HEADER, 0);
        curl_setopt($ch, CURLOPT_NOSIGNAL, true);
        curl_multi_add_handle($queue, $ch);
    }

    $responses = array();
    do {
        // Drive the transfers until curl no longer asks to be called again immediately.
        while (($code = curl_multi_exec($queue, $active)) == CURLM_CALL_MULTI_PERFORM);

        if ($code != CURLM_OK) { break; }

        // A request was just completed -- find out which one.
        while ($done = curl_multi_info_read($queue)) {

            // Get the info and content returned on the request.
            //$info = curl_getinfo($done['handle']);
            //$error = curl_error($done['handle']);
            $results = curl_multi_getcontent($done['handle']);
            //$responses[] = compact('info', 'error', 'results');
            $responses[] = $results;

            // Remove and free the curl handle that just completed.
            curl_multi_remove_handle($queue, $done['handle']);
            curl_close($done['handle']);
        }

        // Block until there is data to read or send; error handling is done by curl_multi_exec.
        if ($active > 0) {
            curl_multi_select($queue, 0.5);
        }

    } while ($active);

    curl_multi_close($queue);
    return json_encode($responses);
}
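
A quick usage sketch, based on how the function reads its arguments: $url is a single endpoint that every request POSTs to, and $log is an array of POST bodies, one per request. The endpoint and payloads below are hypothetical.

// Hypothetical endpoint and payloads, for illustration only.
$url = 'http://example.com/api/log';
$log = array(
    'event=signup&uid=1',
    'event=login&uid=2',
    'event=logout&uid=3',
);

$responses = json_decode(multiCurl($url, $log), true);
foreach ($responses as $body) {
    // Each $body is the raw response text of one request,
    // collected in completion order rather than submission order.
    var_dump($body);
}

Note that because completed handles are read off with curl_multi_info_read inside the transfer loop, responses come back in completion order; any post-processing that depends on the original submission order would need to track which handle produced which result.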


Source: http://www.j135.com/?p=684
