One Python Module per Week | multiprocessing

Column home: One Python Module per Week

multiprocessing is a standard Python module. It can be used to write multi-process programs, and it can drive threads as well: multiprocessing.dummy exposes the same API backed by threads, so its usage is essentially identical to multiprocessing.
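
As a quick illustration, here is a minimal sketch of the thread-backed variant (the square function and the pool size of 4 are invented for the example):

from multiprocessing.dummy import Pool as ThreadPool  # Pool backed by threads, not processes


def square(x):
    # Stand-in for I/O-bound work, where threads are usually sufficient.
    return x * x


if __name__ == '__main__':
    with ThreadPool(4) as pool:  # same Pool API as multiprocessing.Pool
        print(pool.map(square, range(5)))

# output
# [0, 1, 4, 9, 16]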

Basics

A process is created with a multiprocessing.Process object. Process has the same API as Thread, including the start(), run(), and join() methods. The Process class suits simple process creation; combine it with multiprocessing.Queue when processes need to share data, and use the Pool class when you want to control the number of processes.

An overview of Process:

Constructor:

  • Process([group [, target [, name [, args [, kwargs]]]]])
  • group: the thread group; not yet implemented, and the library reference says it must always be None;
  • target: the callable object to run;
  • name: the process name;
  • args/kwargs: the arguments to pass to target.

Instance methods:

  • is_alive(): return whether the process is running.
  • join([timeout]): block the calling process until the process whose join() method is called terminates, or until the optional timeout expires.
  • start(): mark the process as ready to run and wait for CPU scheduling.
  • run(): called by start(); if no target was specified when the Process was created, start() executes the default run() method.
  • terminate(): stop the worker process immediately, whether or not its work is finished.

Attributes:

  • authkey: the process's authentication key.
  • daemon: the same idea as a thread's setDaemon (a daemonic child process is terminated automatically when its parent process exits).
  • exitcode: None while the process is running; a value of -N means the process was terminated by signal N.
  • name: the process name.
  • pid: the process ID.

Here is a simple example:

import multiprocessing


def worker():
    """worker function"""
    print('Worker')


if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()

# output
# Worker
# Worker
# Worker
# Worker
# Worker 

The output is the word "Worker" printed five times. We cannot tell which line came from which process; that depends on execution order, because each process is competing for access to the output stream.

So how can the execution order be determined? By passing arguments to the processes. Unlike with threading, the arguments passed to a multiprocessing Process must be picklable. Consider this code:

import multiprocessing


def worker(num):
    """thread worker function"""
    print('Worker:', num)


if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(target=worker, args=(i,))
        jobs.append(p)
        p.start()
        
# output
# Worker: 1
# Worker: 0
# Worker: 2
# Worker: 3
# Worker: 4

Importable target functions

One difference between threading and multiprocessing is the extra protection needed when the module is used under __main__. Because of the way new processes are started, the child process needs to be able to import the script containing the target function. Wrapping the main part of the application in a check for __main__ ensures that it is not run recursively in each child as the module is imported. Another approach is to import the target function from a separate script. For example, multiprocessing_import_main.py uses a worker function defined in a second module:

# multiprocessing_import_main.py 
import multiprocessing
import multiprocessing_import_worker

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = multiprocessing.Process(
            target=multiprocessing_import_worker.worker,
        )
        jobs.append(p)
        p.start()
        
# output
# Worker
# Worker
# Worker
# Worker
# Worker

The worker function is defined in multiprocessing_import_worker.py:

# multiprocessing_import_worker.py 
def worker():
    """worker function"""
    print('Worker')
    return

Determining the current process

Passing arguments to identify or name a process is cumbersome and unnecessary. Each Process instance has a name, whose default value can be changed when the process is created. Naming processes is useful for keeping track of them, especially in applications with multiple types of processes running simultaneously.

import multiprocessing
import time


def worker():
    name = multiprocessing.current_process().name
    print(name, 'Starting')
    time.sleep(2)
    print(name, 'Exiting')


def my_service():
    name = multiprocessing.current_process().name
    print(name, 'Starting')
    time.sleep(3)
    print(name, 'Exiting')


if __name__ == '__main__':
    service = multiprocessing.Process(
        name='my_service',
        target=my_service,
    )
    worker_1 = multiprocessing.Process(
        name='worker 1',
        target=worker,
    )
    worker_2 = multiprocessing.Process(  # default name
        target=worker,
    )

    worker_1.start()
    worker_2.start()
    service.start()
    
# output
# worker 1 Starting
# worker 1 Exiting
# Process-3 Starting
# Process-3 Exiting
# my_service Starting
# my_service Exiting

Daemon processes

By default, the main program will not exit until all of its children have exited. There are times when it is useful to start a background process that runs without blocking the main program from exiting, such as a task that generates "heartbeats" for a monitoring tool.

Marking a process as a daemon is simple: just set its daemon attribute to True.

import multiprocessing
import time
import sys


def daemon():
    p = multiprocessing.current_process()
    print('Starting:', p.name, p.pid)
    sys.stdout.flush()
    time.sleep(2)
    print('Exiting :', p.name, p.pid)
    sys.stdout.flush()


def non_daemon():
    p = multiprocessing.current_process()
    print('Starting:', p.name, p.pid)
    sys.stdout.flush()
    print('Exiting :', p.name, p.pid)
    sys.stdout.flush()


if __name__ == '__main__':
    d = multiprocessing.Process(
        name='daemon',
        target=daemon,
    )
    d.daemon = True

    n = multiprocessing.Process(
        name='non-daemon',
        target=non_daemon,
    )
    n.daemon = False

    d.start()
    time.sleep(1)
    n.start()
    
# output
# Starting: daemon 41838
# Starting: non-daemon 41841
# Exiting : non-daemon 41841

The output does not include the "Exiting" message from the daemon process, because all of the non-daemon processes (including the main program) exit before the daemon wakes up from its two-second sleep.

The daemon process is terminated automatically before the main program exits, which avoids leaving orphaned processes running. This can be verified by looking for the process ID value printed when the program runs and then checking for that process with the ps command.

Waiting for processes

To wait until a process has completed its work and exited, use the join() method.

import multiprocessing
import time
import sys


def daemon():
    name = multiprocessing.current_process().name
    print('Starting:', name)
    time.sleep(2)
    print('Exiting :', name)


def non_daemon():
    name = multiprocessing.current_process().name
    print('Starting:', name)
    print('Exiting :', name)


if __name__ == '__main__':
    d = multiprocessing.Process(
        name='daemon',
        target=daemon,
    )
    d.daemon = True

    n = multiprocessing.Process(
        name='non-daemon',
        target=non_daemon,
    )
    n.daemon = False

    d.start()
    time.sleep(1)
    n.start()

    d.join()
    n.join()
    
# output
# Starting: non-daemon
# Exiting : non-daemon
# Starting: daemon
# Exiting : daemon

Because the main process waits for the daemon to exit using join(), the "Exiting" message is printed this time.

By default, join() blocks indefinitely. It also accepts a timeout argument (a float representing the number of seconds to wait for the process to become inactive). If the process does not complete within the timeout period, join() returns anyway.

import multiprocessing
import time
import sys


def daemon():
    name = multiprocessing.current_process().name
    print('Starting:', name)
    time.sleep(2)
    print('Exiting :', name)


def non_daemon():
    name = multiprocessing.current_process().name
    print('Starting:', name)
    print('Exiting :', name)


if __name__ == '__main__':
    d = multiprocessing.Process(
        name='daemon',
        target=daemon,
    )
    d.daemon = True

    n = multiprocessing.Process(
        name='non-daemon',
        target=non_daemon,
    )
    n.daemon = False

    d.start()
    n.start()

    d.join(1)
    print('d.is_alive()', d.is_alive())
    n.join()
    
# output
# Starting: non-daemon
# Exiting : non-daemon
# d.is_alive() True

Because the timeout passed is less than the amount of time the daemon sleeps, the process is still "alive" after join() returns.

Terminating processes

If you want a process to exit, it is usually best to signal it with the "poison pill" method (demonstrated in the JoinableQueue example later); but if a process appears hung or deadlocked, forcibly terminating it can be useful. Call terminate() to kill a child process.

import multiprocessing
import time


def slow_worker():
    print('Starting worker')
    time.sleep(0.1)
    print('Finished worker')


if __name__ == '__main__':
    p = multiprocessing.Process(target=slow_worker)
    print('BEFORE:', p, p.is_alive())

    p.start()
    print('DURING:', p, p.is_alive())

    p.terminate()
    print('TERMINATED:', p, p.is_alive())

    p.join()
    print('JOINED:', p, p.is_alive())
    
# output
# BEFORE: <Process(Process-1, initial)> False
# DURING: <Process(Process-1, started)> True
# TERMINATED: <Process(Process-1, started)> True
# JOINED: <Process(Process-1, stopped[SIGTERM])> False

It is important to join() the process after terminating it, to give the process-management code time to update the status of the object to reflect the termination.

Process exit status

The status code produced when a process exits can be accessed via its exitcode attribute. The allowed ranges are listed below.

  • == 0: no error was produced
  • > 0: the process had an error and exited with that code
  • < 0: the process was killed by signal -1 * exitcode

import multiprocessing
import sys
import time


def exit_error():
    sys.exit(1)


def exit_ok():
    return


def return_value():
    return 1


def raises():
    raise RuntimeError('There was an error!')


def terminated():
    time.sleep(3)


if __name__ == '__main__':
    jobs = []
    funcs = [
        exit_error,
        exit_ok,
        return_value,
        raises,
        terminated,
    ]
    for f in funcs:
        print('Starting process for', f.__name__)
        j = multiprocessing.Process(target=f, name=f.__name__)
        jobs.append(j)
        j.start()

    jobs[-1].terminate()

    for j in jobs:
        j.join()
        print('{:>15}.exitcode = {}'.format(j.name, j.exitcode))
        
# output
# Starting process for exit_error
# Starting process for exit_ok
# Starting process for return_value
# Starting process for raises
# Starting process for terminated
# Process raises:
# Traceback (most recent call last):
# File ".../lib/python3.6/multiprocessing/process.py", line 258,
# in _bootstrap
# self.run()
# File ".../lib/python3.6/multiprocessing/process.py", line 93,
# in run
# self._target(*self._args, **self._kwargs)
# File "multiprocessing_exitcode.py", line 28, in raises
# raise RuntimeError('There was an error!')
# RuntimeError: There was an error!
# exit_error.exitcode = 1
# exit_ok.exitcode = 0
# return_value.exitcode = 0
# raises.exitcode = 1
# terminated.exitcode = -15

Logging

When debugging concurrency issues, it can be useful to see the internals of the multiprocessing objects. There is a convenient module-level function for enabling such logging, called log_to_stderr(). It sets up a logger object using logging and adds a handler so that log messages are sent to the standard error channel.

import multiprocessing
import logging
import sys


def worker():
    print('Doing some work')
    sys.stdout.flush()


if __name__ == '__main__':
    multiprocessing.log_to_stderr(logging.DEBUG)
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()
    
# output
# [INFO/Process-1] child process calling self.run()
# Doing some work
# [INFO/Process-1] process shutting down
# [DEBUG/Process-1] running all "atexit" finalizers with priority >= 0
# [DEBUG/Process-1] running the remaining "atexit" finalizers
# [INFO/Process-1] process exiting with exitcode 0
# [INFO/MainProcess] process shutting down
# [DEBUG/MainProcess] running all "atexit" finalizers with priority >= 0
# [DEBUG/MainProcess] running the remaining "atexit" finalizers

By default, the logging level is set to NOTSET and no messages are produced. Pass a different level to initialize the logger to the desired level of detail.

To manipulate the logger directly (change its level setting or add handlers), use get_logger().

import multiprocessing
import logging
import sys


def worker():
    print('Doing some work')
    sys.stdout.flush()


if __name__ == '__main__':
    multiprocessing.log_to_stderr()
    logger = multiprocessing.get_logger()
    logger.setLevel(logging.INFO)
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()
    
# output
# [INFO/Process-1] child process calling self.run()
# Doing some work
# [INFO/Process-1] process shutting down
# [INFO/Process-1] process exiting with exitcode 0
# [INFO/MainProcess] process shutting down 

Subclassing Process

Although the simplest way to run a job in a separate process is to use Process with a target function, it is also possible to use a custom subclass.

import multiprocessing


class Worker(multiprocessing.Process):

    def run(self):
        print('In {}'.format(self.name))
        return


if __name__ == '__main__':
    jobs = []
    for i in range(5):
        p = Worker()
        jobs.append(p)
        p.start()
    for j in jobs:
        j.join()
        
# output
# In Worker-1
# In Worker-3
# In Worker-2
# In Worker-4
# In Worker-5

The derived class should override run() to do its work.
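
If a subclass needs per-instance data, one common pattern is to accept it in __init__ and store it on self before start() is called. A minimal sketch, with the Greeter class and its greeting argument invented for the example:

import multiprocessing


class Greeter(multiprocessing.Process):

    def __init__(self, greeting):
        super().__init__()        # initialize Process before adding state
        self.greeting = greeting  # attributes set here are visible in run()

    def run(self):
        print('{} from {}'.format(self.greeting, self.name))


if __name__ == '__main__':
    jobs = [Greeter('Hello'), Greeter('Hi')]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()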

Passing messages to processes

As with threads, a common usage pattern for multiple processes is to divide a job up among several workers that run in parallel. Effective use of multiple processes usually requires some communication between them, so that work can be divided and results aggregated. A simple way to communicate between processes is to pass messages with a Queue. Any object that can be serialized with pickle can pass through a Queue.

import multiprocessing


class MyFancyClass:

    def __init__(self, name):
        self.name = name

    def do_something(self):
        proc_name = multiprocessing.current_process().name
        print('Doing something fancy in {} for {}!'.format(proc_name, self.name))


def worker(q):
    obj = q.get()
    obj.do_something()


if __name__ == '__main__':
    queue = multiprocessing.Queue()

    p = multiprocessing.Process(target=worker, args=(queue,))
    p.start()

    queue.put(MyFancyClass('Fancy Dan'))

    # Wait for the worker to finish
    queue.close()
    queue.join_thread()
    p.join()
    
# output
# Doing something fancy in Process-1 for Fancy Dan!

This short example only passes a single message to a single worker, then the main process waits for the worker to finish.

A more complex example shows how to manage several workers that consume data from a JoinableQueue and pass results back to the parent process. The "poison pill" technique is used to stop the workers: after setting up the real tasks, the main program adds one "stop" value per worker to the job queue, and when a worker encounters the special value it breaks out of its loop. The main process uses the task queue's join() method to wait for all of the tasks to finish before processing the results.

import multiprocessing
import time


class Consumer(multiprocessing.Process):

    def __init__(self, task_queue, result_queue):
        multiprocessing.Process.__init__(self)
        self.task_queue = task_queue
        self.result_queue = result_queue

    def run(self):
        proc_name = self.name
        while True:
            next_task = self.task_queue.get()
            if next_task is None:
                # Poison pill means shutdown
                print('{}: Exiting'.format(proc_name))
                self.task_queue.task_done()
                break
            print('{}: {}'.format(proc_name, next_task))
            answer = next_task()
            self.task_queue.task_done()
            self.result_queue.put(answer)


class Task:

    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __call__(self):
        time.sleep(0.1)  # pretend to take time to do the work
        return '{self.a} * {self.b} = {product}'.format(
            self=self, product=self.a * self.b)

    def __str__(self):
        return '{self.a} * {self.b}'.format(self=self)


if __name__ == '__main__':
    # Establish communication queues
    tasks = multiprocessing.JoinableQueue()
    results = multiprocessing.Queue()

    # Start consumers
    num_consumers = multiprocessing.cpu_count() * 2
    print('Creating {} consumers'.format(num_consumers))
    consumers = [
        Consumer(tasks, results)
        for i in range(num_consumers)
    ]
    for w in consumers:
        w.start()

    # Enqueue jobs
    num_jobs = 10
    for i in range(num_jobs):
        tasks.put(Task(i, i))

    # Add a poison pill for each consumer
    for i in range(num_consumers):
        tasks.put(None)

    # Wait for all of the tasks to finish
    tasks.join()

    # Start printing results
    while num_jobs:
        result = results.get()
        print('Result:', result)
        num_jobs -= 1
        
# output
# Creating 8 consumers
# Consumer-1: 0 * 0
# Consumer-2: 1 * 1
# Consumer-3: 2 * 2
# Consumer-4: 3 * 3
# Consumer-5: 4 * 4
# Consumer-6: 5 * 5
# Consumer-7: 6 * 6
# Consumer-8: 7 * 7
# Consumer-3: 8 * 8
# Consumer-7: 9 * 9
# Consumer-4: Exiting
# Consumer-1: Exiting
# Consumer-2: Exiting
# Consumer-5: Exiting
# Consumer-6: Exiting
# Consumer-8: Exiting
# Consumer-7: Exiting
# Consumer-3: Exiting
# Result: 6 * 6 = 36
# Result: 2 * 2 = 4
# Result: 3 * 3 = 9
# Result: 0 * 0 = 0
# Result: 1 * 1 = 1
# Result: 7 * 7 = 49
# Result: 4 * 4 = 16
# Result: 5 * 5 = 25
# Result: 8 * 8 = 64
# Result: 9 * 9 = 81

Although the jobs enter the queue in order, their execution is parallelized, so there is no guarantee about the order in which they will be completed.

Signaling between processes

The Event class provides a simple way to communicate between processes. An event can be toggled between set and unset states, and users of the event object can wait for it to change from unset to set, with an optional timeout value.

import multiprocessing
import time


def wait_for_event(e):
    """Wait for the event to be set before doing anything"""
    print('wait_for_event: starting')
    e.wait()
    print('wait_for_event: e.is_set()->', e.is_set())


def wait_for_event_timeout(e, t):
    """Wait t seconds and then timeout"""
    print('wait_for_event_timeout: starting')
    e.wait(t)
    print('wait_for_event_timeout: e.is_set()->', e.is_set())


if __name__ == '__main__':
    e = multiprocessing.Event()
    w1 = multiprocessing.Process(
        name='block',
        target=wait_for_event,
        args=(e,),
    )
    w1.start()

    w2 = multiprocessing.Process(
        name='nonblock',
        target=wait_for_event_timeout,
        args=(e, 2),
    )
    w2.start()

    print('main: waiting before calling Event.set()')
    time.sleep(3)
    e.set()
    print('main: event is set')
    
# output
# main: waiting before calling Event.set()
# wait_for_event: starting
# wait_for_event_timeout: starting
# wait_for_event_timeout: e.is_set()-> False
# main: event is set
# wait_for_event: e.is_set()-> True

If wait() times out, it returns without an error. The caller can check the state of the event with is_set().

Controlling access to resources

In situations where a single resource needs to be shared between multiple processes, a Lock can be used to avoid conflicting accesses.

import multiprocessing
import sys


def worker_with(lock, stream):
    with lock:
        stream.write('Lock acquired via with\n')


def worker_no_with(lock, stream):
    lock.acquire()
    try:
        stream.write('Lock acquired directly\n')
    finally:
        lock.release()


if __name__ == '__main__':
    lock = multiprocessing.Lock()
    w = multiprocessing.Process(
        target=worker_with,
        args=(lock, sys.stdout),
    )
    nw = multiprocessing.Process(
        target=worker_no_with,
        args=(lock, sys.stdout),
    )

    w.start()
    nw.start()

    w.join()
    nw.join()

# output
# Lock acquired via with
# Lock acquired directly

In this example, if the two processes did not synchronize their access to standard output with the lock, the messages printed to the console could be jumbled together.

Synchronizing operations

Condition objects can be used to synchronize parts of a workflow so that some run in parallel while others run sequentially, even when they are in separate processes.

import multiprocessing
import time


def stage_1(cond):
    """ perform first stage of work, then notify stage_2 to continue """
    name = multiprocessing.current_process().name
    print('Starting', name)
    with cond:
        print('{} done and ready for stage 2'.format(name))
        cond.notify_all()


def stage_2(cond):
    """wait for the condition telling us stage_1 is done"""
    name = multiprocessing.current_process().name
    print('Starting', name)
    with cond:
        cond.wait()
        print('{} running'.format(name))


if __name__ == '__main__':
    condition = multiprocessing.Condition()
    s1 = multiprocessing.Process(name='s1',
                                 target=stage_1,
                                 args=(condition,))
    s2_clients = [
        multiprocessing.Process(
            name='stage_2[{}]'.format(i),
            target=stage_2,
            args=(condition,),
        )
        for i in range(1, 3)
    ]

    for c in s2_clients:
        c.start()
        time.sleep(1)
    s1.start()

    s1.join()
    for c in s2_clients:
        c.join()
        
# output
# Starting stage_2[1]
# Starting stage_2[2]
# Starting s1
# s1 done and ready for stage 2
# stage_2[1] running
# stage_2[2] running

In this example, two processes run stage_2 of a job in parallel, but only after stage_1 is finished.

Controlling concurrent access to resources

Sometimes it is useful to allow more than one worker to access a resource at a time, while still limiting the overall number. For example, a connection pool might support a fixed number of simultaneous connections, or a network application might support a fixed number of concurrent downloads. A Semaphore is one way to manage those connections.

import random
import multiprocessing
import time


class ActivePool:

    def __init__(self):
        super(ActivePool, self).__init__()
        self.mgr = multiprocessing.Manager()
        self.active = self.mgr.list()
        self.lock = multiprocessing.Lock()

    def makeActive(self, name):
        with self.lock:
            self.active.append(name)

    def makeInactive(self, name):
        with self.lock:
            self.active.remove(name)

    def __str__(self):
        with self.lock:
            return str(self.active)


def worker(s, pool):
    name = multiprocessing.current_process().name
    with s:
        pool.makeActive(name)
        print('Activating {} now running {}'.format(name, pool))
        time.sleep(random.random())
        pool.makeInactive(name)


if __name__ == '__main__':
    pool = ActivePool()
    s = multiprocessing.Semaphore(3)
    jobs = [
        multiprocessing.Process(
            target=worker,
            name=str(i),
            args=(s, pool),
        )
        for i in range(10)
    ]

    for j in jobs:
        j.start()

    while True:
        alive = 0
        for j in jobs:
            if j.is_alive():
                alive += 1
                j.join(timeout=0.1)
                print('Now running {}'.format(pool))
        if alive == 0:
            # all done
            break

# output 
# Activating 0 now running ['0', '1', '2']
# Activating 1 now running ['0', '1', '2']
# Activating 2 now running ['0', '1', '2']
# Now running ['0', '1', '2']
# Now running ['0', '1', '2']
# Now running ['0', '1', '2']
# Now running ['0', '1', '2']
# Activating 3 now running ['0', '1', '3']
# Activating 4 now running ['1', '3', '4']
# Activating 6 now running ['1', '4', '6']
# Now running ['1', '4', '6']
# Now running ['1', '4', '6']
# Activating 5 now running ['1', '4', '5']
# Now running ['1', '4', '5']
# Now running ['1', '4', '5']
# Now running ['1', '4', '5']
# Activating 8 now running ['4', '5', '8']
# Now running ['4', '5', '8']
# Now running ['4', '5', '8']
# Now running ['4', '5', '8']
# Now running ['4', '5', '8']
# Now running ['4', '5', '8']
# Activating 7 now running ['5', '8', '7']
# Now running ['5', '8', '7']
# Activating 9 now running ['8', '7', '9']
# Now running ['8', '7', '9']
# Now running ['8', '9']
# Now running ['8', '9']
# Now running ['9']
# Now running ['9']
# Now running ['9']
# Now running ['9']
# Now running [] 

In this example, the ActivePool class simply serves as a convenient way to track which processes are running at a given moment. A real resource pool would probably allocate a connection or some other value to the newly active process, and reclaim the value when the task is done. Here, the pool is used only to hold the names of the active processes, to show that no more than three run concurrently.

Managing shared state

In the previous example, the list of active processes is maintained centrally in the ActivePool instance via a special type of list object created by a Manager. The Manager is responsible for coordinating the state of shared information between all of its users.

import multiprocessing
import pprint


def worker(d, key, value):
    d[key] = value


if __name__ == '__main__':
    mgr = multiprocessing.Manager()
    d = mgr.dict()
    jobs = [
        multiprocessing.Process(
            target=worker,
            args=(d, i, i * 2),
        )
        for i in range(10)
    ]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()
    print('Results:', d)
    
# output
# Results: {0: 0, 1: 2, 2: 4, 3: 6, 4: 8, 5: 10, 6: 12, 7: 14, 8: 16, 9: 18}

Because the dictionary is created through the manager, it is shared, and updates are seen in all processes. Lists are also supported.
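
Here is a minimal sketch of the same pattern with a shared list, using mgr.list() in place of mgr.dict() (the worker function is invented for the example):

import multiprocessing


def worker(shared, value):
    shared.append(value)  # the proxy forwards the append to the manager process


if __name__ == '__main__':
    mgr = multiprocessing.Manager()
    shared = mgr.list()
    jobs = [
        multiprocessing.Process(target=worker, args=(shared, i))
        for i in range(5)
    ]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()
    print('Results:', sorted(shared))  # sorted, since completion order varies

# output
# Results: [0, 1, 2, 3, 4]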

Shared namespaces

In addition to dictionaries and lists, a Manager can create a shared Namespace.

import multiprocessing


def producer(ns, event):
    ns.value = 'This is the value'
    event.set()


def consumer(ns, event):
    try:
        print('Before event: {}'.format(ns.value))
    except Exception as err:
        print('Before event, error:', str(err))
    event.wait()
    print('After event:', ns.value)


if __name__ == '__main__':
    mgr = multiprocessing.Manager()
    namespace = mgr.Namespace()
    event = multiprocessing.Event()
    p = multiprocessing.Process(
        target=producer,
        args=(namespace, event),
    )
    c = multiprocessing.Process(
        target=consumer,
        args=(namespace, event),
    )

    c.start()
    p.start()

    c.join()
    p.join()
    
# output
# Before event, error: 'Namespace' object has no attribute 'value'
# After event: This is the value

Anything added to the Namespace is visible to all of the clients that receive the Namespace instance.

It is important to know that updates to the contents of mutable values within the namespace are not propagated automatically.

import multiprocessing


def producer(ns, event):
    # DOES NOT UPDATE GLOBAL VALUE!
    ns.my_list.append('This is the value')
    event.set()


def consumer(ns, event):
    print('Before event:', ns.my_list)
    event.wait()
    print('After event :', ns.my_list)


if __name__ == '__main__':
    mgr = multiprocessing.Manager()
    namespace = mgr.Namespace()
    namespace.my_list = []

    event = multiprocessing.Event()
    p = multiprocessing.Process(
        target=producer,
        args=(namespace, event),
    )
    c = multiprocessing.Process(
        target=consumer,
        args=(namespace, event),
    )

    c.start()
    p.start()

    c.join()
    p.join()
    
# output
# Before event: []
# After event : []

To update the list, attach it to the namespace object again, as the sketch below shows.
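
One way to make the change propagate is to mutate a local copy and then rebind the attribute, so the manager publishes the new value. A sketch of producer() from the example above rewritten this way:

def producer(ns, event):
    my_list = ns.my_list             # reading the attribute returns a copy
    my_list.append('This is the value')
    ns.my_list = my_list             # rebinding pushes the new list to the manager
    event.set()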

Process pools

The Pool class can be used to manage a fixed number of workers for simple cases. Return values are collected and returned as a list. The Pool arguments include the number of processes and a function to run when starting each task process (invoked once per child).

import multiprocessing


def do_calculation(data):
    return data * 2


def start_process():
    print('Starting', multiprocessing.current_process().name)


if __name__ == '__main__':
    inputs = list(range(10))
    print('Input :', inputs)

    builtin_outputs = map(do_calculation, inputs)
    print('Built-in:', builtin_outputs)

    pool_size = multiprocessing.cpu_count() * 2
    pool = multiprocessing.Pool(
        processes=pool_size,
        initializer=start_process,
    )
    pool_outputs = pool.map(do_calculation, inputs)
    pool.close()  # no more tasks
    pool.join()  # wrap up current tasks

    print('Pool :', pool_outputs)
    
# output
# Input : [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# Built-in: <map object at 0x1007b2be0>
# Starting ForkPoolWorker-3
# Starting ForkPoolWorker-4
# Starting ForkPoolWorker-5
# Starting ForkPoolWorker-6
# Starting ForkPoolWorker-1
# Starting ForkPoolWorker-7
# Starting ForkPoolWorker-2
# Starting ForkPoolWorker-8
# Pool : [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

Except that the individual tasks run in parallel, the result of the map() method is functionally equivalent to that of the built-in map(). Because the Pool processes its inputs in parallel, close() and join() can be used to synchronize the main process with the task processes and ensure proper cleanup.

By default, Pool creates a fixed number of worker processes and passes jobs to them until there are no more jobs. Setting the maxtasksperchild parameter tells the Pool to restart a worker process after it has finished a few tasks, preventing long-running workers from consuming ever more system resources.

import multiprocessing


def do_calculation(data):
    return data * 2


def start_process():
    print('Starting', multiprocessing.current_process().name)


if __name__ == '__main__':
    inputs = list(range(10))
    print('Input :', inputs)

    builtin_outputs = map(do_calculation, inputs)
    print('Built-in:', builtin_outputs)

    pool_size = multiprocessing.cpu_count() * 2
    pool = multiprocessing.Pool(
        processes=pool_size,
        initializer=start_process,
        maxtasksperchild=2,
    )
    pool_outputs = pool.map(do_calculation, inputs)
    pool.close()  # no more tasks
    pool.join()  # wrap up current tasks

    print('Pool :', pool_outputs)
    
# output
# Input : [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# Built-in: <map object at 0x1007b21d0>
# Starting ForkPoolWorker-1
# Starting ForkPoolWorker-2
# Starting ForkPoolWorker-4
# Starting ForkPoolWorker-5
# Starting ForkPoolWorker-6
# Starting ForkPoolWorker-3
# Starting ForkPoolWorker-7
# Starting ForkPoolWorker-8
# Pool : [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

Even though there is no more work, the Pool restarts the workers after they complete their allotted tasks. In this output, eight workers are created even though there are only 10 tasks, and each worker can complete two of them at a time.

Implementing MapReduce

The Pool class can be used to create a simple single-server MapReduce implementation. Although it does not give the full benefits of distributed processing, it does illustrate how easy it is to break some problems down into distributable units of work.

In a MapReduce-based system, input data is broken down into chunks for processing by different worker instances. Each chunk of input data is mapped to an intermediate state using a simple transformation. The intermediate data is then collected together and partitioned based on a key value, so that all of the related values are together. Finally, the partitioned data is reduced to a result set.

# multiprocessing_mapreduce.py
import collections
import itertools
import multiprocessing


class SimpleMapReduce:

    def __init__(self, map_func, reduce_func, num_workers=None):
        """ map_func Function to map inputs to intermediate data. Takes as argument one input value and returns a tuple with the key and a value to be reduced. reduce_func Function to reduce partitioned version of intermediate data to final output. Takes as argument a key as produced by map_func and a sequence of the values associated with that key. num_workers The number of workers to create in the pool. Defaults to the number of CPUs available on the current host. """
        self.map_func = map_func
        self.reduce_func = reduce_func
        self.pool = multiprocessing.Pool(num_workers)

    def partition(self, mapped_values):
        """Organize the mapped values by their key. Returns an unsorted sequence of tuples with a key and a sequence of values. """
        partitioned_data = collections.defaultdict(list)
        for key, value in mapped_values:
            partitioned_data[key].append(value)
        return partitioned_data.items()

    def __call__(self, inputs, chunksize=1):
        """Process the inputs through the map and reduce functions given. inputs An iterable containing the input data to be processed. chunksize=1 The portion of the input data to hand to each worker. This can be used to tune performance during the mapping phase. """
        map_responses = self.pool.map(
            self.map_func,
            inputs,
            chunksize=chunksize,
        )
        partitioned_data = self.partition(
            itertools.chain(*map_responses)
        )
        reduced_values = self.pool.map(
            self.reduce_func,
            partitioned_data,
        )
        return reduced_values

The following example script uses SimpleMapReduce to count the "words" in the reStructuredText source for this article, ignoring some of the markup.

# multiprocessing_wordcount.py 
import multiprocessing
import string

from multiprocessing_mapreduce import SimpleMapReduce


def file_to_words(filename):
    """Read a file and return a sequence of (word, occurences) values. """
    STOP_WORDS = set([
        'a', 'an', 'and', 'are', 'as', 'be', 'by', 'for', 'if',
        'in', 'is', 'it', 'of', 'or', 'py', 'rst', 'that', 'the',
        'to', 'with',
    ])
    TR = str.maketrans({
        p: ' '
        for p in string.punctuation
    })

    print('{} reading {}'.format(
        multiprocessing.current_process().name, filename))
    output = []

    with open(filename, 'rt') as f:
        for line in f:
            # Skip comment lines.
            if line.lstrip().startswith('..'):
                continue
            line = line.translate(TR)  # Strip punctuation
            for word in line.split():
                word = word.lower()
                if word.isalpha() and word not in STOP_WORDS:
                    output.append((word, 1))
    return output


def count_words(item):
    """Convert the partitioned data for a word to a tuple containing the word and the number of occurences. """
    word, occurences = item
    return (word, sum(occurences))


if __name__ == '__main__':
    import operator
    import glob

    input_files = glob.glob('*.rst')

    mapper = SimpleMapReduce(file_to_words, count_words)
    word_counts = mapper(input_files)
    word_counts.sort(key=operator.itemgetter(1))
    word_counts.reverse()

    print('\nTOP 20 WORDS BY FREQUENCY\n')
    top20 = word_counts[:20]
    longest = max(len(word) for word, count in top20)
    for word, count in top20:
        print('{word:<{len}}: {count:5}'.format(
            len=longest + 1,
            word=word,
            count=count)
        )

The file_to_words() function converts each input file to a sequence of tuples containing the word and the number 1 (representing a single occurrence). The data is divided up by partition() using the word as the key, so the resulting structure consists of a key and a sequence of 1 values representing each occurrence of the word. During the reduction phase, count_words() converts the partitioned data into a set of tuples containing a word and the count for that word.

$ python3 -u multiprocessing_wordcount.py

ForkPoolWorker-1 reading basics.rst
ForkPoolWorker-2 reading communication.rst
ForkPoolWorker-3 reading index.rst
ForkPoolWorker-4 reading mapreduce.rst

TOP 20 WORDS BY FREQUENCY

process         :    83
running         :    45
multiprocessing :    44
worker          :    40
starting        :    37
now             :    35
after           :    34
processes       :    31
start           :    29
header          :    27
pymotw          :    27
caption         :    27
end             :    27
daemon          :    22
can             :    22
exiting         :    21
forkpoolworker  :    21
consumer        :    20
main            :    18
event           :    16



