〖Python〗-- Celery Distributed Task Queue

【Celery Distributed Task Queue】

1. Introduction and Basic Usage of Celery

Celery is a distributed asynchronous message task queue developed in Python. It makes it easy to run tasks asynchronously; if your business scenario calls for async work, Celery is worth considering. A couple of concrete examples:

  1. You want to run a batch command on 100 machines, which may take a long time. Instead of making your program block waiting for the result, Celery hands you back a task ID; later you use that ID to fetch the result, and while the task is still running you are free to do other things.
  2. You want a scheduled job, e.g. check all of your customers' records every day, and if today is a customer's birthday, send them an SMS greeting.

When executing tasks, Celery needs a message broker to receive and send task messages and to store task results; RabbitMQ or Redis is typically used.

1.1 Advantages of Celery:

  Simple: once you are familiar with Celery's workflow, configuration and usage are fairly straightforward.

  Highly available: if a task fails or the connection drops mid-execution, Celery will automatically attempt to retry (task-level retries are opt-in; see the sketch after this list).

  Fast: a single Celery process can handle millions of tasks per minute.

  Flexible: almost every Celery component can be extended or customized.
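Broker-connection retries happen automatically; retrying a failed task, by contrast, is configured on the task itself. A minimal sketch (the task, URL argument, and retry counts are illustrative, not part of the original text):

from celery import Celery

app = Celery('tasks', broker='redis://localhost')

@app.task(bind=True, max_retries=3, default_retry_delay=5)
def fetch_page(self, url):
    # hypothetical flaky operation: retried up to 3 times, 5 seconds apart
    import urllib.request
    try:
        return urllib.request.urlopen(url).read()[:100]
    except OSError as exc:
        raise self.retry(exc=exc)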

Basic Celery workflow diagram (image not reproduced).

1.2 Installing and Using Celery

Celery's default broker is RabbitMQ; only one line of configuration is needed:

  broker_url = 'amqp://guest:guest@localhost:5672//'

 

Using Redis as the broker also works.

  Install the redis bundle:

$ pip3 install -U "celery[redis]"
 
Configuration

Configuration is easy, just configure the location of your Redis database:

app.conf.broker_url = 'redis://localhost:6379/0'

Where the URL is in the format of:

redis://:password@hostname:port/db_number

All fields after the scheme are optional, and will default to localhost on port 6379, using database 0.
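For example, a sketch with made-up credentials, host, and db number:

app.conf.broker_url = 'redis://:s3cretpass@redis.example.com:6380/1'   # password auth, non-default port, db 1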
 

 

If you also want to fetch each task's result, you need to configure where results are stored:

If you also want to store the state and return values of tasks in Redis, you should configure these settings:

app.conf.result_backend = 'redis://localhost:6379/0'

 

1.3 Getting Started with Celery

  Install the celery module:

    pip3 install celery

Create a Celery application to define your list of tasks.

  Create a task file named tasks.py:

 
from celery import Celery

app = Celery('tasks',
             broker='redis://localhost',
             # with a username/password: broker="redis://:mima@127.0.0.1"
             backend='redis://localhost')

@app.task
def add(x, y):
    print("running...", x, y)
    return x + y

Start a Celery worker to begin listening for and executing tasks:

$ celery -A tasks worker --loglevel=info

 

Calling the task

  Open another terminal, start an interactive Python shell, and call the task:

>>> from tasks import add
>>> add.delay(4, 4)
Your worker terminal will show that it received a task. If you want to see the task's result, assign the call to a variable when invoking it:

  >>> result = add.delay(4, 4)

 

The ready() method returns whether the task has finished processing or not:

>>> result.ready()
False

You can wait for the result to complete, but this is rarely used since it turns the asynchronous call into a synchronous one:

>>> result.get(timeout=1)
8

In case the task raised an exception, get() will re-raise the exception, but you can override this by specifying the propagate argument:

>>> result.get(propagate=False) 

If the task raised an exception you can also gain access to the original traceback:

>>> result.traceback 
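To make the exception path concrete, here is a hedged interactive sketch with a hypothetical failing task (not from the original text):

@app.task
def div(x, y):
    return x / y   # raises ZeroDivisionError when y == 0

>>> r = div.delay(1, 0)
>>> r.get(propagate=False)   # returns the exception object instead of raising it
ZeroDivisionError('division by zero')
>>> r.traceback              # the worker-side traceback, as a string
'Traceback (most recent call last): ...'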

Using Celery in a project

Celery can be configured as an application.

The directory layout looks like this:

proj/__init__.py
    /celery.py
    /tasks.py

Contents of proj/celery.py:

 
from __future__ import absolute_import, unicode_literals
from celery import Celery
 
app = Celery('proj',
             broker='amqp://',
             backend='amqp://',
             include=['proj.tasks'])
 
# Optional configuration, see the application user guide.
app.conf.update(
    result_expires=3600,
)
 
if __name__ == '__main__':
    app.start()
 

Contents of proj/tasks.py:

 
from __future__ import absolute_import, unicode_literals
from .celery import app


@app.task
def add(x, y):
    return x + y


@app.task
def mul(x, y):
    return x * y


@app.task
def xsum(numbers):
    return sum(numbers)
 

Start the worker:

$ celery -A proj worker -l info

Output:

 
-------------- celery@Zhangwei-MacBook-Pro.local v4.0.2 (latentcall)
---- **** -----
--- * ***  * -- Darwin-15.6.0-x86_64-i386-64bit 2017-01-26 21:50:24
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         proj:0x103a020f0
- ** ---------- .> transport:   redis://localhost:6379//
- ** ---------- .> results:     redis://localhost/
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
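
With the worker running, the project's tasks can be called from another shell. delay() is shorthand; apply_async() exposes more options (countdown below is a standard option that delays execution; the values are illustrative):

>>> from proj.tasks import add, mul
>>> add.delay(2, 2)
>>> mul.apply_async((4, 5), countdown=10)   # execute no sooner than 10 seconds from now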
 

Starting the worker in the background

In the background

In production you'll want to run the worker in the background; this is described in detail in the daemonization tutorial.

The daemonization script uses the celery multi command to start one or more workers in the background:

$ celery multi start w1 -A proj -l info
celery multi v4.0.0 (latentcall)
> Starting nodes...
    > w1.halcyon.local: OK

You can restart it too:

$ celery  multi restart w1 -A proj -l info
celery multi v4.0.0 (latentcall)
> Stopping nodes...
    > w1.halcyon.local: TERM -> 64024
> Waiting for 1 node.....
    > w1.halcyon.local: OK
> Restarting node w1.halcyon.local: OK
celery multi v4.0.0 (latentcall)
> Stopping nodes...
    > w1.halcyon.local: TERM -> 64052

or stop it:

$ celery multi stop w1 -A proj -l info

The stop command is asynchronous, so it won't wait for the worker to shut down. You'll probably want to use the stopwait command instead; this ensures that all currently executing tasks are completed before exiting:

$ celery multi stopwait w1 -A proj -l info
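
In production you will usually also pin down where pid and log files go; %n expands to the node name and %I to the process index (both are standard celery multi placeholders; the paths are illustrative):

$ celery multi start w1 w2 -A proj -l info \
    --pidfile=/var/run/celery/%n.pid \
    --logfile=/var/log/celery/%n%I.log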

Celery Periodic Tasks

Celery supports periodic tasks: set when a task should run, and Celery will trigger it on schedule for you. This scheduling module is called celery beat.

Write a script named periodic_task.py:

 
from celery import Celery
from celery.schedules import crontab
 
app = Celery()
 
@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Calls test('hello') every 10 seconds.
    sender.add_periodic_task(10.0, test.s('hello'), name='add every 10')
 
    # Calls test('world') every 30 seconds
    sender.add_periodic_task(30.0, test.s('world'), expires=10)
 
    # Executes every Monday morning at 7:30 a.m.
    sender.add_periodic_task(
        crontab(hour=7, minute=30, day_of_week=1),
        test.s('Happy Mondays!'),
    )
 
@app.task
def test(arg):
    print(arg)
 

add_periodic_task adds one scheduled task entry.
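A side note on test.s('hello'): .s() builds a signature, i.e. the task bundled with its arguments so beat can send it later. A tiny sketch:

sig = test.s('hello')   # task + frozen arguments, not yet sent
sig.delay()             # equivalent to test.delay('hello')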

The above adds periodic tasks by calling a function; you can also add them in a configuration-file style. The entry below runs a task every 30 seconds:

 
app.conf.beat_schedule = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': 30.0,
        'args': (16, 16)
    },
}
app.conf.timezone = 'UTC'
 

  Once the tasks are added, celery needs a separate process to launch them on schedule. Note: this process only sends tasks, it does not execute them. It keeps checking your task schedule, and whenever a task is due, it sends a task-call message for a celery worker to execute.

Start the task scheduler, celery beat:

$ celery -A periodic_task beat

The output looks like this:

 
celery beat v4.0.2 (latentcall) is starting.
__    -    ... __   -        _
LocalTime -> 2017-02-08 18:39:31
Configuration ->
    . broker -> redis://localhost:6379//
    . loader -> celery.loaders.app.AppLoader
    . scheduler -> celery.beat.PersistentScheduler
    . db -> celerybeat-schedule
    . logfile -> [stderr]@%WARNING
    . maxinterval -> 5.00 minutes (300s)
 

 

One step remains: you still need to start a worker to execute the tasks that celery beat sends.

Start a celery worker to execute the tasks:

 
$ celery -A periodic_task worker
  
 -------------- celery@Alexs-MacBook-Pro.local v4.0.2 (latentcall)
---- **** -----
--- * ***  * -- Darwin-15.6.0-x86_64-i386-64bit 2017-02-08 18:42:08
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         tasks:0x104d420b8
- ** ---------- .> transport:   redis://localhost:6379//
- ** ---------- .> results:     redis://localhost/
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
 

Now watch the worker's output: every few seconds, a scheduled task is executed!

Note: Beat needs to store the last run times of the tasks in a local database file (named celerybeat-schedule by default), so it needs write access to the current directory; alternatively you can specify a custom location for this file:

$ celery -A periodic_task beat -s /home/celery/var/run/celerybeat-schedule
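
For local development it can be convenient to embed the beat scheduler inside the worker with the -B flag (a standard option; the Celery docs advise against it in production, since only one embedded beat must ever run):

$ celery -A periodic_task worker -B -l info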

More complex schedules

The periodic tasks above are simple: just run a task every N seconds. But what if you want an email sent at 8 a.m. every Monday, Wednesday, and Friday? That's easy too: use the crontab feature, which works just like the Linux crontab and lets you customize exactly when tasks run.

Linux crontab http://www.cnblogs.com/peida/archive/2013/01/08/2850483.html 

 

 
from celery.schedules import crontab
 
app.conf.beat_schedule = {
    # Executes every Monday morning at 7:30 a.m.
    'add-every-monday-morning': {
        'task': 'tasks.add',
        'schedule': crontab(hour=7, minute=30, day_of_week=1),
        'args': (16, 16),
    },
}
 

The entry above runs the tasks.add task every Monday at 7:30 a.m.

More scheduling options:

Example                                                Meaning
crontab()                                              Execute every minute.
crontab(minute=0, hour=0)                              Execute daily at midnight.
crontab(minute=0, hour='*/3')                          Execute every three hours: midnight, 3am, 6am, 9am, noon, 3pm, 6pm, 9pm.
crontab(minute=0, hour='0,3,6,9,12,15,18,21')          Same as previous.
crontab(minute='*/15')                                 Execute every 15 minutes.
crontab(day_of_week='sunday')                          Execute every minute (!) on Sundays.
crontab(minute='*', hour='*', day_of_week='sun')       Same as previous.
crontab(minute='*/10', hour='3,17,22',
        day_of_week='thu,fri')                         Execute every ten minutes, but only between 3-4 am, 5-6 pm, and 10-11 pm on Thursdays or Fridays.
crontab(minute=0, hour='*/2,*/3')                      Execute every even hour, and every hour divisible by three. This means: at every hour except 1am, 5am, 7am, 11am, 1pm, 5pm, 7pm, 11pm.
crontab(minute=0, hour='*/5')                          Execute every hour divisible by 5. This means it is triggered at 3pm, not 5pm (since 3pm equals the 24-hour clock value of 15, which is divisible by 5).
crontab(minute=0, hour='*/3,8-17')                     Execute every hour divisible by 3, and every hour during office hours (8am-5pm).
crontab(0, 0, day_of_month='2')                        Execute on the second day of every month.
crontab(0, 0, day_of_month='2-30/3')                   Execute on every even numbered day.
crontab(0, 0, day_of_month='1-7,15-21')                Execute on the first and third weeks of the month.
crontab(0, 0, day_of_month='11', month_of_year='5')    Execute on the eleventh of May every year.
crontab(0, 0, month_of_year='*/3')                     Execute on the first month of every quarter.

The options above cover the vast majority of scheduling needs; you can even schedule tasks by solar events (sunrise, sunset, and so on), see http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html#solar-schedules

Using Celery with Django

Django integrates with Celery easily; only a little configuration is needed to run tasks asynchronously.

If you have a modern Django project layout like:

- proj/
  - proj/__init__.py
  - proj/settings.py
  - proj/urls.py
  - manage.py

then the recommended way is to create a new proj/proj/celery.py module that defines the Celery instance:

file: proj/proj/celery.py 

 
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
 
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')
 
app = Celery('proj')
 
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
 
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
 
 
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
 

Then you need to import this app in your proj/proj/__init__.py module. This ensures that the app is loaded when Django starts so that the @shared_task decorator (mentioned later) will use it:

proj/proj/__init__.py: 

  

 
from __future__ import absolute_import, unicode_literals
 
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app
 
__all__ = ['celery_app']
 

Note that this example project layout is suitable for larger projects, for simple projects you may use a single contained module that defines both the app and tasks, like in the First Steps with Celery tutorial.

Let's break down what happens in the first module. First we import absolute imports from the future, so that our celery.py module won't clash with the library:

from __future__ import absolute_import

Then we set the default DJANGO_SETTINGS_MODULE environment variable for the celery command-line program:

    
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

You don't need this line, but it saves you from always passing in the settings module to the celery program. It must always come before creating the app instance, which is what we do next:

app = Celery('proj')

This is our instance of the library.

We also add the Django settings module as a configuration source for Celery. This means that you don’t have to use multiple configuration files, and instead configure Celery directly from the Django settings; but you can also separate them if wanted.

The uppercase namespace means that all Celery configuration options must be specified in uppercase instead of lowercase, and start with CELERY_; for example, the task_always_eager setting becomes CELERY_TASK_ALWAYS_EAGER, and the broker_url setting becomes CELERY_BROKER_URL.

You can pass the object directly here, but using a string is better since then the worker doesn’t have to serialize the object.

app.config_from_object('django.conf:settings', namespace='CELERY')
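
In practice that means entries like these in your Django settings.py (the values are illustrative, not from the original text):

# proj/settings.py
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
CELERY_TASK_ALWAYS_EAGER = False   # False: run tasks via the worker, not inline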

Next, a common practice for reusable apps is to define all tasks in a separate tasks.py module, and Celery has a way to auto-discover these modules:

  app.autodiscover_tasks()

With the line above Celery will automatically discover tasks from all of your installed apps, following the tasks.py convention:

  

- app1/
    - tasks.py
    - models.py
- app2/
    - tasks.py
    - models.py

Finally, the debug_task example is a task that dumps its own request information. This is using the new bind=True task option introduced in Celery 3.1 to easily refer to the current task instance.

Then write your tasks in each app's tasks.py:

 
# Create your tasks here
from __future__ import absolute_import, unicode_literals
from celery import shared_task
 
 
@shared_task
def add(x, y):
    return x + y
 
 
@shared_task
def mul(x, y):
    return x * y
 
 
@shared_task
def xsum(numbers):
    return sum(numbers)
 

 

Calling a celery task from your Django views

 
from django.shortcuts import render, HttpResponse

# Create your views here.

from bernard import tasks

def task_test(request):

    res = tasks.add.delay(228, 24)
    print("start running task")
    # note: res.get() blocks until the worker finishes the task
    print("async task res", res.get())

    return HttpResponse('res %s' % res.get())
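
Calling res.get() in the view blocks the request until the task finishes, which defeats the purpose of going async. A common alternative (a sketch; the view names and URL wiring are hypothetical) returns the task id immediately and lets the client poll:

from celery.result import AsyncResult
from django.http import JsonResponse

from bernard import tasks

def task_start(request):
    # fire and forget: respond immediately with the task id
    res = tasks.add.delay(228, 24)
    return JsonResponse({'task_id': res.id})

def task_status(request, task_id):
    # look the task up later by id and report its state/result
    res = AsyncResult(task_id)
    payload = {'state': res.state}
    if res.ready():
        payload['result'] = res.get(propagate=False)
    return JsonResponse(payload)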
 

 

Using periodic tasks in Django

There's the django-celery-beat extension that stores the schedule in the Django database and presents a convenient admin interface to manage periodic tasks at runtime.

To install and use this extension:

  1. Use pip to install the package:

    $ pip install django-celery-beat
    
  2. Add the django_celery_beat module to INSTALLED_APPS in your Django project's settings.py:

        INSTALLED_APPS = (
            ...,
            'django_celery_beat',
        )

     Note that there is no dash in the module name, only underscores.
  3. Apply Django database migrations so that the necessary tables are created:

    $ python manage.py migrate
    
  4. Start the celery beat service using the django scheduler:

    $ celery -A proj beat -l info -S django
    
  5. Visit the Django-Admin interface to set up some periodic tasks.
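
Periodic tasks can also be created from code using the extension's models. A hedged sketch (the task path bernard.tasks.scp_task and the 2-minute interval mirror the example mentioned below, but the exact field values are illustrative):

from django_celery_beat.models import IntervalSchedule, PeriodicTask

# every 2 minutes
schedule, _ = IntervalSchedule.objects.get_or_create(
    every=2, period=IntervalSchedule.MINUTES)

PeriodicTask.objects.create(
    interval=schedule,
    name='scp every 2 minutes',     # human-readable, must be unique
    task='bernard.tasks.scp_task',  # registered task name
)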

 

In the admin page there are 3 tables.

After configuration it looks like this (screenshot not reproduced).

Now start your celery beat and worker: you'll see that every 2 minutes, beat sends a task message and the worker executes the scp_task task.
