Chapter 11: Python Network Programming Basics (Part 3)

Topics for This Lesson

  • Creating and using threads
  • Introduction to message queues
  • Hands-on: working with memcached and Redis from Python
  • This week's assignment

  

Introduction to Message Queues

A queue is created in memory: once the program in the process finishes running, the queue is cleared and its messages are gone.

  • First-In-First-Out queue: like an elevator with two doors, whatever goes in first comes out first.
    class Queue:
        '''Create a queue object with a given maximum size.
    
        If maxsize is <= 0, the queue size is infinite.
        '''
    
        def __init__(self, maxsize=0):
            self.maxsize = maxsize
            self._init(maxsize)
    
            # mutex must be held whenever the queue is mutating.  All methods
            # that acquire mutex must release it before returning.  mutex
            # is shared between the three conditions, so acquiring and
            # releasing the conditions also acquires and releases mutex.
            self.mutex = threading.Lock()
    
            # Notify not_empty whenever an item is added to the queue; a
            # thread waiting to get is notified then.
            self.not_empty = threading.Condition(self.mutex)
    
            # Notify not_full whenever an item is removed from the queue;
            # a thread waiting to put is notified then.
            self.not_full = threading.Condition(self.mutex)
    
            # Notify all_tasks_done whenever the number of unfinished tasks
            # drops to zero; thread waiting to join() is notified to resume
            self.all_tasks_done = threading.Condition(self.mutex)
            self.unfinished_tasks = 0
    
        def task_done(self):
            '''Indicate that a formerly enqueued task is complete.
    
            Used by Queue consumer threads.  For each get() used to fetch a task,
            a subsequent call to task_done() tells the queue that the processing
            on the task is complete.
    
            If a join() is currently blocking, it will resume when all items
            have been processed (meaning that a task_done() call was received
            for every item that had been put() into the queue).
    
            Raises a ValueError if called more times than there were items
            placed in the queue.
            '''
            with self.all_tasks_done:
                unfinished = self.unfinished_tasks - 1
                if unfinished <= 0:
                    if unfinished < 0:
                        raise ValueError('task_done() called too many times')
                    self.all_tasks_done.notify_all()
                self.unfinished_tasks = unfinished
    
        def join(self):
            '''Blocks until all items in the Queue have been gotten and processed.
    
            The count of unfinished tasks goes up whenever an item is added to the
            queue. The count goes down whenever a consumer thread calls task_done()
            to indicate the item was retrieved and all work on it is complete.
    
            When the count of unfinished tasks drops to zero, join() unblocks.
            '''
            with self.all_tasks_done:
                while self.unfinished_tasks:
                    self.all_tasks_done.wait()
    
        def qsize(self):
            '''Return the approximate size of the queue (not reliable!).'''
            with self.mutex:
                return self._qsize()
    
        def empty(self):
            '''Return True if the queue is empty, False otherwise (not reliable!).
    
            This method is likely to be removed at some point.  Use qsize() == 0
            as a direct substitute, but be aware that either approach risks a race
            condition where a queue can grow before the result of empty() or
            qsize() can be used.
    
            To create code that needs to wait for all queued tasks to be
            completed, the preferred technique is to use the join() method.
            '''
            with self.mutex:
                return not self._qsize()
    
        def full(self):
            '''Return True if the queue is full, False otherwise (not reliable!).
    
            This method is likely to be removed at some point.  Use qsize() >= n
            as a direct substitute, but be aware that either approach risks a race
            condition where a queue can shrink before the result of full() or
            qsize() can be used.
            '''
            with self.mutex:
                return 0 < self.maxsize <= self._qsize()
    
        def put(self, item, block=True, timeout=None):
            '''Put an item into the queue.
    
            If optional args 'block' is true and 'timeout' is None (the default),
            block if necessary until a free slot is available. If 'timeout' is
            a non-negative number, it blocks at most 'timeout' seconds and raises
            the Full exception if no free slot was available within that time.
            Otherwise ('block' is false), put an item on the queue if a free slot
            is immediately available, else raise the Full exception ('timeout'
            is ignored in that case).
            '''
            with self.not_full:
                if self.maxsize > 0:
                    if not block:
                        if self._qsize() >= self.maxsize:
                            raise Full
                    elif timeout is None:
                        while self._qsize() >= self.maxsize:
                            self.not_full.wait()
                    elif timeout < 0:
                        raise ValueError("'timeout' must be a non-negative number")
                    else:
                        endtime = time() + timeout
                        while self._qsize() >= self.maxsize:
                            remaining = endtime - time()
                            if remaining <= 0.0:
                                raise Full
                            self.not_full.wait(remaining)
                self._put(item)
                self.unfinished_tasks += 1
                self.not_empty.notify()
    
        def get(self, block=True, timeout=None):
            '''Remove and return an item from the queue.
    
            If optional args 'block' is true and 'timeout' is None (the default),
            block if necessary until an item is available. If 'timeout' is
            a non-negative number, it blocks at most 'timeout' seconds and raises
            the Empty exception if no item was available within that time.
            Otherwise ('block' is false), return an item if one is immediately
            available, else raise the Empty exception ('timeout' is ignored
            in that case).
            '''
            with self.not_empty:
                if not block:
                    if not self._qsize():
                        raise Empty
                elif timeout is None:
                    while not self._qsize():
                        self.not_empty.wait()
                elif timeout < 0:
                    raise ValueError("'timeout' must be a non-negative number")
                else:
                    endtime = time() + timeout
                    while not self._qsize():
                        remaining = endtime - time()
                        if remaining <= 0.0:
                            raise Empty
                        self.not_empty.wait(remaining)
                item = self._get()
                self.not_full.notify()
                return item
    
        def put_nowait(self, item):
            '''Put an item into the queue without blocking.
    
            Only enqueue the item if a free slot is immediately available.
            Otherwise raise the Full exception.
            '''
            return self.put(item, block=False)
    
        def get_nowait(self):
            '''Remove and return an item from the queue without blocking.
    
            Only get an item if one is immediately available. Otherwise
            raise the Empty exception.
            '''
            return self.get(block=False)
    
        # Override these methods to implement other queue organizations
        # (e.g. stack or priority queue).
        # These will only be called with appropriate locks held
    
        # Initialize the queue representation
        def _init(self, maxsize):
            self.queue = deque()
    
        def _qsize(self):
            return len(self.queue)
    
        # Put a new item in the queue
        def _put(self, item):
            self.queue.append(item)
    
        # Get an item from the queue
        def _get(self):
            return self.queue.popleft()
    class Queue source
    import queue
    
    q = queue.Queue()
    q.put(123)
    q.put(456)
    print(q.get())
    
    #123
    Queue() example
  • Last-In-First-Out queue: like an elevator with only one door, whatever goes in last comes out first.
    class LifoQueue(Queue):
        '''Variant of Queue that retrieves most recently added entries first.'''
    
        def _init(self, maxsize):
            self.queue = []
    
        def _qsize(self):
            return len(self.queue)
    
        def _put(self, item):
            self.queue.append(item)
    
        def _get(self):
            return self.queue.pop()
    class LifoQueue source
    import queue
    
    q = queue.LifoQueue()
    q.put(123)
    q.put(456)
    print(q.get())
    
    #456
    LifoQueue() example
  • Priority queue (PriorityQueue): each entry is given a priority, and the entry with the higher priority comes out first.
    class PriorityQueue(Queue):
        '''Variant of Queue that retrieves open entries in priority order (lowest first).
    
        Entries are typically tuples of the form:  (priority number, data).
        '''
    
        def _init(self, maxsize):
            self.queue = []
    
        def _qsize(self):
            return len(self.queue)
    
        def _put(self, item):
            heappush(self.queue, item)
    
        def _get(self):
            return heappop(self.queue)
    class PriorityQueue source
    import queue
    
    q = queue.PriorityQueue()
    q.put((1,'alex1'))
    q.put((6,'alex2'))
    q.put((3,'alex3'))
    
    print(q.get()) # the smaller the number, the higher the priority
    
    """
    (1, 'alex1')
    """
    PriorityQueue() example
  • Double-ended queue (deque)
    class deque(object):
        """
        deque([iterable[, maxlen]]) --> deque object
        
        A list-like sequence optimized for data accesses near its endpoints.
        """
        def append(self, *args, **kwargs): # real signature unknown
            """ Add an element to the right side of the deque. """
            pass
    
        def appendleft(self, *args, **kwargs): # real signature unknown
            """ Add an element to the left side of the deque. """
            pass
    
        def clear(self, *args, **kwargs): # real signature unknown
            """ Remove all elements from the deque. """
            pass
    
        def copy(self, *args, **kwargs): # real signature unknown
            """ Return a shallow copy of a deque. """
            pass
    
        def count(self, value): # real signature unknown; restored from __doc__
            """ D.count(value) -> integer -- return number of occurrences of value """
            return 0
    
        def extend(self, *args, **kwargs): # real signature unknown
            """ Extend the right side of the deque with elements from the iterable """
            pass
    
        def extendleft(self, *args, **kwargs): # real signature unknown
            """ Extend the left side of the deque with elements from the iterable """
            pass
    
        def index(self, value, start=None, stop=None): # real signature unknown; restored from __doc__
            """
            D.index(value, [start, [stop]]) -> integer -- return first index of value.
            Raises ValueError if the value is not present.
            """
            return 0
    
        def insert(self, index, p_object): # real signature unknown; restored from __doc__
            """ D.insert(index, object) -- insert object before index """
            pass
    
        def pop(self, *args, **kwargs): # real signature unknown
            """ Remove and return the rightmost element. """
            pass
    
        def popleft(self, *args, **kwargs): # real signature unknown
            """ Remove and return the leftmost element. """
            pass
    
        def remove(self, value): # real signature unknown; restored from __doc__
            """ D.remove(value) -- remove first occurrence of value. """
            pass
    
        def reverse(self): # real signature unknown; restored from __doc__
            """ D.reverse() -- reverse *IN PLACE* """
            pass
    
        def rotate(self, *args, **kwargs): # real signature unknown
            """ Rotate the deque n steps to the right (default n=1).  If n is negative, rotates left. """
            pass
    
        def __add__(self, *args, **kwargs): # real signature unknown
            """ Return self+value. """
            pass
    
        def __bool__(self, *args, **kwargs): # real signature unknown
            """ self != 0 """
            pass
    
        def __contains__(self, *args, **kwargs): # real signature unknown
            """ Return key in self. """
            pass
    
        def __copy__(self, *args, **kwargs): # real signature unknown
            """ Return a shallow copy of a deque. """
            pass
    
        def __delitem__(self, *args, **kwargs): # real signature unknown
            """ Delete self[key]. """
            pass
    
        def __eq__(self, *args, **kwargs): # real signature unknown
            """ Return self==value. """
            pass
    
        def __getattribute__(self, *args, **kwargs): # real signature unknown
            """ Return getattr(self, name). """
            pass
    
        def __getitem__(self, *args, **kwargs): # real signature unknown
            """ Return self[key]. """
            pass
    
        def __ge__(self, *args, **kwargs): # real signature unknown
            """ Return self>=value. """
            pass
    
        def __gt__(self, *args, **kwargs): # real signature unknown
            """ Return self>value. """
            pass
    
        def __iadd__(self, *args, **kwargs): # real signature unknown
            """ Implement self+=value. """
            pass
    
        def __imul__(self, *args, **kwargs): # real signature unknown
            """ Implement self*=value. """
            pass
    
        def __init__(self, iterable=(), maxlen=None): # known case of _collections.deque.__init__
            """
            deque([iterable[, maxlen]]) --> deque object
            
            A list-like sequence optimized for data accesses near its endpoints.
            # (copied from class doc)
            """
            pass
    
        def __iter__(self, *args, **kwargs): # real signature unknown
            """ Implement iter(self). """
            pass
    
        def __len__(self, *args, **kwargs): # real signature unknown
            """ Return len(self). """
            pass
    
        def __le__(self, *args, **kwargs): # real signature unknown
            """ Return self<=value. """
            pass
    
        def __lt__(self, *args, **kwargs): # real signature unknown
            """ Return self<value. """
            pass
    
        def __mul__(self, *args, **kwargs): # real signature unknown
            """ Return self*value.n """
            pass
    
        @staticmethod # known case of __new__
        def __new__(*args, **kwargs): # real signature unknown
            """ Create and return a new object.  See help(type) for accurate signature. """
            pass
    
        def __ne__(self, *args, **kwargs): # real signature unknown
            """ Return self!=value. """
            pass
    
        def __reduce__(self, *args, **kwargs): # real signature unknown
            """ Return state information for pickling. """
            pass
    
        def __repr__(self, *args, **kwargs): # real signature unknown
            """ Return repr(self). """
            pass
    
        def __reversed__(self): # real signature unknown; restored from __doc__
            """ D.__reversed__() -- return a reverse iterator over the deque """
            pass
    
        def __rmul__(self, *args, **kwargs): # real signature unknown
            """ Return self*value. """
            pass
    
        def __setitem__(self, *args, **kwargs): # real signature unknown
            """ Set self[key] to value. """
            pass
    
        def __sizeof__(self): # real signature unknown; restored from __doc__
            """ D.__sizeof__() -- size of D in memory, in bytes """
            pass
    
        maxlen = property(lambda self: object(), lambda self, v: None, lambda self: None)  # default
        """maximum size of a deque or None if unbounded"""
    
    
        __hash__ = None
    class deque source
    from collections import deque  # deque lives in collections (queue.deque only works because queue.py imports it)
    
    q = deque()
    q.append(123)
    q.append(333)
    q.append(888)
    q.append(999)
    
    q.appendleft(456)
    
    print(q.pop())
    print(q.popleft())
    
    """
    999
    456
    """
    deque() example

     

Benefits of Message Queues

The capacity for handling concurrent requests becomes much greater.

What is the benefit of a queue? Without one, every link counts against the server's maximum number of connections. While a request waits, the server has to keep that connection alive, which costs and wastes server resources; that client-server connection simply hangs, so first, no new connections can get in, and second, the connected client is doing nothing but waiting.

With a queue in place there is no connection-count limit, so you no longer need to worry about or maintain those idle connections.

Producing a single order may involve many different steps in the middle, and every order costs time and resources, so the capacity for handling individual requests goes down.

The goal is to raise the capacity for handling concurrency and to absorb sudden bursts of client requests.

Without a message queue: the client submits an order request, and both client and server hold the connection open and wait until the server finishes its query and returns the data to the client.

With a message queue: the client submits the order request and the message is pushed onto the queue; the client-server connection can then be dropped instead of being held open. When the server has finished the query it updates a status in the database, and the client refreshes to pick up the status of its request.

Another benefit: with requests sitting in a message queue, data-processing capacity becomes more scalable and flexible, because you only need to add a few more server processes and they will go to the queue and pull requests to help out. A minimal sketch of this worker-pool idea follows.
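A minimal sketch of that idea using the standard-library queue and threading modules (the handler, worker count, and order payloads below are invented for illustration): requests are dropped onto a shared FIFO queue, and any number of worker threads pull from it, so capacity grows by simply starting more workers.

import queue
import threading

NUM_WORKERS = 3                  # start more workers to drain the queue faster
orders = queue.Queue()           # shared first-in-first-out message queue
STOP = object()                  # sentinel that tells a worker to exit

def handle_order(order):
    # stand-in for the real work (database updates, notifications, ...)
    print(threading.current_thread().name, 'processed', order)

def worker():
    while True:
        order = orders.get()     # blocks until a request is available
        if order is STOP:
            break
        handle_order(order)

workers = [threading.Thread(target=worker, name='worker-%d' % i) for i in range(NUM_WORKERS)]
for w in workers:
    w.start()

# the "client" side just drops its requests on the queue and moves on
for n in range(10):
    orders.put({'order_id': n})

for _ in workers:                # one sentinel per worker shuts them all down
    orders.put(STOP)
for w in workers:
    w.join()
Worker-pool sketch with queue.Queue (illustrative)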

Basic Operations

The operations below use a first-in, first-out message queue.

  • Create a queue: Queue(x), where x is the maximum queue length; once that many items are inside, further puts cannot get in and will block.
        def __init__(self, maxsize=0):
            self.maxsize = maxsize
            self._init(maxsize)
    
            # mutex must be held whenever the queue is mutating.  All methods
            # that acquire mutex must release it before returning.  mutex
            # is shared between the three conditions, so acquiring and
            # releasing the conditions also acquires and releases mutex.
            self.mutex = threading.Lock()
    
            # Notify not_empty whenever an item is added to the queue; a
            # thread waiting to get is notified then.
            self.not_empty = threading.Condition(self.mutex)
    
            # Notify not_full whenever an item is removed from the queue;
            # a thread waiting to put is notified then.
            self.not_full = threading.Condition(self.mutex)
    
            # Notify all_tasks_done whenever the number of unfinished tasks
            # drops to zero; thread waiting to join() is notified to resume
            self.all_tasks_done = threading.Condition(self.mutex)
            self.unfinished_tasks = 0
    The Queue.__init__( ) method
    import queue
    q = queue.Queue(10) 
    Basic syntax for creating a queue: queue.Queue(x)
    import queue
    
    # Like an elevator with two doors: first in, first out.
    q = queue.Queue(2) # only 2 items may queue up
    
    # put data into the queue
    q.put(11)
    q.put(22)
    q.put(33, timeout=2) # if no free slot appears within 2 seconds, the call gives up and raises an error
    
    """
    Traceback (most recent call last):
      File "/s13/Day11/practice/s2.py", line 32, in <module>
        q.put(33, timeout=2)
      File "queue.py", line 141, in put
        raise Full
    queue.Full
    """
    Data cannot be put into the queue (queue.Full)
  • Maximum number of items the queue can hold: maxsize (attribute)
  • Putting data: put( )
    put(self, item, block=True, timeout=None)
    # put - add data to the queue
    #   timeout: optional time limit; raises an error once exceeded
    #   block: whether to block
    
    q = queue.Queue(10)
    
    # add data to the queue
    q.put(11)
    q.put(22)
    q.put(33, block=False, timeout=2)  # with block=False the put never waits (timeout is ignored in that case)
    Putting data with put( )
  • Putting data: put_nowait( ), which never blocks (equivalent to block = False)
  • Getting data: get( )
    get(self, block=True, timeout=None)
    # get - retrieve data from the queue, blocking by default
    #   timeout: optional time limit; raises an error once exceeded
    #   block: whether to block
    
    #First-in-first-out
    q = queue.Queue(10)
    
    # add data to the queue
    q.put(11)
    q.put(22)
    
    # take data from the queue
    print(q.qsize()) # show the current queue length
    print(q.get())
    print(q.get(timeout=2))
    
    """
    2
    11
    22
    """
    Getting data with get( )
  • Getting data: get_nowait( ), which never blocks (equivalent to block = False)
  • Check whether the queue is currently empty: empty( )
    q = queue.Queue(10)
    print(q.empty()) # True, because no message is in the queue yet
    q.put(11) # add data to the queue
    print(q.empty()) # False, because the queue now holds a message
    
    """
    True
    False
    """
    empty( )
  • Check whether the queue is currently full: full( )
  • Check how many elements are actually in the queue right now: qsize( )
  • task_done( ) - tells the queue that a task previously fetched with get( ) is complete
    join( ) - blocks until every task in the queue has been completed, then returns (see the threaded sketch after this list)
    import queue
    
    q = queue.Queue(5)
    q.put(123)
    q.put(456)
    
    print(q.get())
    q.task_done() # after finishing each item, call task_done() to report that the task is complete
    
    print(q.get())
    q.task_done()
    
    q.join() # if the tasks in the queue are not all complete, this waits and does not let the program finish
    task_done( ) and join( )
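The example above calls task_done() by hand in a single thread, so join() returns immediately. A hedged sketch of the more typical pattern (the worker function and item values here are made up): daemon worker threads call task_done() after each item, and the main thread blocks on q.join() until every queued item has been processed.

import queue
import threading

q = queue.Queue()

def worker():
    while True:
        item = q.get()           # blocks while waiting for work
        print('processing', item)
        q.task_done()            # decrements the unfinished-task counter

# daemon threads are killed automatically when the main thread exits
for _ in range(2):
    threading.Thread(target=worker, daemon=True).start()

for i in range(5):
    q.put(i)

q.join()                         # returns only after task_done() has been called 5 times
print('all tasks done')
task_done( ) and join( ) with worker threads (illustrative)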

heapq

When you need to find the largest or smallest values in a collection, heapq can solve it.

import heapq

portfolio = [
       {'name': 'IBM', 'shares': 100, 'price': 91.1},
       {'name': 'AAPL', 'shares': 50, 'price': 543.22},
       {'name': 'FB', 'shares': 200, 'price': 21.09},
       {'name': 'HPQ', 'shares': 35, 'price': 31.75},
       {'name': 'YHOO', 'shares': 45, 'price': 16.35},
       {'name': 'ACME', 'shares': 75, 'price': 115.65}
]

cheap = heapq.nsmallest(3, portfolio, key=lambda s: s['price'])
expensive = heapq.nlargest(3, portfolio, key=lambda s: s['price'])

print(cheap)      # the three stocks with the lowest price
print(expensive)  # the three stocks with the highest price
heapq example

 

Hands-On: Working with Memcached and Redis from Python

Essentially both of them are reached through a socket: the client connects and then communicates with the server over that socket connection.

 

Memcached supports clustering natively

[notes in progress]

Given hosts and weights C1 = 1, C2 = 2 and C3 = 1, the client expands them into the host list [C1, C2, C2, C3]: a host with a larger weight appears more often in the list and therefore receives more keys.

For each key the client computes a number (a hash of the key) and takes it modulo len([C1, C2, C2, C3]) to decide which host in the cluster stores that key. A hedged sketch follows.
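A minimal sketch of such a weighted host list, assuming the python-memcached package (imported as memcache) is installed and the hosts are reachable; the IP addresses below are placeholders:

import memcache

# (host, weight) pairs: C2 has weight 2, so it appears twice in the internal
# host list [C1, C2, C2, C3] and is chosen for roughly half of the keys
mc = memcache.Client([
    ('10.0.0.1:11211', 1),   # C1
    ('10.0.0.2:11211', 2),   # C2
    ('10.0.0.3:11211', 1),   # C3
], debug=True)

# the client hashes the key to a number and uses it modulo the length of the
# host list to decide which server stores this key
mc.set('k1', 'v1')
print(mc.get('k1'))
Weighted Memcached cluster sketch (illustrative)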

 

 

Memcached: Introduction and Hands-On

  1. Install memcached
  2. Install the corresponding Python module (API)

 

Working with a Memcached Database through the Python API

  • add
  • replace
  • set and set_multi
  • delete and delete_multi
  • append and prepend
  • decr and incr
  • gets and cas (a hedged walkthrough of these calls follows this list)
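A hedged walkthrough of those calls, assuming a local memcached instance on 127.0.0.1:11211 and the python-memcached package; the keys and values here are invented:

import memcache

mc = memcache.Client(['127.0.0.1:11211'], cache_cas=True)

mc.set('k1', 'v1')                      # set: create or overwrite the key
mc.add('k1', 'v1-again')                # add: only succeeds if the key does NOT exist (fails here)
mc.replace('k1', 'v1-new')              # replace: only succeeds if the key already exists
mc.set_multi({'k2': 'v2', 'k3': 'v3'})  # set several keys in one call

mc.append('k1', '-tail')                # append: attach data after the existing value
mc.prepend('k1', 'head-')               # prepend: attach data before the existing value

mc.set('counter', '10')
mc.incr('counter')                      # counter becomes 11
mc.decr('counter', 5)                   # counter becomes 6

val = mc.gets('k1')                     # gets: fetch the value and remember its CAS token
mc.cas('k1', val + '!')                 # cas: write only if nobody changed the key in between

mc.delete('k2')                         # remove one key
mc.delete_multi(['k3', 'counter'])      # remove several keys

print(mc.get('k1'))
Memcached API walkthrough (illustrative)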


Redis: Introduction and Hands-On

  1. Install Redis
    wget http://download.redis.io/redis-stable.tar.gz
    tar xvzf redis-stable.tar.gz
    cd redis-stable
    make
    Installing the stable release of Redis
    172.16.201.133:6379> ping
    PONG # a healthy connection returns PONG
    A quick test after installation
  2. Install the corresponding Python module (API)
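The Python client is usually installed with pip install redis (an assumption about your environment); a quick check that the module is importable:

import redis
print(redis.__version__)   # prints the installed redis-py client version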

 

Working with a Redis Database from the CLI

  • Connecting to Redis: first run >> redis-server on the virtual machine to start the Redis server, then use the command below to connect to the database.
    redis-cli -h 172.16.201.133 -p 6379 -a mypass
    Connecting with redis-cli
    sudo vim /etc/redis/redis.conf
    
    # change bind 127.0.0.1 to 0.0.0.0
    bind 0.0.0.0
    
    # restart the Redis server
    sudo /etc/init.d/redis-server restart
    Allowing remote connections to Redis
    user@py-ubuntu:~$ redis-server 
    1899:C 20 Oct 14:58:33.558 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
    1899:M 20 Oct 14:58:33.559 * Increased maximum number of open files to 10032 (it was originally set to 1024).
                    _._                                                  
               _.-``__ ''-._                                             
          _.-``    `.  `_.  ''-._           Redis 3.0.6 (00000000/0) 64 bit
      .-`` .-```.  ```\/    _.,_ ''-._                                   
     (    '      ,       .-`  | `,    )     Running in standalone mode
     |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
     |    `-._   `._    /     _.-'    |     PID: 1899
      `-._    `-._  `-./  _.-'    _.-'                                   
     |`-._`-._    `-.__.-'    _.-'_.-'|                                  
     |    `-._`-._        _.-'_.-'    |           http://redis.io        
      `-._    `-._`-.__.-'_.-'    _.-'                                   
     |`-._`-._    `-.__.-'    _.-'_.-'|                                  
     |    `-._`-._        _.-'_.-'    |                                  
      `-._    `-._`-.__.-'_.-'    _.-'                                   
          `-._    `-.__.-'    _.-'                                       
              `-._        _.-'                                           
                  `-.__.-'                                               
    
    1899:M 20 Oct 14:58:33.564 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
    1899:M 20 Oct 14:58:33.564 # Server started, Redis version 3.0.6
    1899:M 20 Oct 14:58:33.564 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
    1899:M 20 Oct 14:58:33.564 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
    1899:M 20 Oct 14:58:33.565 * DB loaded from disk: 0.000 seconds
    1899:M 20 Oct 14:58:33.565 * The server is now ready to accept connections on port 6379
    
    
    user@py-ubuntu:~$ redis-cli -h 192.168.80.128 -p 6379
    192.168.80.128:6379> 
    Redis server startup
  • String
    SET mykey "apple" # set one key/value pair
    SETNX mykey "redis" # set the key/value pair only if the key does not already exist in the current database
    MSET k1 "v1" k2 "v2" k3 "v3" # set several key/value pairs at once
    String operations (redis-cli)
  • List
  • Hash
    172.16.201.133:6379> HMSET person name "Janice" sex "F" age 20 # set the person hash's keys and values
    OK
    
    172.16.201.133:6379> HGETALL person # list all keys and values in person
    1) "name"
    2) "Janice"
    3) "sex"
    4) "F"
    5) "age"
    6) "20"
    
    172.16.201.133:6379> HEXISTS person name # does person have the key name? returns 1 if it does
    (integer) 1
    
    172.16.201.133:6379> HEXISTS person birthday # does person have the key birthday? returns 0 if not
    (integer) 0
    
    172.16.201.133:6379> HGET person name # get the value of the name key in person
    "Janice"
    
    172.16.201.133:6379> HKEYS person # list the keys of person
    1) "name"
    2) "sex"
    3) "age"
    
    172.16.201.133:6379> HMGET person name sex age # get the values of these keys in person
    1) "Janice"
    2) "F"
    3) "20"
    
    172.16.201.133:6379> HVALS person # list all the values in person
    1) "Janice"
    2) "F"
    3) "20"
    
    172.16.201.133:6379> HLEN person # number of fields in person
    (integer) 3
    Hash operations (redis-cli)
  • Set 
  • Publisher-Subscriber
    #Subscriber
    172.16.201.133:6379> SUBSCRIBE redisChat
    Reading messages... (press Ctrl-C to quit)
    1) "subscribe"
    2) "redisChat"
    3) (integer) 1
    --------------------------------------------------------
    1) "message"
    2) "redisChat"
    3) "Redis is a great caching technique"
    1) "message"
    2) "redisChat"
    3) "Learn redis by tutorials point"
    
    #Publisher
    172.16.201.133:6379> PUBLISH redisChat "Redis is a great caching technique"
    (integer) 1
    172.16.201.133:6379> PUBLISH redisChat "Learn redis by tutorials point"
    (integer) 1
    Pub-Sub operations (redis-cli)
  • xxxx

 

Working with a Redis Database through the Python API

There are two ways to connect to Redis: a plain direct connection, or a connection pool. Consider the pros and cons of each. Setting up a connection before sending data is expensive: sending the data itself might take 1 second while each connection setup might take 5, so if you maintain a connection pool you do not have to reconnect to the database every time.

  • Plain connection
    import redis
    
    r = redis.Redis(host='172.16.201.133', port=6379)
    r.set('fruits','apple') # set a key
    print(r.get('fruits')) # the value comes back as bytes
    Connecting to Redis
  • Connecting through a connection pool: maintain a pool first, then a Redis object that picks the pool up locally can send data straight away
    import redis
    
    pool = redis.ConnectionPool(host='172.16.201.133', port=6379) # create a connection pool
    r = redis.Redis(connection_pool=pool) # hand the pool to the Redis object
    
    r.set('fruits','apple') # set a key
    print(r.get('fruits')) # the value comes back as bytes
    Connecting to Redis through a connection pool
  • Set
  • List
    >>> import redis
    >>> pool = redis.ConnectionPool(host='172.16.201.133', port=6379)
    >>> r = redis.Redis(connection_pool=pool)
    >>> r.lpush('li',11,22,33,44,55,66,77,88,99)
    9
    
    >>> r.lpop('li') # pop one element from the left end of the list
    b'99'
    
    >>> r.lrange('li',2,5)
    [b'66', b'55', b'44', b'33']
    
    >>> r.rpop('li') # pop one element from the right end of the list
    b'11'
    List operations (redis-api)
  • Hash
    >>> import redis
    >>> r = redis.Redis(host='172.16.201.133', port=6379)
    >>> r.hmset('person',{'name':'Janice','sex':'F','age':20})
    True
    
    >>> r.hset('movies','name','Doctors')
    0
    
    >>> r.hsetnx('movies','name','Secret Garden')
    0
    
    >>> print(r.hget('movies','name'))
    b'Doctors'
    
    >>> print(r.hget('person','name'))
    b'Janice'
    
    >>> print(r.hgetall('person'))
    {b'sex': b'F', b'name': b'Janice', b'age': b'20'}
    
    >>> print(r.hmget('person',['name','sex','age']))
    [b'Janice', b'F', b'20']
    
    >>> print(r.hexists('person','name'))
    True
    
    >>> print(r.hexists('person','birthday')) 
    False
    
    >>> print(r.hvals('person'))
    [b'Janice', b'F', b'20']
    
    >>> print(r.hvals('movies'))
    [b'Doctors']
    
    >>> print(r.hlen('person'))
    3
    
    >>> print(r.hdel('person','sex','gender','age'))
    2
    
    >>> print(r.hgetall('person'))
    {b'name': b'Janice'}
    Hash operations (redis-api)
  • String
    >>> import redis
    >>> r = redis.Redis(host='172.16.201.133', port=6379)
    >>> r.set('fruits', 'apple')
    True
    
    >>> r.setnx('k1', 'v1')
    True
    
    >>> r.mset(k2='v2', k3='v3')
    True
    
    >>> r.get('fruits')
    b'apple'
    
    >>> r.get('k1')
    b'v1'
    
    >>> r.mget('k2','k3')
    [b'v2', b'v3']
    
    >>> r.getrange('fruits',1,4)
    b'pple'
    
    >>> r.strlen('fruits')
    5
    String operations (redis-api)
  • Publisher and Subscriber: publish and subscribe. Think through how it works under the hood (a sketch follows this list).
  • Pipeline: use a pipeline to send a batch of operations to Redis as one consistent unit (a sketch follows this list).
  • xxxx
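A rough sketch of publish/subscribe with redis-py, reusing the host from the examples above (the channel name and message are invented). One connection subscribes to a channel; any other connection can publish to it, and Redis pushes the message to every subscriber.

import time
import redis

r = redis.Redis(host='172.16.201.133', port=6379)

# subscriber side: register interest in a channel
p = r.pubsub(ignore_subscribe_messages=True)
p.subscribe('redisChat')

# publisher side: any client connected to the same Redis can publish
r.publish('redisChat', 'Redis is a great caching technique')

time.sleep(0.1)                  # give the message a moment to arrive
msg = p.get_message(timeout=1)   # polls for one message; returns None if nothing arrived
if msg:
    print(msg['channel'], msg['data'])

# a long-running subscriber would loop instead:
# for msg in p.listen():         # blocks, yielding messages as they are published
#     print(msg)
Pub-Sub sketch (redis-api, illustrative)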

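And a minimal pipeline sketch (the keys are invented): commands are buffered on the client and sent in a single round trip when execute() is called; by default redis-py wraps them in MULTI/EXEC so they are applied as one unit.

import redis

r = redis.Redis(host='172.16.201.133', port=6379)

pipe = r.pipeline()        # transaction=True by default: commands run inside MULTI/EXEC
pipe.set('stock', 100)
pipe.decr('stock')
pipe.get('stock')
print(pipe.execute())      # one round trip; e.g. [True, 99, b'99']
Pipeline sketch (redis-api, illustrative)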
 
This Week's Assignment

Develop a Fabric-like host management program:

  1. Run the program and list host groups or the host list
  2. Select a specific host or host group
  3. Have the selected host or host group execute commands, or transfer files to/from it (upload/download)


References

銀角大王: Python之路【第九篇】: Python操作RabbitMQ、Redis、Memcache、SQLAlchemy

金角大王:

Other: Redis 數據類型詳解 (Redis data types in detail)
