The producer-consumer model
Consider the following example:
import threading
import queue
import time

def consumer(n):
    while True:
        print("\033[32;1mconsumer [%s]\033[0m get task: %s" % (n, q.get()))
        time.sleep(1)
        q.task_done()

def producer(n):
    count = 1
    while True:
        print('producer [%s] produced a new task: %s' % (n, count))
        q.put(count)
        count += 1
        q.join()  # block until every task put so far has been marked done
        print('all tasks have been consumed by consumers...')

q = queue.Queue()

c1 = threading.Thread(target=consumer, args=[1])
c2 = threading.Thread(target=consumer, args=[2])
c3 = threading.Thread(target=consumer, args=[3])

p = threading.Thread(target=producer, args=['xiaoyu'])
p2 = threading.Thread(target=producer, args=['xiaoXiao'])

c1.start()
c2.start()
c3.start()
p.start()
p2.start()
The queue q holds the producers' output, and it is also what tells a producer to resume producing: q.join() blocks the producer until the queue has been drained.
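The join()/task_done() handshake can be seen in isolation. A minimal sketch (the worker function and the item values here are illustrative, not part of the example above):

```python
import queue
import threading
import time

q = queue.Queue()
for i in range(3):
    q.put(i)  # each put() increments the count of unfinished tasks

def worker():
    while True:
        item = q.get()
        time.sleep(0.01)   # pretend to do some work on item
        q.task_done()      # each task_done() decrements the unfinished count

threading.Thread(target=worker, daemon=True).start()

q.join()  # blocks until task_done() has been called once per put()
print("queue drained")
```

Note that join() tracks task_done() calls, not get() calls: a consumer that gets items but never calls task_done() leaves join() blocked forever.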
Here is a closer look at the queue classes used:
import queue

class Foo(object):
    def __init__(self, n):
        self.n = n

q = queue.Queue(maxsize=30)
# q = queue.LifoQueue(maxsize=30)      # last in, first out
# q = queue.PriorityQueue(maxsize=30)  # lowest priority value first

q.put((2, [1, 2, 3]))
# q.put(Foo(1))
q.put((10, 1))
q.put((3, 1))
q.put((5, 30))

# Note: calling q.join() here would block forever, because task_done()
# has not been called once for each put().

print(q.get())
print(q.get())
print(q.get())
print(q.get())
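To make the difference between the queue classes concrete, here is a small sketch (the priority values and item labels are arbitrary):

```python
import queue

# PriorityQueue returns entries sorted by priority, lowest value first
pq = queue.PriorityQueue()
for item in [(10, 'ten'), (2, 'two'), (5, 'five')]:
    pq.put(item)
print(pq.get())  # (2, 'two')
print(pq.get())  # (5, 'five')
print(pq.get())  # (10, 'ten')

# LifoQueue behaves like a stack: last in, first out
lq = queue.LifoQueue()
for n in [1, 2, 3]:
    lq.put(n)
print(lq.get())  # 3
```

A plain Queue, by contrast, returns items strictly in the order they were put (FIFO).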
Coroutines
A coroutine, also called a micro-thread or fiber (English: coroutine), can be summed up in one sentence: a coroutine is a lightweight thread that lives in user space.
A coroutine has its own register context and stack. When the scheduler switches away from a coroutine, its register context and stack are saved elsewhere; when the scheduler switches back, the previously saved register context and stack are restored. Therefore:
A coroutine preserves the state of its last invocation (that is, a particular combination of all of its local state). Each re-entry is equivalent to resuming that state; in other words, execution continues from the point in the logical flow where it last left off.
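This state preservation is exactly what a Python generator provides. A minimal sketch (the function name is illustrative):

```python
def counter():
    n = 0
    while True:
        n += 1      # local state survives between re-entries
        yield n

c = counter()
print(next(c))  # 1
print(next(c))  # 2, resumed with n still in scope
print(next(c))  # 3
```

Each call to next() re-enters the function body exactly where the previous yield left off, with all locals intact.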
Advantages of coroutines:

- No overhead for thread context switching
- No locking overhead: shared state is only ever touched from a single thread, so no locks are needed
- Simple flow control, and high concurrency at very low per-task cost

Disadvantages:

- A coroutine cannot take advantage of multiple CPU cores; it runs within a single thread
- A blocking operation (such as blocking I/O) blocks the entire program
An example of implementing coroutine-style cooperation with yield:
import time

def consumer(name):
    print("--->starting eating baozi...")
    while True:
        new_baozi = yield
        print("[%s] is eating baozi %s" % (name, new_baozi))
        # time.sleep(1)

def producer():
    r = con.__next__()
    r = con2.__next__()
    n = 0
    while n < 5:
        n += 1
        con.send(n)
        con2.send(n)
        print("\033[32;1m[producer]\033[0m is making baozi %s" % n)

if __name__ == '__main__':
    con = consumer("c1")
    con2 = consumer("c2")
    p = producer()
The same idea with the greenlet library, where every switch is explicit:

from greenlet import greenlet

def test1():
    print(12)
    gr2.switch()
    print(34)
    gr2.switch()

def test2():
    print(56)
    gr1.switch()
    print(78)

gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch()
Gevent is a third-party library that makes it easy to write concurrent synchronous or asynchronous programs. The main primitive used in gevent is the Greenlet, a lightweight coroutine provided to Python as a C extension module. Greenlets all run inside the operating-system process of the main program, but they are scheduled cooperatively.
import gevent

def foo():
    print('Running in foo')
    gevent.sleep(0)
    print('Explicit context switch to foo again')

def bar():
    print('Explicit context to bar')
    gevent.sleep(0)
    print('Implicit context switch back to bar')

gevent.joinall([
    gevent.spawn(foo),
    gevent.spawn(bar),
])
Output:

Running in foo
Explicit context to bar
Explicit context switch to foo again
Implicit context switch back to bar
The performance difference between synchronous and asynchronous execution:
import gevent

def task(pid):
    """
    Some non-deterministic task
    """
    gevent.sleep(0.5)
    print('Task %s done' % pid)

def synchronous():
    for i in range(10):
        task(i)

def asynchronous():
    threads = [gevent.spawn(task, i) for i in range(10)]
    gevent.joinall(threads)

print('Synchronous:')
synchronous()

print('Asynchronous:')
asynchronous()
The important part of the program above is wrapping the task function into a greenlet with gevent.spawn. The initialized greenlets are stored in the list threads, which is passed to the gevent.joinall function; joinall blocks the current flow of execution and runs all of the given greenlets. Execution only continues past joinall once every greenlet has finished.
Tasks are switched automatically when one of them blocks on I/O:
from gevent import monkey; monkey.patch_all()
import gevent
from urllib.request import urlopen  # Python 3; on Python 2 use: from urllib2 import urlopen

def f(url):
    print('GET: %s' % url)
    resp = urlopen(url)
    data = resp.read()
    print('%d bytes received from %s.' % (len(data), url))

gevent.joinall([
    gevent.spawn(f, 'https://www.python.org/'),
    gevent.spawn(f, 'https://www.yahoo.com/'),
    gevent.spawn(f, 'https://github.com/'),
])
Handling multiple sockets concurrently in a single thread with gevent:
import gevent
from gevent import socket, monkey
monkey.patch_all()

def server(port):
    s = socket.socket()
    s.bind(('0.0.0.0', port))
    s.listen(5000)
    while True:
        cli, addr = s.accept()
        gevent.spawn(handle_request, cli, addr)

def handle_request(s, addr):
    try:
        while True:
            data = s.recv(1024)
            print("recv from [%s]: %s" % (addr, data))
            s.send(data)
            if not data:
                s.shutdown(socket.SHUT_WR)
    except Exception as ex:
        pass  # print(ex)
    finally:
        s.close()

if __name__ == '__main__':
    server(8001)
import socket

HOST = 'localhost'    # The remote host
PORT = 8001           # The same port as used by the server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
while True:
    msg = bytes(input(">>:"), encoding="utf8")
    s.sendall(msg)
    data = s.recv(1024)
    # print(data)
    print('Received', repr(data))
s.close()
Event-driven programming is a paradigm in which the flow of the program is determined by external events. Its hallmark is an event loop that, whenever an external event occurs, uses a callback mechanism to trigger the appropriate handler. The two other common paradigms are (single-threaded) synchronous programming and multi-threaded programming.
Let's compare and contrast single-threaded, multi-threaded, and event-driven programming with an example. The figure below shows the work done by a program over time under each of the three models. The program has three tasks to complete, and each task blocks itself while waiting on an I/O operation. Time spent blocked on I/O is shown as a gray box.
(1) In the single-threaded synchronous model, the tasks run one after another. If one task blocks on I/O, all of the other tasks must wait until it finishes before they get their turn. This definite ordering and serialized behavior is easy to reason about, but if the tasks do not actually depend on one another yet still have to wait on each other, the program is slowed down unnecessarily.
(2) In the multi-threaded version, the three tasks run in separate threads. The threads are managed by the operating system and may run in parallel on a multiprocessor system, or interleaved on a single processor. This lets other threads continue while one thread blocks on a resource, and it is more efficient than the equivalent synchronous program; but the programmer must write code to protect shared resources from concurrent access by multiple threads. Multi-threaded programs are harder to reason about, because they have to address thread safety through synchronization mechanisms such as locks, reentrant functions, thread-local storage, and so on, and mistakes here lead to subtle, excruciating bugs.
(3) In the event-driven version, the three tasks interleave, but all within a single thread of control. When performing I/O or some other expensive operation, a callback is registered with the event loop, and execution resumes once the I/O completes. The callback describes how a given event should be handled. The event loop polls for events and, as each event arrives, dispatches it to the callback waiting for it. This lets the program get as much done as possible without needing extra threads. Event-driven programs are easier to reason about than multi-threaded ones, because the programmer does not have to worry about thread safety.
The event-driven model is usually a good choice when we face the following environment:

- there are many tasks,
- the tasks are largely independent of one another, and
- some tasks block while waiting for events to arrive.
It is also a good choice when an application needs to share mutable data between tasks, because no synchronization is required.
Network applications usually have exactly these characteristics, which makes them a very good fit for the event-driven programming model.
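The loop-plus-callbacks structure described above can be sketched as a toy event loop. Everything here (the event queue, the handler registry, the event names) is an illustration, not a real framework:

```python
from collections import deque

# handler registry: event name -> callback
handlers = {}

def register(event, callback):
    handlers[event] = callback

def run(events):
    """A toy event loop: pull events off a queue and dispatch to callbacks."""
    pending = deque(events)
    results = []
    while pending:
        event, payload = pending.popleft()
        if event in handlers:
            # the callback describes how to handle this kind of event
            results.append(handlers[event](payload))
    return results

register('data', lambda payload: 'handled:' + payload)
register('close', lambda payload: 'closed:' + payload)

print(run([('data', 'abc'), ('close', 'conn1')]))
# -> ['handled:abc', 'closed:conn1']
```

A real event loop would block waiting for I/O readiness (via select, poll, or epoll, discussed next) instead of draining a pre-filled queue, but the dispatch structure is the same.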
First, a list of the differences between select, poll, and epoll.
select
select first appeared in 4.2BSD in 1983. Through a single select() system call it monitors an array of file descriptors; when select() returns, the kernel has set flags on the ready descriptors in that array, so the process can identify them and then carry out the subsequent reads and writes.
select is supported on virtually every platform today, and this good cross-platform support is one of its advantages; in fact, from today's perspective, it is one of the few advantages it has left.
One drawback of select is that there is a hard limit on the number of file descriptors a single process can monitor, generally 1024 on Linux, although this limit can be raised by changing a macro definition or even recompiling the kernel.
In addition, select() maintains a data structure holding a large number of file descriptors, and the cost of copying it grows linearly with the number of descriptors. At the same time, network latency leaves many TCP connections inactive, yet every select() call still performs a linear scan over all of the sockets, which wastes further work.
poll
poll was born in System V Release 3 in 1986. It is essentially no different from select, except that poll has no hard limit on the number of file descriptors.
poll shares a drawback with select: the entire array of file descriptors is copied between user space and kernel space regardless of whether those descriptors are ready, and this cost again grows linearly with the number of descriptors.
Also, after select() or poll() has told the process which file descriptors are ready, if the process performs no I/O on them, the next call to select() or poll() will report the same descriptors again. So ready notifications are generally never lost; this style is called level triggered.
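Level triggering is easy to observe with select itself. A minimal sketch using a connected socket pair (the variable names are illustrative):

```python
import select
import socket

a, b = socket.socketpair()
b.send(b'ping')  # make one end readable

# level triggered: as long as unread data is sitting in the buffer,
# every select() call reports the descriptor as ready again
r1, _, _ = select.select([a], [], [], 1)
r2, _, _ = select.select([a], [], [], 1)
print(a in r1, a in r2)  # True True, reported twice because we never read

a.recv(4)  # drain the data
r3, _, _ = select.select([a], [], [], 0)
print(a in r3)  # False, nothing left to read

a.close(); b.close()
```

An edge-triggered mechanism (discussed under epoll below) would have reported the readiness only once, when the data first arrived.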
epoll
An implementation supported directly by the kernel did not appear until Linux 2.6: epoll. It has essentially all of the advantages mentioned above and is widely regarded as the best-performing multiplexed I/O readiness notification mechanism on Linux 2.6.
epoll supports both level triggering and edge triggering (Edge Triggered: the kernel only tells the process about descriptors that have just become ready, and it only says so once; if we take no action, it will not tell us again). In theory, edge triggering performs somewhat better, but the code it requires is considerably more complex.
Like poll, epoll reports only the descriptors that are ready. When the process calls epoll_wait(), the kernel returns the number of ready events and fills in an array of event structures for the process to walk through, so the process never has to rescan the full set of watched descriptors, and the per-call copying cost no longer grows with the number of descriptors being watched.
The other essential improvement is that epoll uses event-based readiness notification. With select/poll, the kernel scans all monitored file descriptors only when the process makes the call; with epoll, the process registers each file descriptor in advance via epoll_ctl(), and as soon as a registered descriptor becomes ready, the kernel uses a callback-like mechanism to activate it, so the process is notified the next time it calls epoll_wait().
Consider the following example:
#!/usr/bin/env python
# -*- coding:utf-8 -*-
__author__ = 'Alex Li'

import select
import socket
import sys
import queue

# Create a TCP/IP socket
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setblocking(False)

# Bind the socket to the port
server_address = ('localhost', 10000)
print('starting up on %s port %s' % server_address, file=sys.stderr)
server.bind(server_address)

# Listen for incoming connections
server.listen(5)

# Sockets from which we expect to read
inputs = [server]

# Sockets to which we expect to write
outputs = []

message_queues = {}
while inputs:

    # Wait for at least one of the sockets to be ready for processing
    print('\nwaiting for the next event')
    readable, writable, exceptional = select.select(inputs, outputs, inputs, 2)
    # Handle inputs
    for s in readable:

        if s is server:  # new connection
            # A "readable" server socket is ready to accept a connection
            connection, client_address = s.accept()
            print('new connection from', client_address)
            connection.setblocking(False)
            inputs.append(connection)

            # Give the connection a queue for data we want to send
            message_queues[connection] = queue.Queue()
        else:
            data = s.recv(1024)
            if data:
                # A readable client socket has data
                print('received "%s" from %s' % (data, s.getpeername()), file=sys.stderr)
                message_queues[s].put(data)
                # Add output channel for response
                if s not in outputs:
                    outputs.append(s)
            else:
                # Interpret empty result as closed connection
                print('closing', client_address, 'after reading no data')
                # Stop listening for input on the connection
                if s in outputs:
                    outputs.remove(s)  # the client is gone, so there is nothing left to send it
                inputs.remove(s)       # stop watching this connection for input as well
                s.close()              # close the connection

                # Remove message queue
                del message_queues[s]
    # Handle outputs
    for s in writable:
        try:
            next_msg = message_queues[s].get_nowait()
        except queue.Empty:
            # No messages waiting so stop checking for writability.
            print('output queue for', s.getpeername(), 'is empty')
            outputs.remove(s)
        else:
            print('sending "%s" to %s' % (next_msg, s.getpeername()))
            s.send(next_msg)
    # Handle "exceptional conditions"
    for s in exceptional:
        print('handling exceptional condition for', s.getpeername())
        # Stop listening for input on the connection
        inputs.remove(s)
        if s in outputs:
            outputs.remove(s)
        s.close()

        # Remove message queue
        del message_queues[s]
#!/usr/bin/env python
# -*- coding:utf-8 -*-
import socket
import sys

messages = [b'This is the message. ',
            b'It will be sent ',
            b'in parts.',
            ]
server_address = ('localhost', 10000)

# Create a TCP/IP socket
socks = [socket.socket(socket.AF_INET, socket.SOCK_STREAM),
         socket.socket(socket.AF_INET, socket.SOCK_STREAM),
         socket.socket(socket.AF_INET, socket.SOCK_STREAM),
         socket.socket(socket.AF_INET, socket.SOCK_STREAM),
         ]

# Connect the sockets to the port where the server is listening
print('connecting to %s port %s' % server_address, file=sys.stderr)
for s in socks:
    s.connect(server_address)

for message in messages:

    # Send messages on all sockets
    for s in socks:
        print('%s: sending "%s"' % (s.getsockname(), message), file=sys.stderr)
        s.send(message)

    # Read responses on all sockets
    for s in socks:
        data = s.recv(1024)
        print('%s: received "%s"' % (s.getsockname(), data), file=sys.stderr)
        if not data:
            print('closing socket', s.getsockname(), file=sys.stderr)
            s.close()
#!/usr/bin/env python
# -*- coding:utf-8 -*-
import socket, select

EOL1 = b'\n\n'
EOL2 = b'\n\r\n'
response = b'HTTP/1.0 200 OK\r\nDate: Mon, 1 Jan 1996 01:01:01 GMT\r\n'
response += b'Content-Type: text/plain\r\nContent-Length: 13\r\n\r\n'
response += b'Hello, world!'

serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serversocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
serversocket.bind(('0.0.0.0', 8080))
serversocket.listen(1)
serversocket.setblocking(0)

epoll = select.epoll()
epoll.register(serversocket.fileno(), select.EPOLLIN)

try:
    connections = {}; requests = {}; responses = {}
    while True:
        events = epoll.poll(1)
        for fileno, event in events:
            if fileno == serversocket.fileno():
                connection, address = serversocket.accept()
                connection.setblocking(0)
                epoll.register(connection.fileno(), select.EPOLLIN)
                connections[connection.fileno()] = connection
                requests[connection.fileno()] = b''
                responses[connection.fileno()] = response
            elif event & select.EPOLLIN:
                requests[fileno] += connections[fileno].recv(1024)
                if EOL1 in requests[fileno] or EOL2 in requests[fileno]:
                    epoll.modify(fileno, select.EPOLLOUT)
                    connections[fileno].setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)
                    print('-'*40 + '\n' + requests[fileno].decode()[:-2])
            elif event & select.EPOLLOUT:
                byteswritten = connections[fileno].send(responses[fileno])
                responses[fileno] = responses[fileno][byteswritten:]
                if len(responses[fileno]) == 0:
                    connections[fileno].setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)
                    epoll.modify(fileno, 0)
                    connections[fileno].shutdown(socket.SHUT_RDWR)
            elif event & select.EPOLLHUP:
                epoll.unregister(fileno)
                connections[fileno].close()
                del connections[fileno]
finally:
    epoll.unregister(serversocket.fileno())
selectors: Python's selectors module picks the best available mechanism (such as epoll, kqueue, or select) for the current platform.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(sock, mask):
    conn, addr = sock.accept()  # Should be ready
    print('accepted', conn, 'from', addr)
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn, mask):
    data = conn.recv(1000)  # Should be ready
    if data:
        print('echoing', repr(data), 'to', conn)
        conn.send(data)  # Hope it won't block
    else:
        print('closing', conn)
        sel.unregister(conn)
        conn.close()

sock = socket.socket()
sock.bind(('localhost', 10000))
sock.listen(100)
sock.setblocking(False)
sel.register(sock, selectors.EVENT_READ, accept)

while True:
    events = sel.select()
    for key, mask in events:
        callback = key.data
        callback(key.fileobj, mask)
The paramiko module uses SSH to connect to remote servers and perform operations on them.
SSHClient
Used to connect to a remote server and run basic commands.
Connecting with a username and password:
import paramiko

# Create an SSH client object
ssh = paramiko.SSHClient()
# Allow connections to hosts not present in the known_hosts file
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# Connect to the server
ssh.connect(hostname='c1.salt.com', port=22, username='wupeiqi', password='123')

# Execute a command
stdin, stdout, stderr = ssh.exec_command('df')
# Fetch the result: stdout if the command produced output, stderr otherwise
result = stdout.read() or stderr.read()

# Close the connection
ssh.close()
import paramiko

transport = paramiko.Transport(('hostname', 22))
transport.connect(username='wupeiqi', password='123')

ssh = paramiko.SSHClient()
ssh._transport = transport

stdin, stdout, stderr = ssh.exec_command('df')
print(stdout.read())

transport.close()

(SSHClient wrapping a Transport)
Connecting with a public/private key pair:
import paramiko

private_key = paramiko.RSAKey.from_private_key_file('/home/auto/.ssh/id_rsa')

# Create an SSH client object
ssh = paramiko.SSHClient()
# Allow connections to hosts not present in the known_hosts file
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# Connect to the server
ssh.connect(hostname='c1.salt.com', port=22, username='wupeiqi', pkey=private_key)

# Execute a command
stdin, stdout, stderr = ssh.exec_command('df')
# Fetch the result
result = stdout.read()

# Close the connection
ssh.close()
import paramiko

private_key = paramiko.RSAKey.from_private_key_file('/home/auto/.ssh/id_rsa')

transport = paramiko.Transport(('hostname', 22))
transport.connect(username='wupeiqi', pkey=private_key)

ssh = paramiko.SSHClient()
ssh._transport = transport

stdin, stdout, stderr = ssh.exec_command('df')

transport.close()
SFTPClient
Used to connect to a remote server and upload or download files.
Uploading and downloading with a username and password:
import paramiko

transport = paramiko.Transport(('hostname', 22))
transport.connect(username='wupeiqi', password='123')

sftp = paramiko.SFTPClient.from_transport(transport)
# Upload location.py to the server as /tmp/test.py
sftp.put('/tmp/location.py', '/tmp/test.py')
# Download remote_path to the local path local_path
sftp.get('remote_path', 'local_path')

transport.close()
Uploading and downloading with a public/private key pair:
import paramiko

private_key = paramiko.RSAKey.from_private_key_file('/home/auto/.ssh/id_rsa')

transport = paramiko.Transport(('hostname', 22))
transport.connect(username='wupeiqi', pkey=private_key)

sftp = paramiko.SFTPClient.from_transport(transport)
# Upload location.py to the server as /tmp/test.py
sftp.put('/tmp/location.py', '/tmp/test.py')
# Download remote_path to the local path local_path
sftp.get('remote_path', 'local_path')

transport.close()
一、數據庫操做
show databases;
use [databasename];
create database [name];
二、數據表操做
show tables;

create table students
(
    id int not null auto_increment primary key,
    name char(8) not null,
    sex char(4) not null,
    age tinyint unsigned not null,
    tel char(13) null default "-"
);
CREATE TABLE `wb_blog` (
    `id` smallint(8) unsigned NOT NULL,
    `catid` smallint(5) unsigned NOT NULL DEFAULT '0',
    `title` varchar(80) NOT NULL DEFAULT '',
    `content` text NOT NULL,
    PRIMARY KEY (`id`),
    UNIQUE KEY `catename` (`catid`)
);
三、數據操做
insert into students(name,sex,age,tel) values('alex','man',18,'151515151');

delete from students where id = 2;

update students set name = 'sb' where id = 1;

select * from students;
4. Other
- Primary keys
- Foreign keys
- Left and right joins
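Foreign keys and a left join can be sketched in a self-contained way using Python's built-in sqlite3 module (the table and column names below are made up for illustration; MySQL syntax for the join itself is essentially the same):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)')
cur.execute('''CREATE TABLE emp (
    id INTEGER PRIMARY KEY,
    name TEXT,
    dept_id INTEGER REFERENCES dept(id)  -- foreign key to dept
)''')
cur.executemany('INSERT INTO dept VALUES (?,?)', [(1, 'dev'), (2, 'ops')])
cur.executemany('INSERT INTO emp VALUES (?,?,?)',
                [(1, 'alex', 1), (2, 'peony', None)])

# LEFT JOIN keeps every row of emp, even when there is no matching dept
cur.execute('''SELECT emp.name, dept.name
               FROM emp LEFT JOIN dept ON emp.dept_id = dept.id
               ORDER BY emp.id''')
print(cur.fetchall())  # [('alex', 'dev'), ('peony', None)]
conn.close()
```

A plain (inner) join would have dropped the 'peony' row, since it has no matching department; a RIGHT JOIN keeps every row of the right-hand table instead.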
Python MySQL API
1. Inserting data
import MySQLdb

conn = MySQLdb.connect(host='127.0.0.1', user='root', passwd='1234', db='mydb')

cur = conn.cursor()

reCount = cur.execute('insert into UserInfo(Name,Address) values(%s,%s)', ('peony', 'usa'))
# reCount = cur.execute('insert into UserInfo(Name,Address) values(%(id)s, %(name)s)', {'id': 12345, 'name': 'wupeiqi'})

conn.commit()

cur.close()
conn.close()

print(reCount)
import MySQLdb

conn = MySQLdb.connect(host='127.0.0.1', user='root', passwd='1234', db='mydb')

cur = conn.cursor()

li = [
    ('peony', 'usa'),
    ('jeff', 'usa'),
]
reCount = cur.executemany('insert into UserInfo(Name,Address) values(%s,%s)', li)

conn.commit()
cur.close()
conn.close()

print(reCount)
Note: after an insert, cur.lastrowid holds the auto-increment id of the most recently inserted row.
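lastrowid is part of the Python DB-API, so it can be demonstrated with the built-in sqlite3 module (the table name here mirrors the MySQL example but is otherwise illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE UserInfo (id INTEGER PRIMARY KEY AUTOINCREMENT, Name TEXT)')
cur.execute('INSERT INTO UserInfo (Name) VALUES (?)', ('peony',))
print(cur.lastrowid)  # 1, the auto-increment id of the row just inserted
cur.execute('INSERT INTO UserInfo (Name) VALUES (?)', ('jeff',))
print(cur.lastrowid)  # 2
conn.close()
```

With MySQLdb the attribute behaves the same way after an INSERT into a table with an AUTO_INCREMENT primary key.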
2. Deleting data
import MySQLdb

conn = MySQLdb.connect(host='127.0.0.1', user='root', passwd='1234', db='mydb')

cur = conn.cursor()

reCount = cur.execute('delete from UserInfo')

conn.commit()
cur.close()
conn.close()

print(reCount)
3. Updating data
import MySQLdb

conn = MySQLdb.connect(host='127.0.0.1', user='root', passwd='1234', db='mydb')

cur = conn.cursor()

reCount = cur.execute('update UserInfo set Name = %s', ('alin',))

conn.commit()
cur.close()
conn.close()

print(reCount)
4. Querying data
# ############################## fetchone / fetchmany(num) ##############################

import MySQLdb

conn = MySQLdb.connect(host='127.0.0.1', user='root', passwd='1234', db='mydb')
cur = conn.cursor()

reCount = cur.execute('select * from UserInfo')

print(cur.fetchone())
print(cur.fetchone())
cur.scroll(-1, mode='relative')
print(cur.fetchone())
print(cur.fetchone())
cur.scroll(0, mode='absolute')
print(cur.fetchone())
print(cur.fetchone())

cur.close()
conn.close()

print(reCount)


# ############################## fetchall ##############################

import MySQLdb

conn = MySQLdb.connect(host='127.0.0.1', user='root', passwd='1234', db='mydb')
# cur = conn.cursor(cursorclass=MySQLdb.cursors.DictCursor)
cur = conn.cursor()

reCount = cur.execute('select Name,Address from UserInfo')

nRet = cur.fetchall()

cur.close()
conn.close()

print(reCount)
print(nRet)
for i in nRet:
    print(i[0], i[1])