I've recently been doing performance analysis on a MongoDB database and needed to put it under load.
For the load generation I first wrote a multi-threaded program with the threading module, but the test results were disappointing.
From a single machine the read throughput only reached about 1,000 requests/s, while a Java program written by the developers reached 6,000-7,000 requests/s.
This confirms that, constrained by the GIL, Python's multi-threading performance really is underwhelming.
I then switched to the multiprocessing module and generated the load with multiple processes.
Testing showed that multiprocessing performs quite well, on par with the Java program.
The script is as follows:
#!/usr/bin/env python
from pymongo import Connection, MongoClient, MongoReplicaSetClient
import multiprocessing
import time

#connection = MongoClient('mongodb://10.120.11.212:27017/')
#connection = Connection(['10.120.11.122','10.120.11.221','10.120.11.212'], 27017)
'''The database is set up for read/write splitting, so the MongoDB client type has to match'''
connection = MongoReplicaSetClient(
    '10.120.11.122:27017,10.120.11.221:27017,10.120.11.212:27017',
    replicaSet='rs0',
    read_preference=3  # 3 == ReadPreference.SECONDARY_PREFERRED in pymongo 2.x
)
db = connection['cms']
db.authenticate('cms', 'cms')

# Timer decorator: prints how long the wrapped function ran
def func_time(func):
    def _wrapper(*args, **kwargs):
        start = time.time()
        func(*args, **kwargs)
        print func.__name__, 'run:', time.time() - start
    return _wrapper

# Insert test method
def insert(num):
    posts = db.userinfo
    for x in range(num):
        post = {"_id": str(x),
                "author": str(x),
                "text": "My first blog post!"}
        posts.insert(post)

# Query test method
def query(num):
    get = db.device
    for i in xrange(num):
        get.find_one({"scanid": "010000138101010000009aaaaa"})

@func_time
def main(process_num, num):
    pool = multiprocessing.Pool(processes=process_num)
    for i in xrange(num):
        # submit num tasks, each running num queries
        pool.apply_async(query, (num, ))
    pool.close()
    pool.join()
    print "Sub-process(es) done."

if __name__ == "__main__":
    # query(500,1)
    main(800, 500)
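With main(800, 500) the pool submits 500 tasks that each run 500 find_one calls, i.e. 250,000 queries in total; the func_time decorator prints the elapsed time, so requests/s is simply 250,000 divided by that figure.

Note that MongoReplicaSetClient no longer exists in pymongo 3+. A roughly equivalent connection there would look like the sketch below (assuming the same hosts, replica set name, and credentials as above; credentials move into the URI instead of db.authenticate()):

from pymongo import MongoClient

# Replica-set connection with secondary-preferred reads on pymongo 3+
client = MongoClient(
    'mongodb://cms:cms@10.120.11.122:27017,10.120.11.221:27017,'
    '10.120.11.212:27017/cms'
    '?replicaSet=rs0&readPreference=secondaryPreferred')
db = client['cms']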
Originally published at http://www.cnblogs.com/reach296/post