mongo

1. QA: http://api.mongodb.com/python/current/faq.html

Reference: http://api.mongodb.com/python/current/faq.html#how-does-connection-pooling-work-in-pymongo

client = MongoClient(host, port, maxPoolSize=50, waitQueueMultiple=500, waitQueueTimeoutMS=100)

Create this client once for each process, and reuse it for all operations. It is a common mistake to create a new client for each request, which is very inefficient.

MongoClient should not be created more than once per process.
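A minimal sketch of the recommended pattern, assuming a local MongoDB instance and a hypothetical test_db.events collection: create the client once at module level and reuse it for every operation in the process.

from pymongo import MongoClient

# One client per process; it owns the connection pool, so constructing
# a new MongoClient per request defeats pooling entirely.
client = MongoClient("localhost", 27017, maxPoolSize=50)
db = client["test_db"]  # hypothetical database name

def save_event(event):
    # Every call reuses a socket from the client's shared pool.
    return db.events.insert_one(event)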

When 500 threads are waiting for a socket, the 501st that needs a socket raises ExceededMaxWaiters.

A thread that waits more than 100ms (in this example) for a socket raises ConnectionFailure.

When close() is called by any thread, all idle sockets are closed, and all sockets that are in use will be closed as they are returned to the pool.
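A hedged sketch of how a worker thread might handle pool exhaustion under the settings above, assuming a PyMongo release that still accepts waitQueueMultiple and defines ExceededMaxWaiters (both have since been removed from the driver); host, port and the test_db.items collection are placeholders.

from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

host, port = "localhost", 27017  # placeholder connection settings
client = MongoClient(host, port, maxPoolSize=50,
                     waitQueueMultiple=500, waitQueueTimeoutMS=100)

def read_under_load():
    try:
        # With 50 sockets and up to 500 waiters, this call may block briefly.
        return client.test_db.items.find_one({})
    except ConnectionFailure:
        # Raised when the thread waited longer than waitQueueTimeoutMS (100ms here);
        # back off or report the overload instead of retrying in a tight loop.
        return None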

 

About _id:

PyMongo adds an _id field in this manner for a few reasons:

  • All MongoDB documents are required to have an _id field.
  • If PyMongo were to insert a document without an _id MongoDB would add one itself, but it would not report the value back to PyMongo.
  • Copying the document to insert before adding the _id field would be prohibitively expensive for most high write volume applications.

If you don’t want PyMongo to add an _id to your documents, insert only documents that already have an _id field, added by your application.
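A small sketch of both behaviours, assuming a local server and a hypothetical test_db.docs collection: insert_one() mutates the passed dict by adding _id, and a document that already carries an _id is inserted as-is.

from bson import ObjectId
from pymongo import MongoClient

coll = MongoClient("localhost", 27017).test_db.docs  # hypothetical collection

doc = {"name": "example"}
coll.insert_one(doc)
print(doc["_id"])  # PyMongo set _id on the original dict; no copy was made

doc_with_id = {"_id": ObjectId(), "name": "pre-assigned"}
coll.insert_one(doc_with_id)  # an existing _id is left untouched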

 

What does "CursorNotFound cursor id not valid at server" mean?

Cursors in MongoDB can time out on the server if they’ve been open for a long time without any operations being performed on them. This can lead to a CursorNotFound exception being raised when attempting to iterate the cursor.

How do I change the timeout value for cursors?

MongoDB doesn’t support custom timeouts for cursors, but cursor timeouts can be turned off entirely. Pass no_cursor_timeout=True to find().
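A minimal sketch, assuming a local server and a hypothetical test_db.events collection: when the timeout is disabled, close the cursor explicitly so it does not linger on the server.

from pymongo import MongoClient

coll = MongoClient("localhost", 27017).test_db.events  # hypothetical collection

# Disable the server-side idle timeout (10 minutes by default) for this cursor.
cursor = coll.find({}, no_cursor_timeout=True)
try:
    for doc in cursor:
        pass  # slow per-document processing would go here
finally:
    # A cursor opened with no_cursor_timeout=True must be closed explicitly,
    # otherwise it stays open on the server.
    cursor.close()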

 

From version 1.3 of the MongoDB driver onward, connections are made through the MongoClient class, and its default policy is persistent (long-lived) connections, which cannot be changed. The number of connections therefore depends on the number of fpm worker processes. If there are too many fpm workers, you inevitably end up with too many connections: with 1000 fpm workers across all your machines, 1000 persistent connections will be created, and since the MongoDB server spends at least 1 MB of memory per connection, roughly 1 GB of memory is gone.

The direct fix is to call close() after each use, so the server does not have to keep a large number of connections open. But close() has a pitfall: by default it only closes the write connection (for example the master, or the primary of a replica set). To close all connections you must pass true, i.e. $mongo->close(true).

Closing the connection after every request effectively reduces the server's concurrent connection count, unless your operations themselves are very slow. It has its own cost, though: the previous TCP connection cannot be reused, so a new one must be established each time, which adds noticeable latency, especially with replica sets, where multiple TCP connections have to be created.

In the end there are probably only two options: reduce the number of fpm workers, or build your own connection pool that collapses the many client connections into a fixed number of connections to MongoDB.
