By default, each thread reserves 1 MB of memory for its stack.
A 32-bit process has only about 2 GB of usable address space, so in theory a process can create at most roughly 2048 threads.
Of course that memory cannot be given over entirely to thread stacks, so the actual number is somewhat smaller than this.
You can also change the default stack size at link time; making it smaller lets you create more threads.
For example, reducing the default stack size to 512 KB raises the theoretical maximum to about 4096 threads.
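As a rough illustration, assuming the MSVC toolchain (the object and executable names below are placeholders), the default stack reserve can be set at link time with the linker's /STACK option, or patched into an already-built binary with EDITBIN; 524288 bytes corresponds to the 512 KB example above:

rem Link with a 512 KB default stack reserve (hypothetical object file name):
link /STACK:524288 main.obj

rem Or change an existing executable (hypothetical file name):
editbin /STACK:524288 app.exe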
No matter how much physical memory you install, the number of threads a process can create is still bounded by that 2 GB address space.
For instance, even if your machine has 64 GB of physical RAM, each 32-bit process still gets a 4 GB virtual address space, of which only 2 GB is usable from user mode.
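A minimal sketch of how to observe this from code, using only the Win32 GetSystemInfo API: the user-mode address range it reports is fixed by the process address-space layout and does not grow with installed RAM.

#include <windows.h>
#include <stdio.h>

int main()
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    /* On default 32-bit Windows this range spans about 2 GB, regardless of
       how much physical memory the machine has. */
    printf("user-mode address range: %p - %p\n",
           si.lpMinimumApplicationAddress,
           si.lpMaximumApplicationAddress);
    return 0;
}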
Within a single machine, how many threads can be created is also limited by memory in another way: every thread object consumes some non-paged pool, and non-paged pool is a finite resource, so once it is exhausted no more threads can be created.
The more physical memory the machine has, the higher this machine-wide limit on the total number of threads becomes.
Why does a program on Windows terminate abnormally once a single process has spawned roughly 2000 threads?
The reason is that on 32-bit Windows a process can use at most 2 GB of virtual memory, while a thread's default stack size (StackSize) is 1024 KB (1 MB). As the thread count approaches 2000, 2000 * 1024 KB ≈ 2 GB and the address space is effectively exhausted.
The MSDN documentation puts it this way:
「The number of threads a process can create is limited by the available virtual memory. By default, every thread has one megabyte of stack space. Therefore, you can create at most 2,028 threads. If you reduce the default stack size, you can create more threads. However, your application will have better performance if you create one thread per processor and build queues of requests for which the application maintains the context information. A thread would process all requests in a queue before processing requests in the next queue.」
How can the 2000-thread limit be exceeded?
能夠經過修改CreateThread參數來縮小線程棧StackSize,例如spa
#include <windows.h>
#include <stdio.h>

#define MAX_THREADS 50000

DWORD WINAPI ThreadProc(LPVOID lpParam)
{
    /* Keep every thread alive so the created threads accumulate. */
    while (1) {
        Sleep(100000);
    }
    return 0;
}

int main()
{
    /* Static so these large arrays do not live on main's own 1 MB stack. */
    static DWORD dwThreadId[MAX_THREADS];
    static HANDLE hThread[MAX_THREADS];

    for (int i = 0; i < MAX_THREADS; ++i)
    {
        /* STACK_SIZE_PARAM_IS_A_RESERVATION makes the second argument the
           stack reserve size (64 bytes here, rounded up by the system)
           rather than the commit size, so each thread reserves far less
           address space than the default 1 MB. */
        hThread[i] = CreateThread(0, 64, ThreadProc, 0,
                                  STACK_SIZE_PARAM_IS_A_RESERVATION,
                                  &dwThreadId[i]);
        if (0 == hThread[i])
        {
            /* Creation failed: report the error code and the count reached. */
            DWORD e = GetLastError();
            printf("error %lu after %d threads\r\n", e, i);
            break;
        }
    }
    ThreadProc(0);
    return 0;
}
Server-side program design
If your server is designed so that each incoming client connection gets its own thread, it will run into this roughly-2000-thread limit (for a given amount of memory and number of CPUs). The following approaches are recommended instead; a small work-item sketch follows the list.
The "one thread per client" model is well-known not to scale beyond a dozen clients or so. If you're going to be handling more than that many clients simultaneously, you should move to a model where instead of dedicating a thread to a client, you instead allocate an object. (Someday I'll muse on the duality between threads and objects.) Windows provides I/O completion ports and a thread pool to help you convert from a thread-based model to a work-item-based model.
1. Serve many clients with each thread, and use nonblocking I/O and level-triggered readiness notification
2. Serve many clients with each thread, and use nonblocking I/O and readiness change notification
3. Serve many clients with each server thread, and use asynchronous I/O
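As a rough sketch of the work-item model mentioned in the quote above (ClientContext and the fixed count of 100 are hypothetical stand-ins for real per-connection state), the Windows thread pool API available since Vista lets each piece of client work run as a queued callback on a small pool of threads instead of on a dedicated thread per client:

#include <windows.h>
#include <stdio.h>

/* Hypothetical per-client context; a real server would keep the socket and
   any protocol state here instead of dedicating a thread to the client. */
typedef struct {
    int clientId;
} ClientContext;

/* Work callback: occupies a pool thread only while there is work to do. */
VOID CALLBACK HandleClient(PTP_CALLBACK_INSTANCE instance, PVOID context, PTP_WORK work)
{
    ClientContext* client = (ClientContext*)context;
    printf("handling client %d on pool thread %lu\n",
           client->clientId, GetCurrentThreadId());
}

int main()
{
    ClientContext clients[100];
    PTP_WORK work[100];

    /* Queue 100 pieces of "client" work instead of creating 100 threads. */
    for (int i = 0; i < 100; ++i) {
        clients[i].clientId = i;
        work[i] = CreateThreadpoolWork(HandleClient, &clients[i], NULL);
        if (work[i]) SubmitThreadpoolWork(work[i]);
    }
    /* Wait for all callbacks to finish, then release the work objects. */
    for (int i = 0; i < 100; ++i) {
        if (work[i]) {
            WaitForThreadpoolWorkCallbacks(work[i], FALSE);
            CloseThreadpoolWork(work[i]);
        }
    }
    return 0;
}

For network I/O the same idea is usually combined with I/O completion ports (CreateIoCompletionPort, GetQueuedCompletionStatus), so a handful of threads can service the completions of many thousands of connections.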