Tags: multithreading, threadpool, processors

Threads and Processors


I'm currently learning about multithreading, threads, thread pools, and such. I have read that the number of threads cannot exceed the number of logical processors your computer has (or at least that there is no advantage to having more, since your CPU cannot run more simultaneously).

So what is the expected behavior if you write code that creates hundreds of threads on a computer with, say, 12 logical processors? Do they queue up? Do they wait for each other? Or does it give you an error? If you have a process that could benefit from 100 continuously running threads but only have 12 cores, what is the best way to handle this? I often open Task Manager and see hundreds of processes and thousands of threads running. How does that work?

Also, what if I'm running a program in Windows alongside a bunch of other applications (e.g. Chrome, MS Excel, Skype, etc.) and perhaps a bunch of background services (e.g. Windows Defender, Wi-Fi services, etc.)? Do these other applications take up logical processors, thus decreasing the number of logical processors available to my threaded program?


Solution

  • As Thilo hinted at, modern personal computers are perpetually creating, executing, and destroying dozens if not hundreds of threads and processes at any given slice of time. The number of threads actually executing on the CPU cannot exceed the number of logical cores; however, that does not mean there cannot be many more threads waiting to be executed.

    What is the expected behavior if you write code that creates hundreds of threads on a computer with, say, 12 logical processors?

    If we assume that the environment (framework, platform, language, OS, hardware) being used to create these threads can support that many concurrent threads, then each thread is scheduled by the operating system based on the number of available cores and the priority of that thread relative to other running threads and processes. This behavior can vary drastically depending on the specific operating system, toolset, program type (kernel mode vs. user mode on Windows), and hardware running the code.
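    As a sketch of that scheduling behavior (using Python here purely for illustration; the worker function is hypothetical), the snippet below creates far more threads than cores. Nothing errors: the OS simply time-slices the excess threads onto the available cores until all of them complete.

    ```python
    import os
    import threading

    def worker(results, i):
        # Trivial placeholder workload; the OS schedules each thread
        # onto a free logical core as time slices become available.
        results[i] = i * i

    results = [None] * 100
    threads = [threading.Thread(target=worker, args=(results, i))
               for i in range(100)]

    for t in threads:
        t.start()   # Far more threads than cores: no error, they queue.
    for t in threads:
        t.join()

    print(os.cpu_count())  # e.g. 12 logical processors on the machine asked about
    print(all(r is not None for r in results))  # True: every thread eventually ran
    ```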

    As a side note, traditional threads can be expensive to use because switching between them forces the processor to do a context switch (flush caches, load the new context, resume execution). Other techniques, such as the Task Parallel Library for .NET or the Parallel Patterns Library for C++, can work around this cost to an extent when solving specific problems.
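    Those two libraries are .NET- and C++-specific, but the underlying idea, reusing a small fixed set of threads instead of spawning one per task, exists in most environments. As a rough Python analogue (an assumption for illustration, not what the answer's libraries do internally), `concurrent.futures.ThreadPoolExecutor` amortizes thread-creation cost across many tasks:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def square(n):
        # Each call runs on one of a small, reused pool of threads,
        # so 100 tasks do not pay for 100 thread creations.
        return n * n

    # 4 worker threads service 100 tasks; surplus tasks queue inside the pool.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(square, range(100)))

    print(results[:5])  # [0, 1, 4, 9, 16]
    ```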

    If you have a process that could benefit from 100 continuously running threads, but only have 12 cores, what is the best way to handle this?

    This depends on the task at hand and the environment you are working in. Asynchronous programming is a very big topic in computer science, and as such there are a large number of techniques and libraries available, each with its own benefits and drawbacks.
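    As one hedged example of such a technique (sketched in Python; the `fetch` coroutine is a stand-in for any I/O-bound work), cooperative multitasking lets a single OS thread interleave 100 concurrent tasks, so "100 continuously running threads" often is not needed at all:

    ```python
    import asyncio

    async def fetch(i):
        # Simulated I/O wait; while one task is suspended here,
        # the event loop runs the other tasks on the same thread.
        await asyncio.sleep(0.01)
        return i

    async def main():
        # 100 concurrent tasks, zero extra OS threads.
        return await asyncio.gather(*(fetch(i) for i in range(100)))

    results = asyncio.run(main())
    print(len(results))  # 100
    ```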