Although I have been playing with pthreads, OpenMP, Intel TBB, and threading in general for a while, I still don't understand the main difference between a message passing interface implementation like OpenMPI and a classic threading library.
Assuming that writing all the boilerplate code for a thread pool is not a problem in my case, and given that I'm using C++, the difference between these two technologies boils down to ... ?
I'm also interested in operating with threads over the network while distributing tasks to all the connected machines.
Right now I'm not considering the limitations in terms of the number of platforms supported by OpenMP/OpenMPI, because I just want to understand how these two concepts work.
As a complement to Mike Seymour's answer:
The main trade-off depends on what you have to share between your processes and threads. With shared memory, you actually share the data between the execution contexts.
With message passing, you need to copy the data to move it between the execution contexts (threads, processes, or processes running on several computers).
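To make that concrete, here is a minimal sketch of the two models (my own illustration, not part of the original answer). With std::thread both workers update the same buffer in place; with MPI, rank 0 has to copy the whole buffer to rank 1. The buffer size and the doubling operation are arbitrary assumptions for the example.

```cpp
// Shared memory (std::thread): both threads work on the same vector in place,
// so nothing is copied between the execution contexts.
#include <thread>
#include <vector>

int main() {
    std::vector<double> data(1000000, 1.0);

    auto work = [&data](std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i)
            data[i] *= 2.0;   // both threads touch the very same memory
    };

    std::thread t1(work, 0, data.size() / 2);
    std::thread t2(work, data.size() / 2, data.size());
    t1.join();
    t2.join();
}
```

```cpp
// Message passing (MPI): the whole vector is explicitly copied from
// rank 0 to rank 1, which may live on a different machine.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    std::vector<double> data(1000000, 1.0);

    if (rank == 0) {
        MPI_Send(data.data(), static_cast<int>(data.size()), MPI_DOUBLE,
                 /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);               // data copied out
    } else if (rank == 1) {
        MPI_Recv(data.data(), static_cast<int>(data.size()), MPI_DOUBLE,
                 /*source=*/0, /*tag=*/0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                                    // data copied in
    }

    MPI_Finalize();
}
```

The MPI version assumes it is launched with at least two ranks, e.g. `mpirun -np 2 ./a.out` with Open MPI; the thread version only ever runs in a single process.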
If your data is small (read: the data transmission time is short) compared to the execution time of your context, then MPI should not add significant overhead compared to shared memory.
Conversely, if the data to be shared is large (the data transmission time is of the same order of magnitude as your execution time), then MPI may not be a good idea.
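A back-of-envelope check (numbers purely illustrative): sending a 1 MB buffer over a ~1 GB/s link costs on the order of 1 ms plus latency. If each context then spends 100 ms computing on that buffer, the messaging overhead is around 1% and MPI is perfectly reasonable; if the computation itself only takes about 1 ms, the copy dominates and shared memory is the better fit.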
Last, if you want to cross the boundaries of a single computer, shared memory is out of the game.
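(For completeness: crossing that boundary with Open MPI is mostly a matter of how you launch the program, e.g. `mpirun -np 8 --hostfile hosts ./my_program`, where `hosts` lists the machines; the file name and process count here are only examples.)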