I'm just interested in the difference between OpenMP and MPI. Can MPI also parallelize while loops and recursive functions, like 'task' in OpenMP? And if MPI can run a task-based program, is a reduction over tasks also possible?
You should not look at MPI as parallelizing operations: it is essentially based on distributed memory, so you should start by partitioning the data, and then express the operations on each data subset. In the particular case of a while loop, MPI can only do something useful if each iteration corresponds to a different data item to operate on, so that the iterations can be distributed over processes.
Alternatively, look at an MPI program as consisting of a fixed number of tasks: the MPI processes. These "tasks" are created when your MPI run starts, and they exist until the end of the run.
So: MPI and OpenMP are built on very different models. Sometimes you can translate between the two, but often you cannot. The clear case where you can is the parallel loop over an array: there OpenMP divides the data space over the threads, and in MPI you'd divide the data space over the processes.
As @HighPerformanceMark remarks, if you want dynamic tasks, you have to program in a manager/worker model. However, note that this may involve moving lots of data, so parallel speedup is not guaranteed.