
Is this strategy parallel computing or distributed computing? (MPI)


I have a function that calculates a fitness value, say func(). In my implementation, I have used MPI for parallelization.

There are 3 machines in the MPI cluster, connected via LAN. They share files over NFS, but memory is not shared among them.

The main while loop runs 500 times.

Inside this while loop, I use MPI to parallelize 9 func() calls. That is, each time through the main while loop, func() is called 9 times; I parallelized this so that each of the 3 nodes executes 3 of the func() calls and returns its results to the master node.

MPI workflow diagram: (see attached image)

What happens inside each node: (see attached image)

This continues for all 500 iterations of the while loop (i.e., in each iteration, 9 func() calls are parallelized again).
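As a rough sketch of the scheduling described above (hypothetical helper names, not the actual implementation): with 9 func() calls and 3 ranks, a round-robin split gives each rank 3 calls per iteration.

```python
def assign_tasks(num_tasks, num_ranks):
    """Round-robin assignment of task indices to MPI ranks.

    For 9 func() calls and 3 nodes, each rank receives 3 calls.
    """
    return {rank: [t for t in range(num_tasks) if t % num_ranks == rank]
            for rank in range(num_ranks)}

# Each of the 500 iterations would reuse the same schedule:
schedule = assign_tasks(9, 3)
# rank 0 gets tasks [0, 3, 6], rank 1 gets [1, 4, 7], rank 2 gets [2, 5, 8]
```

In real MPI code, the master would send each rank its task indices, each rank would evaluate func() on them, and the results would be gathered back.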

Is this strategy called parallel computing or distributed computing?

Going by the definitions, parallel computing means executing multiple tasks in parallel, while distributed computing means distributing a single task across multiple nodes working toward a common goal. I feel this is parallel computing.

But since I am executing on different machines, should I consider it distributed computing?

Please clear this doubt.


Solution

  • If you use distributed computing to solve a single problem, then it is also parallel computing: you are using multiple computers (or processors) to solve a single problem, which satisfies the simple definition of parallel computing.

    Parallel computing uses two or more processors (cores, computers) in combination to solve a single problem.

    But not all parallel computing is distributed. You can perform parallel tasks to solve a problem using shared memory (using programming models like OpenMP), in which case you only use a single computer.

    Personal opinion: you can use MPI to solve a problem on a single computer (with or without shared memory), but it remains parallel computing; by the broad definition of distributed computing, there should be multiple computers for it to count as distributed, even though each MPI process has its own memory space and uses message passing.

    A distributed computer system consists of multiple software components that are on multiple computers, but run as a single system.

    In your case it is both distributed and parallel. As Gilles Gouaillardet pointed out in comments:

    Your program is MPI, so it is both parallel (several tasks collaborate to achieve a goal) and distributed (each task has its own memory space, and communicates with other tasks via messages - e.g. no shared memory)
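    That "own memory space, communicate via messages" model can be mimicked on one machine with the standard library (a sketch only, using a stand-in func(x) = x*x as an assumption; real MPI would pass messages between machines). Each worker process has a private address space, like an MPI rank, and sends results back to the master as messages on a queue rather than through shared memory.

    ```python
    from multiprocessing import Process, Queue

    def func(x):
        # stand-in for the fitness function (assumption)
        return x * x

    def worker(rank, tasks, out):
        # Each process has its own memory space, like an MPI rank;
        # results travel back to the master as messages on the queue.
        out.put((rank, [func(t) for t in tasks]))

    if __name__ == "__main__":
        out = Queue()
        # 3 "ranks", each evaluating 3 of the 9 tasks
        procs = [Process(target=worker, args=(r, [r, r + 3, r + 6], out))
                 for r in range(3)]
        for p in procs:
            p.start()
        results = dict(out.get() for _ in procs)
        for p in procs:
            p.join()
    ```

    Running the three workers as threads instead would give them one shared address space, which is the shared-memory (OpenMP-style) model; it is the separate address spaces plus message passing that make the MPI setup distributed.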