
Merge pull request #119 from nesi/Johnryder23-patch-1
Update 064-parallel.md
Johnryder23 authored Oct 7, 2024
2 parents 1410802 + a6b8432 commit 43892ca
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions _episodes/064-parallel.md
@@ -25,9 +25,9 @@ To understand the different types of Parallel Computing we first need to clarify

**CPU**: Unit that does the computations.

-**Task**: One or more CPUs that share memory.
+**Task**: Like a thread, but multiple tasks do not need to share memory.

-**Node**: The physical hardware. The upper limit on how many CPUs can be in a task.
+**Node**: A single computer of the cluster. Nodes are made up of CPUs and RAM.

**Shared Memory**: When multiple CPUs are used within a single task.
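For example, these terms map onto a Slurm resource request roughly as follows (a minimal sketch; the numbers and the executable name `my_program` are illustrative placeholders):

```bash
#!/bin/bash -e
#SBATCH --nodes=2            # node: a single computer of the cluster
#SBATCH --ntasks=4           # tasks do not need to share memory with each other
#SBATCH --cpus-per-task=8    # the CPUs within one task share memory

srun my_program              # hypothetical executable; srun launches one copy per task
```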

@@ -51,7 +51,7 @@ Number of threads to use is specified by the Slurm option `--cpus-per-task`.
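For example, a single-task multithreaded job might be requested like this (a minimal sketch; `my_threaded_program` is a hypothetical executable, and `OMP_NUM_THREADS` applies specifically to OpenMP programs):

```bash
#!/bin/bash -e
#SBATCH --ntasks=1           # a single task, so all CPUs share memory
#SBATCH --cpus-per-task=8    # eight CPUs available for threads

export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK}"  # OpenMP programs read this variable
srun my_threaded_program     # hypothetical multithreaded executable
```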

Distributed-memory multiprocessing divides work among _tasks_; a task may contain multiple CPUs (provided they all share memory, as discussed previously).

-Message Passing Interface (MPI) is a communication standard for distributed-memory multiprocessing. While there are other standards, often 'MPI' is used synonymously with distributed parallelism.
+Message Passing Interface (MPI) is a communication standard for distributed-memory multiprocessing. While there are other standards, often 'MPI' is used synonymously with distributed parallelism.

Each task has its own exclusive memory; tasks can be spread across multiple nodes, communicating via an _interconnect_. This allows MPI jobs to be much larger than shared-memory jobs. It also means that memory requirements are more likely to increase proportionally with the number of CPUs.
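A distributed-memory job therefore requests many tasks rather than many CPUs per task (a minimal sketch; `my_mpi_program` is a hypothetical MPI executable):

```bash
#!/bin/bash -e
#SBATCH --ntasks=16          # sixteen tasks, each with its own exclusive memory
#SBATCH --cpus-per-task=1    # one CPU per task
#SBATCH --mem-per-cpu=2G     # memory request grows with the number of tasks

srun my_mpi_program          # hypothetical MPI executable; srun starts one copy per task
```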

