Merge pull request #150 from JuliaGeodynamics/add-docs-for-checkpointing
Add docs for checkpointing
albert-de-montserrat authored Oct 3, 2024
2 parents 23599bb + ccbb47f commit e8b4ceb
Showing 4 changed files with 90 additions and 1 deletion.
2 changes: 2 additions & 0 deletions docs/make.jl
@@ -21,6 +21,8 @@ makedocs(;
"field_advection2D_MPI.md",
"field_advection3D.md",
],
"I/O" =>"IO.md",
"Mixed GPU/CPU" =>"mixed_CPU_GPU.md",
"Public API" => "API.md"
],
)
42 changes: 42 additions & 0 deletions docs/src/IO.md
@@ -0,0 +1,42 @@
# Checkpointing

## Writing checkpoint files

It is customary to employ [checkpointing](https://en.wikipedia.org/wiki/Application_checkpointing) during simulations that involve many time steps. A checkpoint file is periodically written to disk, and allows the simulation to restart from the last checkpoint written.
Since checkpoint files may occupy a lot of disk space, it is best to store only the essential particle information. Here is how to write it to a checkpoint file in the [jld2 format](https://github.com/JuliaIO/JLD2.jl):

```julia
using JLD2

# copy the data back to host memory (plain `Array`s) before saving
jldsave(
    "my_file.jld2";
    particles     = Array(particles),
    phases        = Array(phases),
    phase_ratios  = Array(phase_ratios),
    particle_args = Array.(particle_args),
)
```
This will save the particle information to the file `my_file.jld2`, which can later be loaded to restart the simulation.
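
In a long run, checkpoints are typically written every fixed number of time steps inside the time loop. A minimal sketch, where `nt` and `checkpoint_every` are hypothetical names and the step counter `it` is stored alongside the particle data:

```julia
using JLD2

checkpoint_every = 100      # hypothetical write frequency
for it in 1:nt
    # ... advect particles and update fields ...
    if it % checkpoint_every == 0
        jldsave(
            "my_file.jld2";
            particles     = Array(particles),
            phases        = Array(phases),
            phase_ratios  = Array(phase_ratios),
            particle_args = Array.(particle_args),
            it            = it, # store the step counter to resume the loop later
        )
    end
end
```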

If the files are too large, one may cast all the fields of the particle structures to `Float32`. While this saves disk space, it may hinder the reproducibility of the results upon restart.

```julia
jldsave(
    "my_file.jld2";
    particles     = Array(Float32, particles),
    phases        = Array(Float32, phases),
    phase_ratios  = Array(Float32, phase_ratios),
    particle_args = Array.(Float32, particle_args),
)
```

## Loading a checkpoint file

In order to restart a simulation, one needs to load the checkpoint file of interest. Here is how to read the particle information back from the checkpoint file `my_file.jld2`:

```julia
using JLD2

data          = load("my_file.jld2") # Dict mapping the saved names to their data
particles     = TA(backend)(Float64, data["particles"])
phases        = TA(backend)(Float64, data["phases"])
phase_ratios  = TA(backend)(Float64, data["phase_ratios"])
particle_args = TA(backend).(Float64, data["particle_args"])
```
The function `TA(backend)` automatically casts the data to the array type appropriate for the requested backend.
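
If the step counter was stored in the checkpoint (as in the hypothetical periodic-checkpointing sketch above), the time loop can resume where it left off:

```julia
it0 = data["it"]    # hypothetical step counter saved at checkpoint time
for it in it0+1:nt
    # ... continue the simulation ...
end
```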
2 changes: 1 addition & 1 deletion docs/src/field_advection3D.md
@@ -6,7 +6,7 @@ First we load JustPIC
using JustPIC
```

- and the correspondent 3D module
+ and the corresponding 3D module

```julia
using JustPIC._3D
45 changes: 45 additions & 0 deletions docs/src/mixed_CPU_GPU.md
@@ -0,0 +1,45 @@
# Mixed CPU and GPU computations

If GPU memory is a limiting factor for your computation, it may be preferable to carry out particle operations on the CPU rather than on the GPU.
This basically involves four steps (see the end-to-end sketch after the list):

1) *At the top of the script*. The JustPIC backend must be set to the CPU, in contrast to the other packages employed (e.g. ParallelStencil), which keep running on the GPU:
```julia
const backend = JustPIC.CPUBackend
```

2) *At the memory-allocation stage*. A GPU copy of the relevant CPU arrays must be allocated. For example, the phase ratios on the mesh vertices:
```julia
phv_GPU = @zeros(nx+1, ny+1, nz+1, celldims = (N_phases,))
```
where `N_phases` is the number of different material phases and `@zeros()` allocates on the GPU.

Similarly, CPU buffers must be allocated for the GPU arrays that will later be copied back:
```julia
V_CPU = (
    x = zeros(nx+1, ny+2, nz+2),
    y = zeros(nx+2, ny+1, nz+2),
    z = zeros(nx+2, ny+2, nz+1),
)
```
where `zeros()` allocates in CPU memory.

3) *At each time step*. The particles are stored in CPU memory, so some information must be transferred from the CPU to the GPU. For example, here is a transfer of the phase proportions:

```julia
phv_GPU.data .= CuArray(phase_ratios.vertex).data
```
!!! note
    Here we explicitly write `CuArray`; a more backend-agnostic constructor (something like a generic `GPUArray`) would be preferable, if such a thing exists.

4) *At each time step*. Once the velocity computations are finalised on the GPU, the velocities need to be transferred back to the CPU:

```julia
V_CPU.x .= TA(backend)(V.x)
V_CPU.y .= TA(backend)(V.y)
V_CPU.z .= TA(backend)(V.z)
```
Advection can then be applied on the CPU by calling the `advection!()` function:

```julia
advection!(particles, RungeKutta2(), values(V_CPU), (grid_vx, grid_vy, grid_vz), Δt)
```
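
For orientation, here is a hedged skeleton assembling the four steps into one script. The grid sizes, `nt`, and the GPU velocity solve are placeholders, and the allocation of `particles`, `phase_ratios`, `V`, and the velocity grids is omitted:

```julia
using CUDA, ParallelStencil
using JustPIC, JustPIC._3D
@init_parallel_stencil(CUDA, Float64, 3)    # stencil kernels run on the GPU

const backend = JustPIC.CPUBackend          # (1) particle operations stay on the CPU

nx, ny, nz = 64, 64, 64                     # hypothetical grid size
N_phases   = 2                              # hypothetical number of material phases
nt         = 100                            # hypothetical number of time steps

# (2) GPU copy of the phase ratios, CPU copies of the velocities
phv_GPU = @zeros(nx+1, ny+1, nz+1, celldims = (N_phases,))
V_CPU   = (
    x = zeros(nx+1, ny+2, nz+2),
    y = zeros(nx+2, ny+1, nz+2),
    z = zeros(nx+2, ny+2, nz+1),
)

# ... allocate particles, phase_ratios, V, and the velocity grids here ...

for it in 1:nt
    # (3) CPU → GPU: phase proportions needed by the GPU kernels
    phv_GPU.data .= CuArray(phase_ratios.vertex).data

    # ... solve for the velocities V on the GPU ...

    # (4) GPU → CPU: velocities needed by the particle operations
    V_CPU.x .= TA(backend)(V.x)
    V_CPU.y .= TA(backend)(V.y)
    V_CPU.z .= TA(backend)(V.z)

    # advect the particles on the CPU
    advection!(particles, RungeKutta2(), values(V_CPU), (grid_vx, grid_vy, grid_vz), Δt)
end
```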
