Merge pull request #150 from JuliaGeodynamics/add-docs-for-checkpointing
Add docs for checkpointing
Showing 4 changed files with 90 additions and 1 deletion.
# Checkpointing

## Writing checkpoint files
It is customary to employ [checkpointing](https://en.wikipedia.org/wiki/Application_checkpointing) during simulations that involve many time steps. A checkpoint file is then periodically written to disk, and it allows a simulation to be restarted from the last checkpoint written. Moreover, checkpoint files may occupy a lot of disk space, so only the essential particle information is stored. Here is how to write it to a checkpoint file in [jld2 format](https://github.com/JuliaIO/JLD2.jl):
```julia
jldsave(
    "my_file.jld2";
    particles     = Array(particles),
    phases        = Array(phases),
    phase_ratios  = Array(phase_ratios),
    particle_args = Array.(particle_args),
)
```
This will save the particle information to the file `my_file.jld2`, which can be reused in order to restart a simulation.
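In practice one would write such a file at a regular interval inside the time loop rather than at every step. Below is a minimal sketch, assuming a step counter `it`, a total number of steps `nt`, and an illustrative checkpoint interval `ncheckpoint`; none of these names are part of the JustPIC API.

```julia
using JLD2

ncheckpoint = 100                        # write a checkpoint every 100 steps (illustrative value)

for it in 1:nt
    # ... advect particles and update fields here ...

    if it % ncheckpoint == 0
        jldsave(
            "my_file.jld2";              # overwriting a single file limits disk usage
            particles     = Array(particles),
            phases        = Array(phases),
            phase_ratios  = Array(phase_ratios),
            particle_args = Array.(particle_args),
        )
    end
end
```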
If file sizes are huge, one may cast all the fields of the particle structures to `Float32`. While this saves disk space, it may hinder reproducibility upon restart.
```julia
jldsave(
    "my_file.jld2";
    particles     = Array(Float32, particles),
    phases        = Array(Float32, phases),
    phase_ratios  = Array(Float32, phase_ratios),
    particle_args = Array.(Float32, particle_args),
)
```
## Loading a checkpoint file
In order to restart a simulation, one needs to load the checkpoint file of interest. This is how to read the particle information from the checkpoint file `my_file.jld2`:
```julia
data          = load("my_file.jld2")
particles     = TA(backend)(Float64, data["particles"])
phases        = TA(backend)(Float64, data["phases"])
phase_ratios  = TA(backend)(Float64, data["phase_ratios"])
particle_args = TA(backend).(Float64, data["particle_args"])
```
The function `TA(backend)` will automatically cast the data to the appropriate type, depending on the requested backend.
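Here `backend` is the backend object defined at the top of the simulation script, and it must match the backend the restarted run will use. A sketch of the two common choices follows; the GPU backend name assumes JustPIC's CUDA extension is loaded via `using CUDA`.

```julia
using JustPIC

const backend = JustPIC.CPUBackend   # restart on the CPU
# const backend = CUDABackend        # restart on the GPU (requires `using CUDA`)
```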
# Mixed CPU and GPU computations
If GPU memory is a limiting factor for your computation, it may be preferable to carry out particle operations on the CPU rather than on the GPU. This basically involves four steps:
1) *At the top of the script*. The JustPIC backend must be set to CPU, in contrast with the other packages employed (e.g. ParallelStencil), which still target the GPU (a fuller preamble sketch is given after the snippet below):
```julia
const backend = JustPIC.CPUBackend
```
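For clarity, here is a sketch of what such a preamble might look like; a 3-D CUDA setup and JustPIC's 3-D module are assumed for illustration, so adapt it to your own script:

```julia
using CUDA, ParallelStencil
@init_parallel_stencil(CUDA, Float64, 3)    # stencil kernels and @zeros target the GPU

using JustPIC, JustPIC._3D
const backend = JustPIC.CPUBackend          # JustPIC particle operations stay on the CPU
```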
2) *At memory allocation stage*. Copies of the relevant CPU arrays must be allocated in GPU memory. For example, the phase ratios on mesh vertices:
```julia
phv_GPU = @zeros(nx+1, ny+1, nz+1, celldims=(N_phases))
```
where `N_phases` is the number of different material phases and `@zeros()` allocates on the GPU.
Similarly, CPU buffers must be allocated for the GPU arrays that will later be copied back to CPU memory. For example, the velocity field:
```julia
V_CPU = (
    x = zeros(nx+1, ny+2, nz+2),
    y = zeros(nx+2, ny+1, nz+2),
    z = zeros(nx+2, ny+2, nz+1),
)
```
where `zeros()` allocates in CPU memory.
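For reference, the GPU velocity field `V` used in the steps below could be allocated with `@zeros` in the same layout; this is only a sketch, as the flow solver may allocate these arrays itself:

```julia
# GPU velocity field matching the CPU buffers above (illustrative)
V = (
    x = @zeros(nx+1, ny+2, nz+2),
    y = @zeros(nx+2, ny+1, nz+2),
    z = @zeros(nx+2, ny+2, nz+1),
)
```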
3) *At each time step*. The particles are stored in CPU memory, so some information has to be transferred from the CPU to the GPU memory. For example, here is the transfer of the phase proportions:
```julia
phv_GPU.data .= CuArray(phase_ratios.vertex).data
```
!!! note
    `CuArray` is written explicitly here, which ties this snippet to CUDA; a more generic, backend-agnostic array type (something like a `GPUArray`) would be preferable if one exists.
4) *At each time step*. Once the velocity computations are finalised on the GPU, the velocities need to be transferred to the CPU:
```julia
V_CPU.x .= TA(backend)(V.x)
V_CPU.y .= TA(backend)(V.y)
V_CPU.z .= TA(backend)(V.z)
```
Advection can then be applied by calling the `advection!()` function with the CPU copies of the velocity field:
```julia
advection!(particles, RungeKutta2(), values(V_CPU), (grid_vx, grid_vy, grid_vz), Δt)
```
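Putting the per-time-step transfers together, a minimal sketch of the mixed CPU/GPU time loop could look as follows; `compute_velocity!` and `nt` are illustrative placeholders for the GPU solver and the number of time steps, not part of the JustPIC API:

```julia
for it in 1:nt
    # transfer the CPU phase proportions to the GPU (step 3)
    phv_GPU.data .= CuArray(phase_ratios.vertex).data

    # compute velocities on the GPU (placeholder for the actual stencil solver)
    compute_velocity!(V, phv_GPU)

    # copy the GPU velocities back to the CPU buffers (step 4)
    V_CPU.x .= TA(backend)(V.x)
    V_CPU.y .= TA(backend)(V.y)
    V_CPU.z .= TA(backend)(V.z)

    # advect the CPU-resident particles with the CPU velocity copies
    advection!(particles, RungeKutta2(), values(V_CPU), (grid_vx, grid_vy, grid_vz), Δt)
end
```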