diff --git a/docs/make.jl b/docs/make.jl
index 26732ee4..83ac8126 100644
--- a/docs/make.jl
+++ b/docs/make.jl
@@ -21,6 +21,8 @@ makedocs(;
             "field_advection2D_MPI.md",
             "field_advection3D.md",
         ],
+        "I/O" => "IO.md",
+        "Mixed GPU/CPU" => "mixed_CPU_GPU.md",
         "Public API" => "API.md"
     ],
 )
diff --git a/docs/src/IO.md b/docs/src/IO.md
new file mode 100644
index 00000000..d72046ac
--- /dev/null
+++ b/docs/src/IO.md
@@ -0,0 +1,42 @@
+# Checkpointing
+
+## Writing checkpoint files
+
+It is customary to employ [checkpointing](https://en.wikipedia.org/wiki/Application_checkpointing) in simulations that involve many time steps. A checkpoint file then needs to be written to disk; such a file allows restarting a simulation from the last checkpoint written to disk.
+Moreover, checkpoint files may occupy a lot of disk space, so it is best to store only the essential particle information. Here is how to write this information to a checkpoint file in the [jld2 format](https://github.com/JuliaIO/JLD2.jl):
+
+```julia
+using JLD2
+
+jldsave(
+    "my_file.jld2";
+    particles = Array(particles),
+    phases = Array(phases),
+    phase_ratios = Array(phase_ratios),
+    particle_args = Array.(particle_args),
+)
+```
+This will save the particle information to the file `my_file.jld2`, which can then be reused to restart the simulation.
+
+If the files become too large, one may cast all fields of the particle structures to `Float32`. While this spares disk space, it may hinder reproducibility upon restart:
+
+```julia
+jldsave(
+    "my_file.jld2";
+    particles = Array(Float32, particles),
+    phases = Array(Float32, phases),
+    phase_ratios = Array(Float32, phase_ratios),
+    particle_args = Array.(Float32, particle_args),
+)
+```
+
+## Loading a checkpoint file
+
+In order to restart a simulation, one needs to load the checkpoint file of interest. This is how to read the particle information from the checkpoint file `my_file.jld2`:
+
+```julia
+using JLD2
+
+data = load("my_file.jld2")
+particles = TA(backend)(Float64, data["particles"])
+phases = TA(backend)(Float64, data["phases"])
+phase_ratios = TA(backend)(Float64, data["phase_ratios"])
+particle_args = TA(backend).(Float64, data["particle_args"])
+```
+The function `TA(backend)` will automatically cast the data to the appropriate array type for the requested backend.
diff --git a/docs/src/field_advection3D.md b/docs/src/field_advection3D.md
index d23d378a..ed09e6d1 100644
--- a/docs/src/field_advection3D.md
+++ b/docs/src/field_advection3D.md
@@ -6,7 +6,7 @@ First we load JustPIC
 using JustPIC
 ```
 
-and the correspondent 3D module
+and the corresponding 3D module
 
 ```julia
 using JustPIC._3D
diff --git a/docs/src/mixed_CPU_GPU.md b/docs/src/mixed_CPU_GPU.md
new file mode 100644
index 00000000..5dbc4f46
--- /dev/null
+++ b/docs/src/mixed_CPU_GPU.md
@@ -0,0 +1,45 @@
+# Mixed CPU and GPU computations
+
+If GPU memory is a limiting factor for your computation, it may be preferable to carry out the particle operations on the CPU rather than on the GPU.
+This basically involves five steps:
+
+1) *At the top of the script*. The JustPIC backend must be set to CPU, while the other packages in use (e.g. ParallelStencil) keep their GPU backend:
+```julia
+const backend = JustPIC.CPUBackend
+```
+
+2) *At the memory allocation stage*. A GPU copy of the relevant CPU arrays must be allocated. For example, for the phase ratios on the mesh vertices:
+```julia
+phv_GPU = @zeros(nx+1, ny+1, nz+1, celldims = (N_phases,))
+```
+where `N_phases` is the number of different material phases and `@zeros()` allocates on the GPU.
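+This requires that ParallelStencil has been initialised with a GPU backend, so that `@zeros` allocates GPU memory while JustPIC keeps using the CPU backend set in step 1. A minimal sketch of such a setup (CUDA is assumed as the GPU backend, and the grid resolution and number of phases below are placeholder values) could be:
+```julia
+using CUDA, ParallelStencil
+@init_parallel_stencil(CUDA, Float64, 3)   # ParallelStencil allocates and computes on the GPU
+
+nx = ny = nz = 64                          # placeholder grid resolution
+N_phases = 2                               # placeholder number of material phases
+phv_GPU = @zeros(nx+1, ny+1, nz+1, celldims = (N_phases,))
+```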
+
+3) *At the memory allocation stage*. Similarly, CPU copies of the GPU arrays needed for the particle operations (e.g. the velocity field used for advection) must be allocated:
+```julia
+V_CPU = (
+    x = zeros(nx+1, ny+2, nz+2),
+    y = zeros(nx+2, ny+1, nz+2),
+    z = zeros(nx+2, ny+2, nz+1),
+)
+```
+where `zeros()` allocates in CPU memory.
+
+4) *At each time step*. The particles are stored in CPU memory, hence some information needs to be transferred from the CPU to the GPU memory. For example, here is how to transfer the phase proportions:
+
+```julia
+phv_GPU.data .= CuArray(phase_ratios.vertex).data
+```
+
+!!! note
+    The GPU array type `CuArray` (from CUDA.jl) is written explicitly here; when running on another GPU backend, the corresponding device array type has to be used instead.
+
+5) *At each time step*. Once the velocity computations are finalised on the GPU, the velocities need to be transferred to the CPU:
+
+```julia
+V_CPU.x .= TA(backend)(V.x)
+V_CPU.y .= TA(backend)(V.y)
+V_CPU.z .= TA(backend)(V.z)
+```
+Advection can then be applied on the CPU by calling the `advection!()` function:
+
+```julia
+advection!(particles, RungeKutta2(), values(V_CPU), (grid_vx, grid_vy, grid_vz), Δt)
+```
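+
+Putting steps 4) and 5) together, a single iteration of the time loop could be sketched as follows. This is only an illustrative sketch: the GPU-side computations are represented by a placeholder comment, and `nt` denotes an assumed number of time steps; all other variables are the ones defined in the steps above.
+
+```julia
+for it in 1:nt
+    # 4) transfer the phase proportions from the CPU particles to the GPU copy
+    phv_GPU.data .= CuArray(phase_ratios.vertex).data
+
+    # ... GPU-side computations (e.g. Stokes solve) updating the velocity field V ...
+
+    # 5) transfer the GPU velocities to their CPU copies
+    V_CPU.x .= TA(backend)(V.x)
+    V_CPU.y .= TA(backend)(V.y)
+    V_CPU.z .= TA(backend)(V.z)
+
+    # advect the particles on the CPU
+    advection!(particles, RungeKutta2(), values(V_CPU), (grid_vx, grid_vy, grid_vz), Δt)
+end
+```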