
S-Explorer/IAMR

A parallel, adaptive mesh refinement (AMR) code that solves the variable-density incompressible Navier-Stokes equations.

Overview

This repo builds upon the IAMR code to solve multiphase incompressible flows. The Navier-Stokes equations are solved on a semi-staggered grid using the projection method. The gas-liquid interface is captured using either the level set (LS) method or the conservative level set (CLS) method. The fluid-solid interface is resolved using the diffused immersed boundary method (DIBM). Particle-wall and particle-particle collisions are handled by the adaptive collision time model (ACTM). The code aims to simulate multiphase flow and fluid-structure interaction (FSI) problems on both CPUs and GPUs, with or without subcycling.
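
For orientation, the solver advances the variable-density incompressible Navier-Stokes equations together with a level-set advection equation for the interface. A schematic form of this system is given below; the symbols are illustrative rather than the code's variable names, and the exact discretized formulation is described in the IAMR documentation.

$$
\frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla \cdot (\rho \mathbf{u}\mathbf{u}) = -\nabla p + \nabla \cdot \left[ \mu \left( \nabla \mathbf{u} + \nabla \mathbf{u}^{T} \right) \right] + \rho \mathbf{g} + \mathbf{f}_{\sigma} + \mathbf{f}_{IB}, \qquad \nabla \cdot \mathbf{u} = 0,
$$

$$
\frac{\partial \phi}{\partial t} + \mathbf{u} \cdot \nabla \phi = 0,
$$

where $\rho$ and $\mu$ are the local density and viscosity (functions of the level-set field $\phi$), $\mathbf{f}_{\sigma}$ is the surface-tension force, and $\mathbf{f}_{IB}$ is the immersed-boundary forcing from the DIBM.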

Features

  • LS method and reinitialization schemes
  • Diffused Immersed Boundary Method
  • Particle Collision Algorithms

Examples

Figure: Profiles of the drop interface in the RSV problem at t/T = 1 after one rotation. Black line: analytical solution; red line: 64×64; blue line: 128×128; green line: 256×256.

Figure: (a) Density profile at t/T = 2.42 using the LS method. (b) Density profile at t/T = 2.42 using the IAMR convective scheme.

Figure: Comparison of the tip locations of the falling fluid and the rising fluid.

Figure: Cluster of monodisperse particles. Contours of velocity magnitude in the y-z plane.

Install

Download

Our code relies on the AMReX framework, so you must first download the following repositories:

  1. AMReX: git clone https://github.com/ruohai0925/amrex
  2. AMReX-Hydro: git clone https://github.com/ruohai0925/AMReX-Hydro
  3. this repo (development branch): git clone --branch development https://github.com/ruohai0925/IAMR

After that, you will find three folders in your current directory: AMReX, AMReX-Hydro, and IAMR.
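
Put together, a minimal download sequence is sketched below. Exporting AMREX_HOME and AMREX_HYDRO_HOME is an assumption made here for locating the dependencies; the GNUmakefile of your tutorial case may already point to them through relative paths.

# clone the framework, the advection routines, and this repo side by side
git clone https://github.com/ruohai0925/amrex
git clone https://github.com/ruohai0925/AMReX-Hydro
git clone --branch development https://github.com/ruohai0925/IAMR

# assumption: tell the build system where the dependencies live,
# unless your GNUmakefile already uses relative paths
export AMREX_HOME=$PWD/amrex
export AMREX_HYDRO_HOME=$PWD/AMReX-Hydro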

Compile

We recommend using the GNU compiler to build the program on Linux. The build requires a GNUmakefile, which you can find in each example folder under Tutorials; it is strongly recommended to start from the GNUmakefile prepared for the example. For more details on the available compilation parameters, see the AMReX build documentation.

For example, to compile the FlowPastSphere case, follow these steps:

  1. cd to the FlowPastSphere directory

    cd IAMR/Tutorials/FlowPastSphere
  2. Modify compilation parameters in GNUmakefile.

    The compilation parameters depend on your computing platform. If you run the program with MPI, set USE_MPI = TRUE. If you run on an Nvidia GPU (CUDA), set USE_CUDA = TRUE; in that case, make sure your CUDA environment is set up correctly. The remaining parameters in the file are documented in the AMReX build options.

  3. Compile

    After preparing the above settings, you can compile the program:

    make

    You can add options after make, such as -f your_make_file to use a custom makefile, or -j 8 to compile with 8 parallel jobs.

    If the compilation succeeds, an executable is generated. It is usually named in the format amr[DIM].[compiler].[platform].[debug].ex. For example, a three-dimensional release build compiled with the GNU compiler in an MPI environment is named amr3d.GNU.MPI.ex. A consolidated build example is sketched below.
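
Putting the steps together, a typical MPI build of the FlowPastSphere case might look like the following sketch; the GNUmakefile variable names follow the standard AMReX build options, and the final executable name depends on your settings.

cd IAMR/Tutorials/FlowPastSphere
# edit GNUmakefile first, e.g. DIM = 3, COMP = gnu, USE_MPI = TRUE, USE_CUDA = FALSE, DEBUG = FALSE
make -j 8
ls amr3d.GNU.MPI.ex    # confirm that the executable was produced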

Usage

The executable takes an input file as its first command-line argument. The file contains a set of parameter definitions that override the defaults set in the code. In the FlowPastSphere case you can find a file named inputs.3d.flow_past_spher; run:

./amr3d.GNU.MPI.ex inputs.3d.flow_past_spher

If you use MPI to run your program, you can type:

mpirun -np n_procs ./amr3d.GNU.MPI.ex inputs.3d.flow_past_spher

The code typically generates subfolders in the current directory named plt00000, plt00010, etc., and chk00000, chk00010, etc. These are plotfiles and checkpoint files, respectively. The plotfiles are used for visualization of derived fields and can be opened with standard AMReX-compatible tools (e.g., Amrvis, ParaView, VisIt, or yt); the checkpoint files are used for restarting the code.
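
Because the input file is read through AMReX's ParmParse, individual parameters can also be overridden on the command line. A minimal sketch of restarting from a checkpoint is shown below; amr.restart is the standard AMReX parameter name and is assumed here, so check your inputs file for the keys this code actually uses.

# restart from checkpoint chk00010 instead of starting from scratch
mpirun -np 4 ./amr3d.GNU.MPI.ex inputs.3d.flow_past_spher amr.restart=chk00010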

Tools

In the Tools folder, there are some post-processing scripts for the DIBM. The postProcess submodule is used to process the generated particle data files; its specific usage is described in its own README.

In addition, a Fortran code for generating a particle bed is provided; you can customize the domain size and the particle size as needed.

! Lines 10 - 15
! domain size
LX_DOMAIN=3.0D0*Pi
LY_DOMAIN=6.0D0*Pi
LZ_DOMAIN=3.0D0*Pi
! particle diameter D
SPD=1.D0

Compile and run the code; a file named SPC.DAT will be generated, in which each line represents the position of one particle.

gfortran -o fixBed DomainFill.F90
./fixBed
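
Since each line of SPC.DAT corresponds to one particle, a quick way to check how many particles were generated is:

wc -l SPC.DAT    # one line per particle, so this prints the particle count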

Acknowledgements

We are grateful to Ann Almgren, Andy Nonaka, Andrew Myers, Axel Huebl, and Weiqun Zhang at Lawrence Berkeley National Laboratory (LBNL) for their discussions related to AMReX and IAMR. Y.Z. and Z.Z. also thank Prof. Lian Shen, Prof. Ruifeng Hu, and Prof. Xiaojing Zheng for their guidance during their Ph.D. studies.

Contact

If you have any questions or would like to contribute to the code, please don't hesitate to contact us.
