Developers
CMake is our primary build system. If you are new to CMake, this short tutorial from the HEP Software Foundation is the perfect place to get started. If you just want to use CMake to build the project, jump into sections 1. Introduction, 2. Building with CMake, and 9. Finding Packages.
Dependencies
Before you start, you will need a copy of the WarpX source code:
git clone https://github.com/ECP-WarpX/WarpX.git $HOME/src/warpx
cd $HOME/src/warpx
WarpX depends on popular third-party software.
On your development machine, follow the instructions here.
If you are on an HPC machine, follow the instructions here.
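For example, on a Debian/Ubuntu development machine, the core build requirements can typically be installed with the system package manager. The package names below are common distribution defaults, not taken from the WarpX instructions, and HDF5 is only needed for openPMD I/O:

```shell
# Debian/Ubuntu example (package names are illustrative defaults)
sudo apt update
sudo apt install build-essential cmake git libopenmpi-dev libhdf5-openmpi-dev
```
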
Compile
From the base of the WarpX source directory, execute:
# find dependencies & configure
# see additional options below, e.g.
# -DCMAKE_INSTALL_PREFIX=$HOME/sw/warpx
cmake -S . -B build
# compile, here we use four threads
cmake --build build -j 4
That’s it!
A 3D WarpX binary is now in build/bin/
and can be run with a 3D example inputs file.
Most people execute the binary directly or copy it out.
If you want to install the executables in a programmatic way, run this:
# for default install paths, you will need administrator rights, e.g. with sudo:
cmake --build build --target install
You can inspect and modify build options after running cmake -S . -B build, either with
ccmake build
or by adding arguments with -D<OPTION>=<VALUE> to the first CMake call.
For example, this builds WarpX in 2D geometry and enables Nvidia GPU (CUDA) support:
cmake -S . -B build -DWarpX_DIMS=2 -DWarpX_COMPUTE=CUDA
Build Options
CMake Option | Default & Values | Description
---|---|---
CMAKE_BUILD_TYPE | RelWithDebInfo/Release/Debug | Type of build, symbols & optimizations
CMAKE_INSTALL_PREFIX | system-dependent path | Install path prefix
CMAKE_VERBOSE_MAKEFILE | ON/OFF | Print all compiler commands to the terminal during build
PYINSTALLOPTIONS | | Additional options for pip install
WarpX_APP | ON/OFF | Build the WarpX executable application
WarpX_ASCENT | ON/OFF | Ascent in situ visualization
WarpX_COMPUTE | NOACC/OMP/CUDA/SYCL/HIP | On-node, accelerated computing backend
WarpX_DIMS | 3/2/1/RZ | Simulation dimensionality
WarpX_EB | ON/OFF | Embedded boundary support (not supported in RZ yet)
WarpX_GPUCLOCK | ON/OFF | Add GPU kernel timers (cost function, +4 registers/kernel)
WarpX_IPO | ON/OFF | Compile WarpX with interprocedural optimization (aka LTO)
WarpX_LIB | ON/OFF | Build WarpX as a shared library, e.g., for PICMI Python
WarpX_MPI | ON/OFF | Multi-node support (message-passing)
WarpX_MPI_THREAD_MULTIPLE | ON/OFF | MPI thread-multiple support, i.e. for asynchronous I/O
WarpX_OPENPMD | ON/OFF | openPMD I/O (HDF5, ADIOS)
WarpX_PRECISION | SINGLE/DOUBLE | Floating point precision (single/double)
WarpX_PARTICLE_PRECISION | SINGLE/DOUBLE | Particle floating point precision (single/double), defaults to WarpX_PRECISION value if not set
WarpX_PSATD | ON/OFF | Spectral solver
WarpX_QED | ON/OFF | QED support (requires PICSAR)
WarpX_QED_TABLE_GEN | ON/OFF | QED table generation support (requires PICSAR and Boost)
WarpX_SENSEI | ON/OFF | SENSEI in situ visualization
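Several of these options can be combined in one configure call. A sketch, assuming the spectral-solver and precision options are named WarpX_PSATD and WarpX_PRECISION (matching the WARPX_* environment variables used for the pip build below):

```shell
# example: RZ geometry, spectral solver, single precision
cmake -S . -B build -DWarpX_DIMS=RZ -DWarpX_PSATD=ON -DWarpX_PRECISION=SINGLE
cmake --build build -j 4
```
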
WarpX can be configured in further detail with options from AMReX, which are documented in the AMReX manual.
Developers might be interested in additional options that control dependencies of WarpX. By default, the most important dependencies of WarpX are automatically downloaded for convenience:
CMake Option | Default & Values | Description
---|---|---
BUILD_SHARED_LIBS | ON/OFF | Build shared libraries for dependencies
CCACHE_PROGRAM | First found | Set to NO to disable CCache
AMReX_CUDA_PTX_VERBOSE | ON/OFF | Print CUDA code generation statistics from ptxas
WarpX_amrex_src | None | Path to AMReX source directory (preferred if set)
WarpX_amrex_repo | https://github.com/AMReX-Codes/amrex.git | Repository URI to pull and build AMReX from
WarpX_amrex_branch | we set and maintain a compatible commit | Repository branch for WarpX_amrex_repo
WarpX_amrex_internal | ON/OFF | Needs a pre-installed AMReX library if set to OFF
WarpX_openpmd_src | None | Path to openPMD-api source directory (preferred if set)
WarpX_openpmd_repo | https://github.com/openPMD/openPMD-api.git | Repository URI to pull and build openPMD-api from
WarpX_openpmd_branch | a compatible release tag | Repository branch for WarpX_openpmd_repo
WarpX_openpmd_internal | ON/OFF | Needs a pre-installed openPMD-api library if set to OFF
WarpX_picsar_src | None | Path to PICSAR source directory (preferred if set)
WarpX_picsar_repo | https://github.com/ECP-WarpX/picsar.git | Repository URI to pull and build PICSAR from
WarpX_picsar_branch | we set and maintain a compatible commit | Repository branch for WarpX_picsar_repo
WarpX_picsar_internal | ON/OFF | Needs a pre-installed PICSAR library if set to OFF
For example, one can also build against a local AMReX copy. Assuming AMReX's source is located in $HOME/src/amrex, add the cmake argument -DWarpX_amrex_src=$HOME/src/amrex. Relative paths are also supported, e.g. -DWarpX_amrex_src=../amrex.
One can also build against a colleague's AMReX feature branch. Assuming your colleague pushed AMReX to https://github.com/WeiqunZhang/amrex/ in a branch new-feature, pass to cmake the arguments: -DWarpX_amrex_repo=https://github.com/WeiqunZhang/amrex.git -DWarpX_amrex_branch=new-feature.
You can speed up the install further if you pre-install these dependencies, e.g. with a package manager.
Set -DWarpX_<dependency-name>_internal=OFF
and add the installation prefix of the dependency to the environment variable CMAKE_PREFIX_PATH.
Please see the introduction to CMake if this sounds new to you.
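A minimal sketch of this workflow, assuming openPMD-api was pre-installed to $HOME/sw/openpmd-api (the path is illustrative):

```shell
# let CMake find the pre-installed dependency, then disable the internal copy
export CMAKE_PREFIX_PATH=$HOME/sw/openpmd-api:$CMAKE_PREFIX_PATH
cmake -S . -B build -DWarpX_openpmd_internal=OFF
```
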
If you re-compile often, consider installing the Ninja build system.
Pass -G Ninja
to the CMake configuration call to speed up parallel compiles.
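For example:

```shell
# configure with the Ninja generator instead of the default Makefiles
cmake -S . -B build -G Ninja
cmake --build build -j 4
```
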
Configure your compiler
If you don’t want to use your default compiler, you can set the following environment variables. For example, to use Clang/LLVM:
export CC=$(which clang)
export CXX=$(which clang++)
If you also want to select a CUDA compiler:
export CUDACXX=$(which nvcc)
export CUDAHOSTCXX=$(which clang++)
We also support passing additional compiler flags via environment variables such as CXXFLAGS/LDFLAGS:
# example: treat all compiler warnings as errors
export CXXFLAGS="-Werror"
Note
Please clean your build directory with rm -rf build/
after changing the compiler.
Now call cmake -S . -B build
(+ further options) again to re-initialize the build configuration.
Run
An executable WarpX binary with the current compile-time options encoded in its file name will be created in build/bin/
.
Note that you need separate binaries to run 1D, 2D, 3D, and RZ geometry inputs scripts.
Additionally, a symbolic link named warpx
can be found in that directory, which points to the last built WarpX executable.
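For example, using the symbolic link and a hypothetical inputs file name:

```shell
# serial run; under MPI one would prepend e.g. "mpirun -np 4"
./build/bin/warpx inputs_3d
```
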
More details on running simulations are in the section Run WarpX. Alternatively, read on and also build our PICMI Python interface.
PICMI Python Bindings
Note
Preparation: make sure you work with up-to-date Python tooling.
python3 -m pip install -U pip setuptools wheel
python3 -m pip install -U cmake
For PICMI Python bindings, configure WarpX to produce a library and call our pip_install
CMake target:
# find dependencies & configure
cmake -S . -B build -DWarpX_LIB=ON
# build and then call "python3 -m pip install ..."
cmake --build build --target pip_install -j 4
That’s it! You can now run a first 3D PICMI script from our examples.
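For instance, a PICMI run is started with the Python interpreter (the script name below is hypothetical; use one of the PICMI example scripts shipped with WarpX):

```shell
# run a PICMI script with the freshly installed pywarpx package
python3 PICMI_inputs_3d.py
```
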
Developers could now change the WarpX source code and then call the build line again to refresh the Python installation.
Note
These commands build one -DWarpX_DIMS=...
dimensionality (default: 3
) at a time.
If your build/lib*/
directory contains previously built libwarpx*
libraries, then --target pip_install
picks them up as well.
A new call to cmake --build build ...
will only rebuild one dimensionality, as set via WarpX_DIMS
.
If you want to build a WarpX Python package that supports all dimensionalities, you can run this:
for d in 1 2 3 RZ; do
cmake -S . -B build -DWarpX_DIMS=$d -DWarpX_LIB=ON
cmake --build build -j 4
done
cmake --build build --target pip_install
Tip
If you do not develop with a user-level package manager, e.g., because you rely on an HPC system’s environment modules, then consider setting up a virtual environment via Python venv.
Otherwise, without a virtual environment, you likely need to add the CMake option -DPYINSTALLOPTIONS="--user"
.
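A minimal venv setup might look like this (the path is just an example):

```shell
# create and activate a virtual environment for the PICMI install
python3 -m venv "$HOME/sw/venvs/warpx"
. "$HOME/sw/venvs/warpx/bin/activate"
# inside the venv, "python3" and "pip" now refer to the environment's own copies
command -v python3
```
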
Python Bindings (Package Management)
This section is relevant for Python package management, mainly for maintainers or people who prefer to interact only with pip.
One can build and install pywarpx
from the root of the WarpX source tree:
python3 -m pip wheel -v .
python3 -m pip install pywarpx-*.whl
This will call the CMake logic above implicitly.
Using this workflow has the advantage that it can build and package up multiple libraries with varying WarpX_DIMS
into one pywarpx
package.
Environment variables can be used to control the build step:
Environment Variable | Default & Values | Description
---|---|---
WARPX_COMPUTE | NOACC/OMP/CUDA/SYCL/HIP | On-node, accelerated computing backend
WARPX_DIMS | | Simulation dimensionalities (semicolon-separated list)
WARPX_EB | ON/OFF | Embedded boundary support (not supported in RZ yet)
WARPX_MPI | ON/OFF | Multi-node support (message-passing)
WARPX_OPENPMD | ON/OFF | openPMD I/O (HDF5, ADIOS)
WARPX_PRECISION | SINGLE/DOUBLE | Floating point precision (single/double)
WARPX_PARTICLE_PRECISION | SINGLE/DOUBLE | Particle floating point precision (single/double), defaults to WARPX_PRECISION value if not set
WARPX_PSATD | ON/OFF | Spectral solver
WARPX_QED | ON/OFF | PICSAR QED (requires PICSAR)
WARPX_QED_TABLE_GEN | ON/OFF | QED table generation (requires PICSAR and Boost)
BUILD_PARALLEL | | Number of threads to use for parallel builds
BUILD_SHARED_LIBS | ON/OFF | Build shared libraries for dependencies
HDF5_USE_STATIC_LIBRARIES | ON/OFF | Prefer static libraries for HDF5 dependency (openPMD)
ADIOS_USE_STATIC_LIBS | ON/OFF | Prefer static libraries for ADIOS1 dependency (openPMD)
WARPX_AMREX_SRC | None | Absolute path to AMReX source directory (preferred if set)
WARPX_AMREX_REPO | None (uses cmake default) | Repository URI to pull and build AMReX from
WARPX_AMREX_BRANCH | None (uses cmake default) | Repository branch for WARPX_AMREX_REPO
WARPX_AMREX_INTERNAL | ON/OFF | Needs a pre-installed AMReX library if set to OFF
WARPX_OPENPMD_SRC | None | Absolute path to openPMD-api source directory (preferred if set)
WARPX_OPENPMD_INTERNAL | ON/OFF | Needs a pre-installed openPMD-api library if set to OFF
WARPX_PICSAR_SRC | None | Absolute path to PICSAR source directory (preferred if set)
WARPX_PICSAR_INTERNAL | ON/OFF | Needs a pre-installed PICSAR library if set to OFF
CCACHE_PROGRAM | First found | Set to NO to disable CCache
PYWARPX_LIB_DIR | None | If set, search for pre-built WarpX C++ libraries (see below)
Note that we intentionally change the WARPX_MPI default to OFF, to simplify a first install from source.
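For example, to opt back into MPI support for a source install, set the WARPX_MPI variable from the table above:

```shell
# build pywarpx with multi-node (MPI) support enabled
WARPX_MPI=ON python3 -m pip install -v .
```
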
Some hints and workflows follow.
Developers who want to test a change of the source code but did not change the pywarpx
version number can force a reinstall via:
python3 -m pip install --force-reinstall --no-deps -v .
Some developers like to code directly against a local copy of AMReX, changing both code bases at the same time:
WARPX_AMREX_SRC=$PWD/../amrex python3 -m pip install --force-reinstall --no-deps -v .
Additional environment controls common for CMake (see above) can be set as well, e.g. CC, CXX, and CMAKE_PREFIX_PATH hints.
Another sophisticated example might be: use Clang as the compiler, build with local source copies of PICSAR and AMReX, support the PSATD solver, MPI, and openPMD, hint a parallel HDF5 installation in $HOME/sw/hdf5-parallel-1.10.4, and only build 3D geometry:
CC=$(which clang) CXX=$(which clang++) WARPX_AMREX_SRC=$PWD/../amrex WARPX_PICSAR_SRC=$PWD/../picsar WARPX_PSATD=ON WARPX_MPI=ON WARPX_DIMS=3 CMAKE_PREFIX_PATH=$HOME/sw/hdf5-parallel-1.10.4:$CMAKE_PREFIX_PATH python3 -m pip install --force-reinstall --no-deps -v .
Here we wrote this all in one line, but one can also set all environment variables in a development environment and keep the pip call nice and short as in the beginning.
Note that you need to use absolute paths for external source trees, because pip builds in a temporary directory, e.g. export WARPX_AMREX_SRC=$HOME/src/amrex
.
The Python library pywarpx
can also be created by pre-building WarpX into one or more shared libraries externally.
For example, a package manager might split WarpX into a C++ package and a Python package.
If the C++ libraries are already pre-compiled, we can pick them up in the Python build step instead of compiling them again:
# build WarpX executables and libraries
for d in 1 2 3 RZ; do
cmake -S . -B build -DWarpX_DIMS=$d -DWarpX_LIB=ON
cmake --build build -j 4
done
# Python package
PYWARPX_LIB_DIR=$PWD/build/lib python3 -m pip wheel .
# install
python3 -m pip install pywarpx-*.whl
WarpX release managers might also want to generate a self-contained source package that can be distributed to exotic architectures:
python3 setup.py sdist --dist-dir .
python3 -m pip wheel -v pywarpx-*.tar.gz
python3 -m pip install pywarpx-*.whl
The above steps can also be executed in one go to build from source on a machine:
python3 setup.py sdist --dist-dir .
python3 -m pip install -v pywarpx-*.tar.gz
Last but not least, you can uninstall pywarpx
as usual with:
python3 -m pip uninstall pywarpx