Installation and Configuration
Installation
DanceQ is a header-only library and does not need to be installed. You can simply clone the repository and start:
git clone https://gitlab.com/DanceQ/danceq.git
Docker
Compiling C++ code and configuring all dependencies can be painful - we know this! An easy option where you do not need to fight with compiler versions and correct paths is provided by Docker. We have therefore created multiple images that contain all necessary libraries, binaries, and the correct paths to compile and run the code, examples, and unit tests. The images are stored in our gitlab registry:
danceq: based on gcc image for linux (3.5 GiB)
danceq/ios: based on gcc image for macOS with an Apple M1 (ARM) processor (3.5 GiB)
danceq/debian: based on gcc image for linux (3.5 GiB)
danceq/ubuntu-with-mkl: based on intel-oneapi image for linux (14.5 GiB)
Note
Please contact us if you are missing an image for your platform.
To get started, you need to make sure that docker is installed and running. You can start the docker service using
sudo systemctl start docker.service
and, in case you want the service to start automatically at boot,
sudo systemctl enable docker.service
Now, you can use the gitlab container registry and pull our docker image (e.g. for macOS with an ARM-based M1 processor):
sudo docker pull registry.gitlab.com/danceq/danceq/ios
and finally execute the container, which provides you with a shell in the correct directory:
sudo docker run -it registry.gitlab.com/danceq/danceq/ios
In this shell, you can, for example, build and run the Petsc real-time evolution example:
mkdir ${DANCEQ_DIR}/examples/SparseMatrix_petsc_Hamiltonian_real_time_evolution/build
cd ${DANCEQ_DIR}/examples/SparseMatrix_petsc_Hamiltonian_real_time_evolution/build
cmake ..
make
mpirun -n 2 ./main -L 20 -n 10
That’s it! In this docker container, we have also packaged all dependencies, so that you can play with all aspects of the library and examples, build the documentation and run the unit tests. You may even want to build and run your own production code in this container: we ship it with gcc.
Note
In case you want to set up the package on your own system without using a docker container, we describe the necessary steps below. The CMake configuration and dependency paths are already set correctly inside the container; otherwise, they need to reflect the location of the libraries on your system.
CMake
We use CMake to build our code.
The main configuration file, config/CMakeConfig.cmake, is located in the repository.
To export the correct paths on your system, you can use the sample script config/sample.cfg:
################################################################
## DanceQ repository on your system
export DANCEQ_DIR="/path/to/DanceQ"
## g++ for linux and clang++ for ios
export COMPILER="g++"
################################################################
################################################################
## Dense matrix operations can be done using Eigen
## -> https://eigen.tuxfamily.org/index.php?title=Main_Page
export EIGEN_DIR="/path/to/eigen"
################################################################
################################################################
## It is possible to combine Eigen with advanced LAPACK libraries such as Intel's MKL
## -> https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl.html
export MKL_DIR="/path/to/mkl/"
################################################################
################################################################
## Sparse matrix operators can be done using Petsc and Slepc that are based on MPI
## -> https://petsc.org/release/
## -> https://slepc.upv.es/
export PETSC_DIR="/path/to/petsc"
export SLEPC_DIR="/path/to/slepc"
export PETSC_ARCH_COMPLEX="petsc_complex"
export PETSC_ARCH_REAL="petsc_real"
################################################################
################################################################
## This needs to be the same MPI as used for Petsc and Slepc
## If you have downloaded MPI with petsc, you can use export MPI_DIR=$PETSC_DIR/$PETSC_ARCH
## Note that $PETSC_ARCH can be either "petsc_complex" or "petsc_real" -> It has to match your application!
export MPI_DIR="/path/to/mpi"
export MPI_COMPILER="mpic++"
## MPI programs have to be executed with mpirun located in $MPI_DIR/bin
export PATH=$MPI_DIR/bin:$PATH
################################################################
Note
You have to edit the configuration file and then source it: source config/sample.cfg
While the core of the program can simply be included as plain C++ files, we provide several extensions for state-of-the-art linear algebra packages: Eigen, Petsc, and Slepc. To use them, you have to tell your system where to find the correct libraries. A step-by-step guide is given below.
OpenMP
OpenMP makes it possible to parallelize the code using multiple threads on a single computing node. This is a shared-memory approach where each thread has full access to the memory.
If installed (which is usually the case on most operating systems), it can be enabled with cmake -DOMP=ON ..
using our CMake files.
Before executing the code, the number of threads has to be set via (4 in this case):
export OMP_NUM_THREADS=4
Shared memory is usually required when you are handling dense matrices, as in this example.
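As an illustration of the shared-memory model (a generic sketch, not DanceQ-specific), the following program parallelizes a dense matrix-vector product with OpenMP; all threads read the same matrix and write disjoint entries of the result. Compile it with -fopenmp:
#include <omp.h>
#include <vector>
#include <cstdio>

int main() {
    const int N = 1000;
    std::vector<double> A(N * N, 1.0), x(N, 1.0), y(N, 0.0);

    /* Each thread handles a subset of rows; A, x, and y live in shared memory */
    #pragma omp parallel for
    for (int i = 0; i < N; ++i) {
        double sum = 0.0;
        for (int j = 0; j < N; ++j) {
            sum += A[i * N + j] * x[j];
        }
        y[i] = sum;
    }

    std::printf("y[0] = %f (maximum number of threads: %d)\n", y[0], omp_get_max_threads());
    return 0;
}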
Eigen
Similarly to DanceQ, Eigen is a header-only package with LAPACK routines. It provides its own implementations but can also act as a wrapper for other libraries like Intel's MKL. The operator in matrix form is simply obtained via create_EigenDenseMatrix():
/* Returns operator as a dense Eigen matrix with the correct ScalarType */
uint64_t L {16};
uint64_t n {8};
danceq::Operator op(L,n); // creates an empty "Operator"
op.add_operator(1.0, {1,2}, {"S+", "S-"}); // adds S^+_1 S^-_2 to the operator
op.add_operator(1.0, {1,2}, {"S-", "S+"}); // adds S^-_1 S^+_2 to the operator
auto matrix = op.create_EigenDenseMatrix();
The function is only available if EIGEN_DIR is set correctly on your system:
git clone https://gitlab.com/libeigen/eigen.git
export EIGEN_DIR=$(pwd)/eigen
The first command downloads the Eigen repository and the second exports the path that tells CMake where to find it. Now, you can use the Eigen extension that DanceQ provides.
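Once the matrix has been created, all of Eigen's dense routines are available for it. As a minimal sketch that continues the snippet above (assuming the returned object is a standard Eigen dense matrix, e.g. Eigen::MatrixXcd for a complex ScalarType), the spectrum of the Hermitian operator can be computed with Eigen's SelfAdjointEigenSolver:
#include <Eigen/Dense>
#include <iostream>

/* `matrix` is the (assumed) Eigen::MatrixXcd returned by op.create_EigenDenseMatrix() above */
Eigen::SelfAdjointEigenSolver<Eigen::MatrixXcd> solver(matrix);
std::cout << "ground-state energy: " << solver.eigenvalues()(0) << std::endl;
std::cout << "largest eigenvalue:  " << solver.eigenvalues()(solver.eigenvalues().size() - 1) << std::endl;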
Further, Eigen can be used as a wrapper for other LAPACK libraries like MKL.
To utilize it, you have to set the path MKL_DIR on your system: export MKL_DIR="/path/to/mkl/".
Download and installation instructions are available from Intel (see the oneMKL link in the sample configuration above).
MKL can be enabled in the Eigen example with cmake -DMKL=ON ..
The number of threads, e.g. four, is set by:
export OMP_NUM_THREADS=4
export MKL_NUM_THREADS=4
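For illustration, Eigen's MKL backend is controlled by a compile-time macro; the stand-alone sketch below uses Eigen's own EIGEN_USE_MKL_ALL switch and is independent of what cmake -DMKL=ON sets up internally for DanceQ. The binary additionally has to be linked against MKL:
/* Defining EIGEN_USE_MKL_ALL before including Eigen routes its dense kernels through MKL */
#define EIGEN_USE_MKL_ALL
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(2048, 2048);
    Eigen::MatrixXd B = Eigen::MatrixXd::Random(2048, 2048);
    Eigen::MatrixXd C = A * B; // the matrix product is dispatched to MKL's threaded BLAS
    std::cout << C.norm() << std::endl;
    return 0;
}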
MPI
In contrast to OpenMP, MPI provides the possibility to parallelize the code using distributed memory. In this case, each process is in charge of its own data and cannot access the data of other processes. This approach is highly scalable: many nodes can work together in large computing facilities, accumulating up to \(100\) TB (and more) of memory and up to \(10\,000\) (and more) cores.
Different open-source versions such as MPICH are available and have to be installed on your system. Your local computing cluster usually provides an optimized installation, which should be used!
Note
The correct program provided by your version (mpirun or mpiexec) has to be used to execute the binary.
The number of processes can be set with (4 in this case):
mpirun -n 4 ./main ....
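To illustrate the distributed-memory model (again a generic sketch, not DanceQ-specific), the following program lets each process own its own local data and combines partial results with a collective reduction. Compile it with mpic++ and launch it with mpirun:
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* id of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    /* Each process only knows its own partial sum; no memory is shared */
    double local = static_cast<double>(rank + 1);
    double total = 0.0;
    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    std::printf("rank %d of %d: global sum = %f\n", rank, size, total);

    MPI_Finalize();
    return 0;
}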
Petsc and Slepc
Petsc and Slepc are MPI-based libraries with several linear algebra routines. Petsc provides state-of-the-art matrix operations with distributed memory, which are utilized by Slepc to execute different linear algebra tasks. This allows scalable computation with sparse matrices and vectors distributed over multiple nodes. Once the packages are installed, the Petsc objects can be obtained by create_PetscSparseMatrix() (sparse matrix) or create_PetscShellMatrix() (matrix-free shell):
/* Returns operator as a sparse matrix that is distributed over multiple nodes */
auto matrix = H.create_PetscSparseMatrix();
/* Returns operator as a matrix-free shell that calculates the elements on-the-fly */
auto matrix_shell = H.create_PetscShellMatrix();
Both objects can be used by Slepc to tackle various problems.
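Before turning to the installation, here is a minimal, hypothetical sketch of how such a matrix could be handed to Slepc. It assumes that the object returned by create_PetscSparseMatrix() or create_PetscShellMatrix() is (or exposes) a Petsc Mat, that SlepcInitialize() has already been called, and that the PetscCall error-checking macro of recent Petsc versions is available; it computes the smallest eigenvalue of a Hermitian operator:
#include <slepceps.h>

/* Hypothetical sketch: compute the smallest (real) eigenvalue of a Hermitian Petsc matrix A */
PetscErrorCode smallest_eigenvalue(Mat A) {
    EPS eps; /* Slepc eigenproblem solver context */
    PetscCall(EPSCreate(PETSC_COMM_WORLD, &eps));
    PetscCall(EPSSetOperators(eps, A, NULL));                 /* standard problem A x = lambda x */
    PetscCall(EPSSetProblemType(eps, EPS_HEP));               /* Hermitian eigenproblem */
    PetscCall(EPSSetWhichEigenpairs(eps, EPS_SMALLEST_REAL)); /* target the ground state */
    PetscCall(EPSSetFromOptions(eps));                        /* honor -eps_* command-line options */
    PetscCall(EPSSolve(eps));

    PetscInt nconv = 0;
    PetscCall(EPSGetConverged(eps, &nconv));
    if (nconv > 0) {
        PetscScalar lambda;
        PetscCall(EPSGetEigenvalue(eps, 0, &lambda, NULL));
        PetscCall(PetscPrintf(PETSC_COMM_WORLD, "E0 = %g\n", (double)PetscRealPart(lambda)));
    }

    PetscCall(EPSDestroy(&eps));
    return 0;
}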
As a first step, you have to download both repositories and set the paths:
# Petsc: https://petsc.org/release
git clone https://gitlab.com/petsc/petsc
cd petsc
git checkout release
export PETSC_DIR=$(pwd)
# Slepc: https://slepc.upv.es/
git clone https://gitlab.com/slepc/slepc
cd slepc
git checkout release
export SLEPC_DIR=$(pwd)
When you configure both libraries, you have to specify the primitive datatype such that it matches ScalarType in the Operator class.
In the following, we set up a version based on complex doubles with the name PETSC_ARCH_COMPLEX="petsc_complex":
cd $PETSC_DIR
export PETSC_ARCH_COMPLEX="petsc_complex"
export PETSC_ARCH=$PETSC_ARCH_COMPLEX
./configure --with-scalar-type=complex --with-fc=0 --download-f2cblaslapack --download-mpich --with-64-bit-indices
The command above configures a complex version of Petsc and downloads the current MPICH and f2cblaslapack versions.
This may take a few minutes…
Alternatively, if you already have MPI or a LAPACK library like MKL installed, you can include them with --with-blaslapack-dir=/path/to/lapack --with-mpi-dir=/path/to/mpi instead of the respective --download options.
Next, you have to build the complex version using make:
make PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH_COMPLEX all
make PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH_COMPLEX check
Slepc is built on top of the just-compiled Petsc version:
cd $SLEPC_DIR
./configure
make SLEPC_DIR=$SLEPC_DIR PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH all
make SLEPC_DIR=$SLEPC_DIR PETSC_DIR=$PETSC_DIR check
Warning
If you have downloaded MPICH with Petsc, you have to build your code with this MPI library:
export MPI_DIR=$PETSC_DIR/$PETSC_ARCH
This is particularly important if you have multiple MPI versions on your computer!
To execute the program correctly, you have to add the correct mpirun to your PATH:
export PATH=$MPI_DIR/bin:$PATH
Similarly to the complex version, you can set up a real version with:
# Petsc
cd $PETSC_DIR
export PETSC_ARCH_REAL="petsc_real"
export PETSC_ARCH=$PETSC_ARCH_REAL
./configure --with-scalar-type=real --with-fc=0 --download-f2cblaslapack --download-mpich --with-64-bit-indices
make PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH_REAL all
make PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH_REAL check
# Slepc
cd $SLEPC_DIR
./configure
make SLEPC_DIR=$SLEPC_DIR PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH all
make SLEPC_DIR=$SLEPC_DIR PETSC_DIR=$PETSC_DIR check
# MPI
export MPI_DIR=$PETSC_DIR/$PETSC_ARCH
export PATH=$MPI_DIR/bin:$PATH
Executing the commands from above exports all required paths to compile and run the parallelized MPI code with Petsc and Slepc. Now, you can use the matrix-free solver in the examples:
cd $DANCEQ_DIR/examples/petsc_matrix_free
mkdir build && cd build
cmake .. && make
mpirun -n 2 ./main -L 20 -n 10
Tip
By default, Petsc is built without optimization and in debugging mode. To gain better performance, you can use the following options:
OPTFLAGS="-O3"
./configure --with-scalar-type=real --download-fblaslapack --download-mpich --COPTFLAGS=$OPTFLAGS --CXXOPTFLAGS=$OPTFLAGS --FOPTFLAGS=$OPTFLAGS --with-debugging=0