.. _Installation:

##############################
Installation and Configuration
##############################

Installation
------------

:ref:`DanceQ` is a **header-only** library and does not need to be installed. You can simply clone the `repository `_ and start:

.. code-block:: console

   git clone https://gitlab.com/DanceQ/danceq.git

Docker
------

Compiling **C++** code and configuring all dependencies can be painful - we know this! An easy option where you do not need to fight with compiler versions and correct paths is provided by `Docker `_. Therefore, we have created multiple images that contain all necessary libraries, binaries, and the correct paths to compile and run the code, examples, and unit tests. The images are stored in our `gitlab registry `_:

* **danceq**: based on the `gcc image `_ for Linux (3.5 GiB)
* **danceq/ios**: based on the `gcc image `_ for ARM-based M1 processors (3.5 GiB)
* **danceq/debian**: based on the `gcc image `_ for Linux (3.5 GiB)
* **danceq/ubuntu-with-mkl**: based on `intel-oneapi `_ for Linux (14.5 GiB)

.. note::

   Please contact `us `_ if you are missing an image for your platform.

To get started, you need to make sure that Docker is installed and running. You can start the Docker service using

.. code-block:: console

   sudo systemctl start docker.service

and, if you want the service to start automatically at boot, additionally run

.. code-block:: console

   sudo systemctl enable docker.service

Now, you can use the `gitlab container registry `_ and download our Docker container (*e.g.*, the ``ios`` image for ARM-based M1 processors):

.. code-block:: console

   sudo docker pull registry.gitlab.com/danceq/danceq/ios

and finally execute the container, which provides you with a shell in the correct directory:

.. code-block:: console

   sudo docker run -it registry.gitlab.com/danceq/danceq/ios

In this shell, you can, for example, build and run one of the :ref:`examples`:

.. code-block:: console

   mkdir ${DANCEQ_DIR}/examples/SparseMatrix_petsc_Hamiltonian_real_time_evolution/build
   cd ${DANCEQ_DIR}/examples/SparseMatrix_petsc_Hamiltonian_real_time_evolution/build
   cmake ..
   make
   mpirun -n 2 ./main -L 20 -n 10

That's it! In this Docker container, we have also packaged all dependencies, so that you can play with all aspects of the library and examples, build the documentation, and run the unit tests. You may even want to build and run your own production code in this container: it ships with **gcc**.

.. note::

   In case you want to set up the package on your own system without using a Docker container, we describe the necessary steps below. The CMake configuration and all dependency paths are already set correctly inside the container; outside of it, they need to reflect the locations of the libraries on your system.

.. _Config_and_CMake:

CMake
-----

We use `CMake `_ to build our code. The main configuration file, :download:`config/CMakeConfig.cmake<../../config/CMakeConfig.cmake>`, is located in the `repository `_. To export the correct paths on **your** system, **you** can use the sample script :download:`config/sample.cfg<../../config/sample.cfg>`:

.. literalinclude:: ../../config/sample.cfg

.. note::

   **You** have to edit and execute the configuration file: ``source config/sample.cfg``

While the core of the program can simply be included as plain **cpp** files, we provide several :ref:`extensions` for *state-of-the-art* linear algebra packages: `Eigen `_, `Petsc `_, and `Slepc `_. To use them, you have to tell your computer where to find the correct libraries. A step-by-step guide is below.
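Since the core is header-only, a plain **C++** translation unit plus the CMake setup above is all that is needed. The following is only a minimal sketch for orientation: the include name ``danceq.h`` is a placeholder (use the actual header from your checkout), and the ``Operator`` calls are the ones from the :ref:`Eigen` example further down this page.

.. code-block:: cpp

   /* Minimal sketch of a user program built against the header-only core.
    * NOTE: "danceq.h" is a placeholder include; point it to the header(s)
    * shipped in your checkout of the repository. */
   #include "danceq.h"

   #include <cstdint>
   #include <iostream>

   int main() {
       uint64_t L {16}; /* number of sites */
       uint64_t n {8};  /* particle-number sector */

       danceq::Operator op(L, n);                  /* creates an empty "Operator" */
       op.add_operator(1.0, {1, 2}, {"S+", "S-"}); /* adds S^+_1 S^-_2 */
       op.add_operator(1.0, {1, 2}, {"S-", "S+"}); /* adds S^-_1 S^+_2 */

       std::cout << "Operator defined on " << L << " sites" << std::endl;
       return 0;
   }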
.. _openMP_docs:

openMP
------

`openMP `_ provides the possibility to parallelize the code using multiple threads on a single computing node. This is a shared-memory approach where each thread has full access to the memory. If installed (which is usually the case on most operating systems), it can be enabled with ``cmake -DOMP=ON ..`` using our :ref:`CMake files`. Before executing the code, the number of threads has to be set (4 in this case):

.. code-block:: console

   export OMP_NUM_THREADS=4

Shared memory is usually required when you are handling dense matrices, as in :ref:`this example`.

.. _Eigen_setup:

Eigen
-----

Similarly to :ref:`DanceQ`, `Eigen `_ is a **header-only** package with `LAPACK `_ routines. It provides its own implementations but can also act as a wrapper for other libraries like `Intel's MKL `_. The :ref:`operator` in matrix form can simply be obtained by :ref:`create_EigenDenseMatrix()`:

.. code-block:: cpp

   /* Returns the operator as a dense Eigen matrix with the correct ScalarType */
   uint64_t L {16};
   uint64_t n {8};
   danceq::Operator op(L,n); // creates an empty "Operator"
   op.add_operator(1.0, {1,2}, {"S+", "S-"}); // adds S^+_1 S^-_2 to the operator
   op.add_operator(1.0, {1,2}, {"S-", "S+"}); // adds S^-_1 S^+_2 to the operator
   auto matrix = op.create_EigenDenseMatrix();

The function is only available if ``EIGEN_DIR`` is set correctly on **your** system:

.. code-block:: console

   git clone https://gitlab.com/libeigen/eigen.git
   export EIGEN_DIR=$(pwd)/eigen

The first command downloads the `Eigen repository `_ and the second exports the path that tells :ref:`CMake` where to find it. Now, you can use the `Eigen `_ extension that :ref:`DanceQ` provides.

Further, :ref:`Eigen` can be used as a wrapper for other `LAPACK `_ libraries like `MKL `_. To utilize it, you have to set the path ``MKL_DIR`` on your system: ``export MKL_DIR="/path/to/mkl/"``. Download and installation instructions can be found `here `_. `MKL `_ can be enabled in the :ref:`Eigen example` with ``cmake -DMKL=ON ..``. Multiple cores, *e.g.*, four, are set by:

.. code-block:: console

   export OMP_NUM_THREADS=4
   export MKL_NUM_THREADS=4
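Since the dense matrix lives entirely in shared memory, it can be handed directly to Eigen's dense solvers. As a hedged sketch (assuming ``create_EigenDenseMatrix()`` returns a dense, self-adjoint Eigen matrix type), a full diagonalization continuing the snippet above could look like this:

.. code-block:: cpp

   #include <Eigen/Dense>
   #include <iostream>

   /* ... construct "op" as in the snippet above ... */
   auto matrix = op.create_EigenDenseMatrix();

   /* Full diagonalization with Eigen's dense LAPACK-style solver; for a
    * Hermitian operator the eigenvalues are real and sorted in ascending
    * order, so the first entry is the ground-state energy. */
   Eigen::SelfAdjointEigenSolver<decltype(matrix)> solver(matrix);
   std::cout << "Ground-state energy: " << solver.eigenvalues()(0) << std::endl;

This is exactly the kind of dense workload where the shared-memory parallelization via :ref:`openMP` and `MKL `_ pays off.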
.. _MPI_docs:

MPI
---

In contrast to :ref:`openMP`, `MPI `_ provides the possibility to parallelize the code using distributed memory. In this case, each process is in charge of its own data and can **not** access the data of other processes. This approach is highly scalable: many nodes can work together in large computing facilities, accumulating up to :math:`100` TB (and more) of memory and up to :math:`10\,000` (and more) cores. Different open-source implementations such as `MPICH `_ are available and have to be installed on your system. Your local computing cluster usually has an installed version which is optimized and should be used!

.. note::

   The correct program provided by your version (``mpirun`` or ``mpiexec``) has to be used to execute the binary. The number of processes can be set with (4 in this case):

   .. code-block:: console

      mpirun -n 4 ./main ....

.. _Petsc_setup:

Petsc and Slepc
---------------

`Petsc `_ and `Slepc `_ are `MPI-based `_ libraries with several linear algebra routines. `Petsc `_ provides *state-of-the-art* matrix operations with distributed memory which are utilized by `Slepc `_ to execute different linear algebra tasks. This allows scalable computations with sparse matrices and vectors distributed over multiple nodes.

Once the packages are installed, the `Petsc objects `_ can be obtained by :ref:`create_PetscSparseMatrix()` (sparse matrix) or :ref:`create_PetscShellMatrix()` (matrix-free shell):

.. code-block:: cpp

   /* Returns the operator as a sparse matrix that is distributed over multiple nodes */
   auto matrix = H.create_PetscSparseMatrix();

   /* Returns the operator as a matrix-free shell that calculates the elements on-the-fly */
   auto matrix_shell = H.create_PetscShellMatrix();

Both objects can be used by `Slepc `_ to attack various problems.

As a first step, **you** have to download both repositories and set the paths:

.. code-block:: console

   # Petsc: https://petsc.org/release
   git clone https://gitlab.com/petsc/petsc
   cd petsc
   git checkout release
   export PETSC_DIR=$(pwd)

   # Slepc: https://slepc.upv.es/
   git clone https://gitlab.com/slepc/slepc
   cd slepc
   git checkout release
   export SLEPC_DIR=$(pwd)

When you configure both libraries, you have to specify the primitive datatype such that it matches **ScalarType** in the :ref:`Operator class`. In the following, we set up a version based on **complex** doubles with the name ``PETSC_ARCH_COMPLEX="petsc_complex"``:

.. code-block:: console

   cd $PETSC_DIR
   export PETSC_ARCH_COMPLEX="petsc_complex"
   export PETSC_ARCH=$PETSC_ARCH_COMPLEX
   ./configure --with-scalar-type=complex --with-fc=0 --download-f2cblaslapack --download-mpich --with-64-bit-indices

The command above defines a **complex** version of `Petsc `_ and downloads the current `MPICH `_ and `fblaslapack `_ versions. This may take a few minutes... Alternatively, if you already have `MPI `_ or a `LAPACK `_ library like `MKL `_ installed, **you** can include them with ``--with-blaslapack-dir=/path/to/lapack --with-mpi-dir=/path/to/mpi`` instead of the respective ``--download`` options.

Next, you have to build the **complex** version using `make `_:

.. code-block:: console

   make PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH_COMPLEX all
   make PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH_COMPLEX check

`Slepc `_ is built on top of the just-compiled `Petsc `_ version:

.. code-block:: console

   cd $SLEPC_DIR
   ./configure
   make SLEPC_DIR=$SLEPC_DIR PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH all
   make SLEPC_DIR=$SLEPC_DIR PETSC_DIR=$PETSC_DIR check

.. warning::

   If you have downloaded `MPICH `_ with `Petsc `_, you have to build **your** code with this `MPI `_ library:

   .. code-block:: console

      export MPI_DIR=$PETSC_DIR/$PETSC_ARCH

   This is particularly important if **you** have multiple `MPI `_ versions on your computer! To execute the program correctly, **you** have to add the correct ``mpirun`` to **your** ``PATH``:

   .. code-block:: console

      export PATH=$MPI_DIR/bin:$PATH

Similarly to the **complex** version, you can set up a **real** version with:

.. code-block:: console

   # Petsc
   cd $PETSC_DIR
   export PETSC_ARCH_REAL="petsc_real"
   export PETSC_ARCH=$PETSC_ARCH_REAL
   ./configure --with-scalar-type=real --with-fc=0 --download-f2cblaslapack --download-mpich --with-64-bit-indices
   make PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH_REAL all
   make PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH_REAL check

   # Slepc
   cd $SLEPC_DIR
   ./configure
   make SLEPC_DIR=$SLEPC_DIR PETSC_DIR=$PETSC_DIR PETSC_ARCH=$PETSC_ARCH all
   make SLEPC_DIR=$SLEPC_DIR PETSC_DIR=$PETSC_DIR check

   # MPI
   export MPI_DIR=$PETSC_DIR/$PETSC_ARCH
   export PATH=$MPI_DIR/bin:$PATH

Executing the commands from above exports all required paths to compile and run the parallelized `MPI `_ code with `Petsc `_ and `Slepc `_. Now, **you** can use the **matrix-free** solver in the :ref:`examples`:

.. code-block:: console

   cd $DANCEQ_DIR/examples/petsc_matrix_free
   mkdir build && cd build
   cmake .. && make
   mpirun -n 2 ./main -L 20 -n 10
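For your own code, the shell matrix can be passed straight to Slepc's eigensolver interface. The following is a minimal sketch and not taken from the library's examples: it assumes that Slepc has already been initialized, that the :ref:`Operator class` instance ``H`` has been set up as in the snippets above, and that ``create_PetscShellMatrix()`` hands back a standard Petsc ``Mat``.

.. code-block:: cpp

   #include <slepceps.h>

   /* Assumption: "H" is a DanceQ Operator set up as shown above and the
    * returned shell object is a plain Petsc Mat. */
   auto A = H.create_PetscShellMatrix();

   EPS eps;                                        /* Slepc eigensolver context */
   EPSCreate(PETSC_COMM_WORLD, &eps);
   EPSSetOperators(eps, A, NULL);                  /* standard problem A x = lambda x */
   EPSSetProblemType(eps, EPS_HEP);                /* Hermitian problem */
   EPSSetWhichEigenpairs(eps, EPS_SMALLEST_REAL);  /* target the ground state */
   EPSSetFromOptions(eps);                         /* honor -eps_* command-line options */
   EPSSolve(eps);

   PetscScalar eig_real, eig_imag;
   EPSGetEigenvalue(eps, 0, &eig_real, &eig_imag); /* lowest eigenvalue */
   PetscPrintf(PETSC_COMM_WORLD, "Ground-state energy: %g\n",
               (double)PetscRealPart(eig_real));

   EPSDestroy(&eps);
   MatDestroy(&A);

Because the shell matrix only provides matrix-vector products, the Krylov-type solvers behind ``EPSSolve`` never store the full matrix, which keeps the memory footprint small even for large Hilbert spaces.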
.. tip::

   By default, `Petsc `_ is built without optimizations and in debugging mode. To gain better performance, you can use the following options:

   .. code-block:: console

      OPTFLAGS="-O3"
      ./configure --with-scalar-type=real --download-fblaslapack --download-mpich --COPTFLAGS=$OPTFLAGS --CXXOPTFLAGS=$OPTFLAGS --FOPTFLAGS=$OPTFLAGS --with-debugging=0
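As noted above, the scalar type chosen when configuring `Petsc `_ has to match **ScalarType** of the :ref:`Operator class`. A cheap way to catch a mismatch at compile time is a preprocessor guard like the following sketch, which assumes your operator uses **complex** doubles and therefore requires the ``petsc_complex`` build:

.. code-block:: cpp

   #include <petscsys.h>

   /* Sketch of a compile-time guard: PETSC_USE_COMPLEX is defined by the Petsc
    * headers only for builds configured with --with-scalar-type=complex.
    * Drop this guard (or invert it) when working with the real build. */
   #if !defined(PETSC_USE_COMPLEX)
   #error "This translation unit expects a Petsc build configured with --with-scalar-type=complex"
   #endif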