Merge branch 'master' into programmers-docs
ilfreddy authored Apr 22, 2024
2 parents c37e8f6 + 1e7ea8e commit 97d67d2
Showing 119 changed files with 7,694 additions and 1,931 deletions.
10 changes: 5 additions & 5 deletions .circleci/config.yml
@@ -1,9 +1,9 @@
version: 2

variables:
ubuntu-2004: &ubuntu-2004
ubuntu-2204: &ubuntu-2204
docker:
- image: ghcr.io/mrchemsoft/metamr/circleci_ubuntu-20.04:sha-343e011
- image: ghcr.io/mrchemsoft/metamr/circleci_ubuntu-22.04:sha-9f6ecd4
name: tsubame
user: merzbow
working_directory: ~/mrchem
@@ -52,14 +52,14 @@ variables:
jobs:
serial-py3:
<<: *ubuntu-2004
<<: *ubuntu-2204
steps:
- checkout
- *configure-serial
- *build
- *tests
omp-py3:
<<: *ubuntu-2004
<<: *ubuntu-2204
environment:
- OMP_NUM_THREADS: '2'
steps:
@@ -68,7 +68,7 @@ jobs:
- *build
- *tests
mpi-py3:
<<: *ubuntu-2004
<<: *ubuntu-2204
steps:
- checkout
- *configure-mpi
6 changes: 3 additions & 3 deletions .github/workflows/build-test.yml
@@ -42,19 +42,19 @@ jobs:
activate-environment: mrchem-gha
environment-file: .github/mrchem-gha.yml
channel-priority: true
python-version: 3.6
python-version: 3.9
use-only-tar-bz2: true # IMPORTANT: This needs to be set for caching to work properly!

- name: Configure
shell: bash -l {0}
run: |
python ./setup --type=$BUILD_TYPE --omp --arch-flags=false --generator=Ninja --prefix=$GITHUB_WORKSPACE/Software/MRChem build
- name: Build
shell: bash -l {0}
run: |
cmake --build build --config $BUILD_TYPE --target install -- -v -d stats
- name: Test
shell: bash -l {0}
run: |
6 changes: 3 additions & 3 deletions .github/workflows/code-coverage.yml
@@ -26,18 +26,18 @@ jobs:
activate-environment: mrchem-codecov
environment-file: .github/mrchem-codecov.yml
channel-priority: true
python-version: 3.6
python-version: 3.9

- name: Configure
shell: bash -l {0}
run: |
python ./setup --type=$BUILD_TYPE --arch-flags=false --coverage --generator=Ninja --prefix=$GITHUB_WORKSPACE/Software/MRChem build
- name: Build
shell: bash -l {0}
run: |
cmake --build build --config $BUILD_TYPE --target install -- -v -d stats
- name: Test MRChem and generate coverage report
shell: bash -l {0}
run: |
2 changes: 2 additions & 0 deletions doc/index.rst
@@ -42,6 +42,8 @@ Features in MRChem-1.1:
- Electric field
+ Solvent effects
- Cavity-free PCM
- Poisson-Boltzmann PCM
- Linearized Poisson-Boltzmann PCM
* Properties:
+ Ground state energy
+ Dipole moment
2 changes: 1 addition & 1 deletion doc/installation.rst
@@ -6,7 +6,7 @@ Installation
Build prerequisites
-------------------

- Python-3.7 (or later)
- Python-3.9 (or later)
- CMake-3.14 (or later)
- GNU-5.4 or Intel-17 (or later) compilers (C++14 standard)

31 changes: 29 additions & 2 deletions doc/programmers/code_reference/environment.rst
@@ -21,11 +21,38 @@ Permittivity
:protected-members:
:private-members:

SCRF
DHScreening
------------

.. doxygenclass:: mrchem::SCRF
.. doxygenclass:: mrchem::DHScreening
:project: MRChem
:members:
:protected-members:
:private-members:

GPESolver
------------

.. doxygenclass:: mrchem::GPESolver
:project: MRChem
:members:
:protected-members:
:private-members:

PBESolver
------------

.. doxygenclass:: mrchem::PBESolver
:project: MRChem
:members:
:protected-members:
:private-members:

LPBESolver
------------

.. doxygenclass:: mrchem::LPBESolver
:project: MRChem
:members:
:protected-members:
:private-members:
18 changes: 17 additions & 1 deletion doc/programming.bib
@@ -333,7 +333,6 @@ @article{Fosso-Tande2013
abstract = {We describe and present results of the implementation of the surface and volume polarization for electrostatics (SVPE) solvation model. Unlike most other implementations of the solvation model where the solute and the solvent are described with multiple numerical representations, our implementation uses a multiresolution, adaptive multiwavelet basis to describe both the solute and the solvent. This requires reformulation to use integral equations throughout as well as a conscious management of numerical properties of the basis. {\textcopyright} 2013 Elsevier B.V. All rights reserved.},
author = {Fosso-Tande, Jacob and Harrison, Robert J.},
doi = {10.1016/j.cplett.2013.01.065},
file = {:home/ggerez/.local/share/data/Mendeley Ltd./Mendeley Desktop/Downloaded/Fosso-Tande, Harrison - 2013 - Implicit solvation models in a multiresolution multiwavelet basis.pdf:pdf},
issn = {00092614},
journal = {Chem. Phys. Lett.},
pages = {179--184},
Expand All @@ -343,3 +342,20 @@ @article{Fosso-Tande2013
year = {2013}
}

@article{gerez2023,
author = {Gerez S, Gabriel A. and Di Remigio Eikås, Roberto and Jensen, Stig Rune and Bjørgve, Magnar and Frediani, Luca},
title = {Cavity-Free Continuum Solvation: Implementation and Parametrization in a Multiwavelet Framework},
journal = {Journal of Chemical Theory and Computation},
volume = {19},
number = {7},
pages = {1986--1997},
year = {2023},
doi = {10.1021/acs.jctc.2c01098},
note = {PMID: 36933225},
url = {https://doi.org/10.1021/acs.jctc.2c01098},
eprint = {https://doi.org/10.1021/acs.jctc.2c01098}
}
1 change: 0 additions & 1 deletion doc/users/betzy_example.job
@@ -3,6 +3,5 @@
#SBATCH --tasks-per-node=12

export UCX_LOG_LEVEL=ERROR
export OMP_NUM_THREADS=15

~/my_path/to/mrchem --launcher='mpirun --rank-by node --map-by socket --bind-to numa' h2o
60 changes: 60 additions & 0 deletions doc/users/geometry_optimization.rst
@@ -0,0 +1,60 @@
-------------------------------
Running a geometry optimization
-------------------------------
In the following we assume that we have a valid user input file for the water
molecule, called ``h2o.inp``, e.g. like this:

.. include:: h2o_geopt.inp
:literal:

A geometry optimization can be run by adding a ``GeometryOptimizer`` section to any normal ``.inp`` file and setting the ``run`` keyword to ``true``:

.. include:: geopt_section.inp
:literal:

This will start a geometry optimization with the default settings.
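
Putting the two pieces together, a complete input for a geometry optimization of water
could look like this::

   world_prec = 1.0e-6
   world_unit = angstrom

   WaveFunction {
     method = lda
   }

   Molecule {
   $coords
   O   0.00000   0.00000   0.11779
   H   0.00000   0.75545  -0.47116
   H   0.00000  -0.75545  -0.47116
   $end
   }

   GeometryOptimizer {
     run = true
   }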

Obtaining accurate forces
-------------------------

In the H2O input example above, the ``world_prec`` parameter is chosen rather small (i.e. tight). This is necessary to obtain accurate forces.
If a looser precision is chosen, the geometry optimization may not converge. Pay attention to the warning::

WARNING: Noise in force is larger than 0.2 times the largest force component!!!

Geometry optimization convergence cannot be guaranteed!!!

This warning is printed when the noise level is too high; geometry optimizations will usually not converge in that case.
If it appears, either tighten ``world_prec``, ``orb_thrs`` (or both), *or* loosen the convergence criterion of the geometry optimization.


Pre-relax input geometries
--------------------------

Running high-precision multiresolution wavelet calculations is computationally expensive.
It is therefore not advisable to start a simulation directly from an input geometry with large forces combined with a small (tight) ``world_prec``.
An optimized workflow would look something like this:

1. Optimize the geometry with a Gaussian basis set. This can be done with any of a number of Gaussian-basis codes.
2. Use inaccurate forces (``world_prec`` ~ 1e-4) and a rather loose convergence criterion (``max_force_component`` ~ 1e-2) for a
   pre-relaxation with MRChem (see the sketch after this list).
3. Do a tight geometry optimization (``max_force_component`` ~ 5e-4) with an accurate MRChem calculation (``world_prec`` ~ 1e-6).
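
The pre-relaxation in step 2 could, for example, use an input like the following minimal
sketch (here we assume that ``max_force_component`` is a keyword of the ``GeometryOptimizer``
section; check the :ref:`User input reference` for the authoritative keyword list)::

   world_prec = 1.0e-4

   GeometryOptimizer {
     run = true
     max_force_component = 1.0e-2
   }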


Reuse orbitals
--------------

For tight geometry optimizations where the input structure is already close to the local minimum (e.g. after a cheaper pre-relaxation), it makes sense
to reuse the orbitals from geometry optimization iteration *i* as the starting guess for iteration *i+1*. This feature can be enabled by setting::

use_previous_guess = true
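
In context this would read, e.g. (a sketch assuming the keyword belongs to the
``GeometryOptimizer`` section)::

   GeometryOptimizer {
     run = true
     use_previous_guess = true
   }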


Choosing an initial step size
-----------------------------

If there are problems in the first couple of geometry optimization iterations (energy and force norm increasing), the initial step size should be chosen
manually. If a conservative choice (``init_step_size`` ~ 0.8) does not solve the problem, the issue usually lies in the input
geometry (wrong units, unphysical structure, ...) or in the potential energy surface (too much noise, error in the DFT input section, ...).
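
A manually chosen, conservative initial step size would then be requested as follows
(again a sketch; ``init_step_size`` is assumed to be a ``GeometryOptimizer`` keyword)::

   GeometryOptimizer {
     run = true
     init_step_size = 0.8
   }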

Convergence problems can be analyzed by visualizing the optimization trajectory and by plotting the energy and force norm versus the geometry optimization iteration.
3 changes: 3 additions & 0 deletions doc/users/geopt_section.inp
@@ -0,0 +1,3 @@
GeometryOptimizer{
run = true
}
14 changes: 14 additions & 0 deletions doc/users/h2o_geopt.inp
@@ -0,0 +1,14 @@
world_prec = 1.0e-6
world_unit = angstrom

WaveFunction {
method = lda
}

Molecule {
$coords
O 0.00000 0.00000 0.11779
H 0.00000 0.75545 -0.47116
H 0.00000 -0.75545 -0.47116
$end
}
1 change: 1 addition & 0 deletions doc/users/manual.rst
@@ -42,3 +42,4 @@ in more detail in the sections below.
user_ref
qcengine
program_json
geometry_optimization
12 changes: 6 additions & 6 deletions doc/users/running.rst
@@ -102,6 +102,8 @@ the code on 16 threads (all sharing the same physical memory space)::

$ OMP_NUM_THREADS=16 mrchem h2o

Note that the number of threads will be set by ``OMP_NUM_THREADS`` only
if the code is compiled without MPI support; see below.

Distributed memory MPI
++++++++++++++++++++++
Expand Down Expand Up @@ -131,10 +133,12 @@ as it will be literally prepended to the ``mrchem.x`` command when the
each `NUMA <https://en.wikipedia.org/wiki/Non-uniform_memory_access>`_
domain (usually one per socket) of your CPU, and MPI across NUMA domains and
ultimately machines. Ideally, the number of OpenMP threads should be
between 8-20. E.g. on hardware with two sockets of 16 cores each, use
OMP_NUM_THREADS=16 and scale the number of MPI processes by the size
between 8-20. E.g. on hardware with two sockets of 16 cores each, scale
the number of MPI processes by the size
of the molecule, typically one process per ~5 orbitals or so (and
definitely not *more* than one process per orbital).
The actual number of threads will be set automatically regardless of the
value of ``OMP_NUM_THREADS``.
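
For example, on a single node with two 16-core sockets one could run two MPI
processes, one per NUMA domain, and let the code set the thread count itself.
The launcher string below is only an illustration (it assumes an OpenMPI-style
``mpirun``); adapt it to your MPI installation::

   $ mrchem --launcher='mpirun -np 2 --map-by socket --bind-to numa' h2o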


Job example (Betzy)
@@ -173,10 +177,6 @@ a very small molecule for such setup!).
assigned to any other core, which would result in much reduced performance). The 16
cores of the group may then be used by the threads initiated by that MPI process.

``--oversubscribe``
To tell MPI that it should accept that the number of MPI processes times
the number of threads is larger than the number of available cores.

**Advanced option**:
Alternatively one can get full control of task placement using the Slurm workload
manager by replacing ``mpirun`` with ``srun`` and setting explicit CPU masks as::
1 change: 1 addition & 0 deletions doc/users/schema_input.json
@@ -24,6 +24,7 @@
},
"mpi": { # Section for MPI specification
"bank_size": int, # Number of MPI ranks in memory bank
"omp_threads": int, # Number of omp threads
"numerically_exact": bool, # Guarantee MPI invariant results
"shared_memory_size": int # Size (MB) of MPI shared memory blocks
},
15 changes: 13 additions & 2 deletions doc/users/user_inp.rst
@@ -115,6 +115,7 @@ This section defines some parameters that are used in MPI runs (defaults shown):
MPI {
bank_size = -1 # Number of processes used as memory bank
omp_threads = -1 # Number of omp threads to use
numerically_exact = false # Guarantee MPI invariant results
share_nuclear_potential = false # Use MPI shared memory window
share_coulomb_potential = false # Use MPI shared memory window
@@ -131,6 +132,11 @@ it is likely more efficient to set `bank_size = 0`, otherwise it's recommended
to use the default. If a particular calculation runs out of memory, it might
help to increase the number of bank processes from the default value.

The number of OpenMP threads can be forced using the ``omp_threads`` keyword.
For MPI runs it is strongly advised to keep the default, as the optimal value
can be difficult to guess. The environment variable ``OMP_NUM_THREADS`` is not used
for MPI runs.
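
For instance, to force the number of OpenMP threads to eight (a sketch; in most
cases the default ``-1`` should be kept)::

   MPI {
     omp_threads = 8
   }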

The ``numerically_exact`` keyword will trigger algorithms that guarantee that
the computed results are invariant (within double precision) with respect to
the number of MPI processes. These exact algorithms require more memory and are
@@ -194,7 +200,7 @@ as the ``world_origin`` is the true origin).
WaveFunction
------------

Here we give the wavefunction method and whether we run spin restricted (alpha
Here we give the wavefunction method, the environment (used for solvent models), and whether we run spin restricted (alpha
and beta spins are forced to occupy the same spatial orbitals) or not (method
must be specified, otherwise defaults are shown):

@@ -203,6 +209,7 @@ must be specified, otherwise defaults are shown):
WaveFunction {
method = <wavefunction_method> # Core, Hartree, HF or DFT
restricted = true # Spin restricted/unrestricted
environment = pcm # Environment (pcm, pcm-pb, pcm-lpb) defaults to none
}
There are currently four methods available: Core Hamiltonian, Hartree,
@@ -212,6 +219,11 @@ B3LYP``), *or* you can set ``method = DFT`` and specify a "non-standard"
functional in the separate DFT section (see below). See
:ref:`User input reference` for a list of available default functionals.

The solvent model implemented is a cavity-free PCM, described in :cite:`gerez2023`.
Within this model we provide a Generalized Poisson equation solver (keyword ``pcm``), a
Poisson-Boltzmann solver (keyword ``pcm-pb``) and a Linearized Poisson-Boltzmann solver (keyword ``pcm-lpb``).
Further details of the calculation must be given in the ``PCM`` section; see :ref:`User input reference` for details.
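
A minimal solvated calculation could thus be requested as follows (a sketch;
the keywords of the ``PCM`` section itself are described in the :ref:`User input reference`)::

   WaveFunction {
     method = lda
     environment = pcm
   }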

.. note::

Restricted open-shell wavefunctions are not supported.
@@ -674,4 +686,3 @@ avoid overwriting the files by default). So, in order to use MW orbitals from a
previous calculation, you must either change one of the paths
(``Response.path_orbitals`` or ``Files.guess_X_p`` etc), or manually copy the
files between the default locations.
