docs: fix parameter links #4239

Merged · 4 commits · Oct 23, 2024
20 changes: 11 additions & 9 deletions deepmd/utils/argcheck.py
@@ -1387,14 +1387,16 @@


def descrpt_variant_type_args(exclude_hybrid: bool = False) -> Variant:
-    link_lf = make_link("loc_frame", "model/descriptor[loc_frame]")
-    link_se_e2_a = make_link("se_e2_a", "model/descriptor[se_e2_a]")
-    link_se_e2_r = make_link("se_e2_r", "model/descriptor[se_e2_r]")
-    link_se_e3 = make_link("se_e3", "model/descriptor[se_e3]")
-    link_se_a_tpe = make_link("se_a_tpe", "model/descriptor[se_a_tpe]")
-    link_hybrid = make_link("hybrid", "model/descriptor[hybrid]")
-    link_se_atten = make_link("se_atten", "model/descriptor[se_atten]")
-    link_se_atten_v2 = make_link("se_atten_v2", "model/descriptor[se_atten_v2]")
+    link_lf = make_link("loc_frame", "model[standard]/descriptor[loc_frame]")
+    link_se_e2_a = make_link("se_e2_a", "model[standard]/descriptor[se_e2_a]")
+    link_se_e2_r = make_link("se_e2_r", "model[standard]/descriptor[se_e2_r]")
+    link_se_e3 = make_link("se_e3", "model[standard]/descriptor[se_e3]")
+    link_se_a_tpe = make_link("se_a_tpe", "model[standard]/descriptor[se_a_tpe]")
+    link_hybrid = make_link("hybrid", "model[standard]/descriptor[hybrid]")
+    link_se_atten = make_link("se_atten", "model[standard]/descriptor[se_atten]")
+    link_se_atten_v2 = make_link(
+        "se_atten_v2", "model[standard]/descriptor[se_atten_v2]"
+    )
    doc_descrpt_type = "The type of the descriptor. See explanation below. \n\n\
- `loc_frame`: Defines a local frame at each atom, and then computes the descriptor as local coordinates under this frame.\n\n\
- `se_e2_a`: Used by the smooth edition of Deep Potential. The full relative coordinates are used to construct the descriptor.\n\n\
@@ -1692,7 +1694,7 @@
    # --- Modifier configurations: --- #
    def modifier_dipole_charge():
        doc_model_name = "The name of the frozen dipole model file."
-        doc_model_charge_map = f"The charge of the WFCC. The list length should be the same as the {make_link('sel_type', 'model/fitting_net[dipole]/sel_type')}. "
+        doc_model_charge_map = f"The charge of the WFCC. The list length should be the same as the {make_link('sel_type', 'model[standard]/fitting_net[dipole]/sel_type')}. "
        doc_sys_charge_map = f"The charge of real atoms. The list length should be the same as the {make_link('type_map', 'model/type_map')}"
        doc_ewald_h = "The grid spacing of the FFT grid. Unit is A"
        doc_ewald_beta = f"The splitting parameter of Ewald sum. Unit is A^{-1}"
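The repeated link definitions in the hunk above lend themselves to a loop. A minimal sketch of the pattern, with a stand-in `make_link` (hypothetical — the real helper lives in `deepmd.utils.argcheck` and may emit different markup):

```python
def make_link(content: str, ref_key: str) -> str:
    # Stand-in for deepmd's helper; the real one emits doc-backend-specific
    # markup. Here we emit a simple reference string for illustration.
    return f"`{content} <{ref_key}>`"


# The descriptor names touched by this PR, all re-rooted under model[standard].
DESCRIPTORS = [
    "loc_frame", "se_e2_a", "se_e2_r", "se_e3",
    "se_a_tpe", "hybrid", "se_atten", "se_atten_v2",
]

links = {
    name: make_link(name, f"model[standard]/descriptor[{name}]")
    for name in DESCRIPTORS
}
print(links["se_e2_a"])  # `se_e2_a <model[standard]/descriptor[se_e2_a]>`
```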
2 changes: 1 addition & 1 deletion doc/freeze/compress.md
@@ -99,7 +99,7 @@ The model compression interface requires the version of DeePMD-kit used in the o

Descriptors with `se_e2_a`, `se_e3`, `se_e2_r` and `se_atten_v2` types are supported by the model compression feature. `Hybrid` mixed with the above descriptors is also supported.

-Notice: Model compression for the `se_atten_v2` descriptor is exclusively designed for models with the training parameter {ref}`attn_layer <model/descriptor[se_atten_v2]/attn_layer>` set to 0.
+Notice: Model compression for the `se_atten_v2` descriptor is exclusively designed for models with the training parameter {ref}`attn_layer <model[standard]/descriptor[se_atten_v2]/attn_layer>` set to 0.
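The compressibility rule stated in this notice can be checked up front. A minimal sketch assuming the training input is a plain JSON dict (`check_compressible` is a hypothetical helper, not part of DeePMD-kit; the default `attn_layer` value is an assumption):

```python
def check_compressible(config: dict) -> bool:
    """Return True if the descriptor settings allow model compression.

    Mirrors the documented rules: se_e2_a, se_e3, se_e2_r and se_atten_v2
    are compressible, and se_atten_v2 only when attn_layer is 0.
    """
    compressible = {"se_e2_a", "se_e3", "se_e2_r", "se_atten_v2"}
    descriptor = config["model"]["descriptor"]
    dtype = descriptor["type"]
    if dtype == "se_atten_v2":
        # Assumed nonzero default when attn_layer is omitted.
        return descriptor.get("attn_layer", 2) == 0
    return dtype in compressible


config = {"model": {"descriptor": {"type": "se_atten_v2", "attn_layer": 0}}}
print(check_compressible(config))  # True
```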

**Available activation functions for descriptor:**

2 changes: 1 addition & 1 deletion doc/model/dplr.md
@@ -58,7 +58,7 @@ Two settings make the training input script different from an energy training in
},
```

-The type of fitting is set to {ref}`dipole <model/fitting_net[dipole]>`. The dipole is associated with type 0 atoms (oxygens), by the setting `"dipole_type": [0]`. What we trained is the displacement of the WC from the corresponding oxygen atom. It shares the same training input as the atomic dipole because both are 3-dimensional vectors defined on atoms.
+The type of fitting is set to {ref}`dipole <model[standard]/fitting_net[dipole]>`. The dipole is associated with type 0 atoms (oxygens), by the setting `"dipole_type": [0]`. What we trained is the displacement of the WC from the corresponding oxygen atom. It shares the same training input as the atomic dipole because both are 3-dimensional vectors defined on atoms.
The loss section is provided as follows

```json
6 changes: 3 additions & 3 deletions doc/model/dprc.md
@@ -140,7 +140,7 @@ As described in the paper, the DPRc model only corrects $E_\text{QM}$ and $E_\te

::::

-{ref}`exclude_types <model/descriptor[se_a_ebd_v2]/exclude_types>` can be generated by the following Python script:
+{ref}`exclude_types <model[standard]/descriptor[se_a_ebd_v2]/exclude_types>` can be generated by the following Python script:

```py
from itertools import combinations_with_replacement, product
@@ -163,7 +163,7 @@ print(
)
```

-Also, DPRc assumes MM atom energies ({ref}`atom_ener <model/fitting_net[ener]/atom_ener>`) are zero:
+Also, DPRc assumes MM atom energies ({ref}`atom_ener <model[standard]/fitting_net[ener]/atom_ener>`) are zero:

```json
"fitting_net": {
@@ -173,7 +173,7 @@ Also, DPRc assumes MM atom energies ({ref}`atom_ener <model/fitting_net[ener]/at
}
```

-Note that {ref}`atom_ener <model/fitting_net[ener]/atom_ener>` only works when {ref}`descriptor/set_davg_zero <model/descriptor[se_a_ebd_v2]/set_davg_zero>` of the QM/MM part is `true`.
+Note that {ref}`atom_ener <model[standard]/fitting_net[ener]/atom_ener>` only works when {ref}`descriptor/set_davg_zero <model[standard]/descriptor[se_a_ebd_v2]/set_davg_zero>` of the QM/MM part is `true`.
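The two DPRc conventions above (zero MM `atom_ener`, `set_davg_zero` enabled) can be cross-checked against a training input. A minimal sketch with plain-dict access; `check_dprc_fitting` is a hypothetical helper, not a DeePMD-kit API:

```python
def check_dprc_fitting(model: dict, n_mm_types: int) -> None:
    """Hypothetical sanity check for the DPRc conventions described above."""
    atom_ener = model["fitting_net"].get("atom_ener", [])
    # MM atom energies (assumed to be the trailing entries) must be zero.
    assert all(e == 0.0 for e in atom_ener[-n_mm_types:]), "MM atom_ener must be 0"
    # atom_ener only takes effect when set_davg_zero is true for the QM/MM part.
    assert model["descriptor"].get("set_davg_zero", False), "set_davg_zero must be true"


# QM entries left free (null in JSON, None here); the two MM entries fixed at 0.
model = {
    "descriptor": {"type": "se_a_ebd_v2", "set_davg_zero": True},
    "fitting_net": {"type": "ener", "atom_ener": [None, None, 0.0, 0.0]},
}
check_dprc_fitting(model, n_mm_types=2)  # passes silently
```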

## Run MD simulations

2 changes: 1 addition & 1 deletion doc/model/overall.md
@@ -42,7 +42,7 @@ A model has two parts, a descriptor that maps atomic configuration to a set of s
}
```

-The two subsections, {ref}`descriptor <model/descriptor>` and {ref}`fitting_net <model/fitting_net>`, define the descriptor and the fitting net, respectively.
+The two subsections, {ref}`descriptor <model[standard]/descriptor>` and {ref}`fitting_net <model[standard]/fitting_net>`, define the descriptor and the fitting net, respectively.

The {ref}`type_map <model/type_map>` is optional, which provides the element names (but not necessarily same as the actual name of the element) of the corresponding atom types. A water model, as in this example, has two kinds of atoms. The atom types are internally recorded as integers, e.g., `0` for oxygen and `1` for hydrogen here. A mapping from the atom type to their names is provided by {ref}`type_map <model/type_map>`.

4 changes: 2 additions & 2 deletions doc/model/train-energy-spin.md
@@ -11,9 +11,9 @@ keeping other sections the same as the normal energy model's input script.
Note that when adding spin into the model, there will be some implicit modifications automatically done by the program:

- In the TensorFlow backend, the `se_e2_a` descriptor will treat those atom types with spin as new (virtual) types,
-  and duplicate their corresponding selected numbers of neighbors ({ref}`sel <model/descriptor[se_e2_a]/sel>`) from their real atom types.
+  and duplicate their corresponding selected numbers of neighbors ({ref}`sel <model[standard]/descriptor[se_e2_a]/sel>`) from their real atom types.
- In the PyTorch backend, if spin settings are added, all the types (with or without spin) will have their virtual types.
-  The `se_e2_a` descriptor will thus double the {ref}`sel <model/descriptor[se_e2_a]/sel>` list,
+  The `se_e2_a` descriptor will thus double the {ref}`sel <model[standard]/descriptor[se_e2_a]/sel>` list,
while in other descriptors with mixed types (such as `dpa1` or `dpa2`), the sel number will not be changed for clarity.
If you are using descriptors with mixed types, to achieve better performance,
you should manually extend your sel number (maybe double) depending on the balance between performance and efficiency.
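The implicit `sel` extension described in these bullets can be illustrated with a small sketch (a simplified illustration of the documented behavior, not the actual backend code; the ordering of virtual types is an assumption):

```python
def extend_sel_for_spin(sel, use_spin):
    """Append a virtual-type sel entry for every real type that has spin.

    sel:      selected neighbor numbers per real atom type, e.g. [46, 92]
    use_spin: whether each real type carries a spin,       e.g. [True, False]
    """
    virtual = [n for n, spin in zip(sel, use_spin) if spin]
    return sel + virtual


# Type 0 has spin, type 1 does not: one virtual type is appended.
print(extend_sel_for_spin([46, 92], [True, False]))  # [46, 92, 46]
```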
8 changes: 4 additions & 4 deletions doc/model/train-energy.md
@@ -79,7 +79,7 @@ Benefiting from the relative force loss, small forces can be fitted more accurat

## The fitting network

-The construction of the fitting net is given by section {ref}`fitting_net <model/fitting_net>`
+The construction of the fitting net is given by section {ref}`fitting_net <model[standard]/fitting_net>`

```json
"fitting_net" : {
@@ -89,9 +89,9 @@ The construction of the fitting net is given by section {ref}`fitting_net <model
},
```

-- {ref}`neuron <model/fitting_net[ener]/neuron>` specifies the size of the fitting net. If two neighboring layers are of the same size, then a [ResNet architecture](https://arxiv.org/abs/1512.03385) is built between them.
-- If the option {ref}`resnet_dt <model/fitting_net[ener]/resnet_dt>` is set to `true`, then a timestep is used in the ResNet.
-- {ref}`seed <model/fitting_net[ener]/seed>` gives the random seed that is used to generate random numbers when initializing the model parameters.
+- {ref}`neuron <model[standard]/fitting_net[ener]/neuron>` specifies the size of the fitting net. If two neighboring layers are of the same size, then a [ResNet architecture](https://arxiv.org/abs/1512.03385) is built between them.
+- If the option {ref}`resnet_dt <model[standard]/fitting_net[ener]/resnet_dt>` is set to `true`, then a timestep is used in the ResNet.
+- {ref}`seed <model[standard]/fitting_net[ener]/seed>` gives the random seed that is used to generate random numbers when initializing the model parameters.

## Loss

4 changes: 2 additions & 2 deletions doc/model/train-fitting-dos.md
@@ -16,11 +16,11 @@ $deepmd_source_dir/examples/dos/input.json

The training and validation data are also provided in our examples. But note that **the data provided along with the examples are of limited amount, and should not be used to train a production model.**

-Similar to the `input.json` used in `ener` mode, training JSON is also divided into {ref}`model <model>`, {ref}`learning_rate <learning_rate>`, {ref}`loss <loss>` and {ref}`training <training>`. Most keywords remain the same as `ener` mode, and their meaning can be found [here](train-se-e2-a.md). To fit the `dos`, one needs to modify {ref}`model/fitting_net <model/fitting_net>` and {ref}`loss <loss>`.
+Similar to the `input.json` used in `ener` mode, training JSON is also divided into {ref}`model <model>`, {ref}`learning_rate <learning_rate>`, {ref}`loss <loss>` and {ref}`training <training>`. Most keywords remain the same as `ener` mode, and their meaning can be found [here](train-se-e2-a.md). To fit the `dos`, one needs to modify {ref}`model[standard]/fitting_net <model[standard]/fitting_net>` and {ref}`loss <loss>`.

## The fitting Network

-The {ref}`fitting_net <model/fitting_net>` section tells DP which fitting net to use.
+The {ref}`fitting_net <model[standard]/fitting_net>` section tells DP which fitting net to use.

The JSON of `dos` type should be provided like

4 changes: 2 additions & 2 deletions doc/model/train-fitting-tensor.md
@@ -30,7 +30,7 @@ $deepmd_source_dir/examples/water_tensor/polar/polar_input_torch.json

The training and validation data are also provided in our examples. But note that **the data provided along with the examples are of limited amount, and should not be used to train a production model.**

-Similar to the `input.json` used in `ener` mode, training JSON is also divided into {ref}`model <model>`, {ref}`learning_rate <learning_rate>`, {ref}`loss <loss>` and {ref}`training <training>`. Most keywords remain the same as `ener` mode, and their meaning can be found [here](train-se-e2-a.md). To fit a tensor, one needs to modify {ref}`model/fitting_net <model/fitting_net>` and {ref}`loss <loss>`.
+Similar to the `input.json` used in `ener` mode, training JSON is also divided into {ref}`model <model>`, {ref}`learning_rate <learning_rate>`, {ref}`loss <loss>` and {ref}`training <training>`. Most keywords remain the same as `ener` mode, and their meaning can be found [here](train-se-e2-a.md). To fit a tensor, one needs to modify {ref}`model[standard]/fitting_net <model[standard]/fitting_net>` and {ref}`loss <loss>`.

## Theory

@@ -72,7 +72,7 @@ The tensorial models can be used to calculate IR spectrum and Raman spectrum.[^1

## The fitting Network

-The {ref}`fitting_net <model/fitting_net>` section tells DP which fitting net to use.
+The {ref}`fitting_net <model[standard]/fitting_net>` section tells DP which fitting net to use.

::::{tab-set}

2 changes: 1 addition & 1 deletion doc/model/train-hybrid.md
@@ -25,7 +25,7 @@ This way, one can set the different cutoff radii for different descriptors.[^1]

## Instructions

-To use the descriptor in DeePMD-kit, one firstly set the {ref}`type <model/descriptor/type>` to {ref}`hybrid <model/descriptor[hybrid]>`, then provide the definitions of the descriptors by the items in the `list`,
+To use the descriptor in DeePMD-kit, one firstly set the {ref}`type <model[standard]/descriptor/type>` to {ref}`hybrid <model[standard]/descriptor[hybrid]>`, then provide the definitions of the descriptors by the items in the `list`,

```json
"descriptor" :{
22 changes: 11 additions & 11 deletions doc/model/train-se-a-mask.md
@@ -29,7 +29,7 @@ A complete training input script of this example can be found in the directory.
$deepmd_source_dir/examples/zinc_protein/zinc_se_a_mask.json
```

-The construction of the descriptor is given by section {ref}`descriptor <model/descriptor>`. An example of the descriptor is provided as follows
+The construction of the descriptor is given by section {ref}`descriptor <model[standard]/descriptor>`. An example of the descriptor is provided as follows

```json
"descriptor" :{
@@ -43,13 +43,13 @@ The construction of the descriptor is given by section {ref}`descriptor <model/d
}
```

-- The {ref}`type <model/descriptor/type>` of the descriptor is set to `"se_a_mask"`.
-- {ref}`sel <model/descriptor[se_a_mask]/sel>` gives the maximum number of atoms in input coordinates. It is a list, the length of which is the same as the number of atom types in the system, and `sel[i]` denotes the maximum number of atoms with type `i`.
-- The {ref}`neuron <model/descriptor[se_a_mask]/neuron>` specifies the size of the embedding net. From left to right the members denote the sizes of each hidden layer from the input end to the output end, respectively. If the outer layer is twice the size of the inner layer, then the inner layer is copied and concatenated, then a [ResNet architecture](https://arxiv.org/abs/1512.03385) is built between them.
-- The {ref}`axis_neuron <model/descriptor[se_a_mask]/axis_neuron>` specifies the size of the submatrix of the embedding matrix, the axis matrix as explained in the [DeepPot-SE paper](https://arxiv.org/abs/1805.09003)
-- If the option {ref}`type_one_side <model/descriptor[se_a_mask]/type_one_side>` is set to `true`, the embedding network parameters vary by types of neighbor atoms only, so there will be $N_\text{types}$ sets of embedding network parameters. Otherwise, the embedding network parameters vary by types of centric atoms and types of neighbor atoms, so there will be $N_\text{types}^2$ sets of embedding network parameters.
-- If the option {ref}`resnet_dt <model/descriptor[se_a_mask]/resnet_dt>` is set to `true`, then a timestep is used in the ResNet.
-- {ref}`seed <model/descriptor[se_a_mask]/seed>` gives the random seed that is used to generate random numbers when initializing the model parameters.
+- The {ref}`type <model[standard]/descriptor/type>` of the descriptor is set to `"se_a_mask"`.
+- {ref}`sel <model[standard]/descriptor[se_a_mask]/sel>` gives the maximum number of atoms in input coordinates. It is a list, the length of which is the same as the number of atom types in the system, and `sel[i]` denotes the maximum number of atoms with type `i`.
+- The {ref}`neuron <model[standard]/descriptor[se_a_mask]/neuron>` specifies the size of the embedding net. From left to right the members denote the sizes of each hidden layer from the input end to the output end, respectively. If the outer layer is twice the size of the inner layer, then the inner layer is copied and concatenated, then a [ResNet architecture](https://arxiv.org/abs/1512.03385) is built between them.
+- The {ref}`axis_neuron <model[standard]/descriptor[se_a_mask]/axis_neuron>` specifies the size of the submatrix of the embedding matrix, the axis matrix as explained in the [DeepPot-SE paper](https://arxiv.org/abs/1805.09003)
+- If the option {ref}`type_one_side <model[standard]/descriptor[se_a_mask]/type_one_side>` is set to `true`, the embedding network parameters vary by types of neighbor atoms only, so there will be $N_\text{types}$ sets of embedding network parameters. Otherwise, the embedding network parameters vary by types of centric atoms and types of neighbor atoms, so there will be $N_\text{types}^2$ sets of embedding network parameters.
+- If the option {ref}`resnet_dt <model[standard]/descriptor[se_a_mask]/resnet_dt>` is set to `true`, then a timestep is used in the ResNet.
+- {ref}`seed <model[standard]/descriptor[se_a_mask]/seed>` gives the random seed that is used to generate random numbers when initializing the model parameters.

To make the `aparam.npy` used for descriptor `se_a_mask`, two variables in `fitting_net` section are needed.

@@ -63,9 +63,9 @@ To make the `aparam.npy` used for descriptor `se_a_mask`, two variables in `fitt
}
```

-- `neuron`, `resnet_dt` and `seed` are the same as the {ref}`fitting_net <model/fitting_net[ener]>` section for fitting energy.
-- {ref}`numb_aparam <model/fitting_net[ener]/numb_aparam>` gives the dimension of the `aparam.npy` file. In this example, it is set to 1 and stores the real/virtual sign of the atoms. For real/virtual atoms, the corresponding sign in `aparam.npy` is set to 1/0.
-- {ref}`use_aparam_as_mask <model/fitting_net[ener]/use_aparam_as_mask>` is set to `true` to use the `aparam.npy` as the mask of the atoms in the descriptor `se_a_mask`.
+- `neuron`, `resnet_dt` and `seed` are the same as the {ref}`fitting_net <model[standard]/fitting_net[ener]>` section for fitting energy.
+- {ref}`numb_aparam <model[standard]/fitting_net[ener]/numb_aparam>` gives the dimension of the `aparam.npy` file. In this example, it is set to 1 and stores the real/virtual sign of the atoms. For real/virtual atoms, the corresponding sign in `aparam.npy` is set to 1/0.
+- {ref}`use_aparam_as_mask <model[standard]/fitting_net[ener]/use_aparam_as_mask>` is set to `true` to use the `aparam.npy` as the mask of the atoms in the descriptor `se_a_mask`.
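The 1/0 real/virtual sign convention above can be written out directly with NumPy. A minimal sketch assuming `numb_aparam = 1` and a per-frame layout of one value per atom (the exact file layout should be checked against the data documentation):

```python
import numpy as np

# Hypothetical system: 5 atoms per frame, the last two being virtual
# placeholders. Each atom carries one value: 1.0 for real, 0.0 for virtual.
real_mask = [1.0, 1.0, 1.0, 0.0, 0.0]
nframes = 1

# One row per frame, one column per atom (assumed layout for numb_aparam = 1).
aparam = np.tile(np.asarray(real_mask, dtype=np.float64), (nframes, 1))
np.save("aparam.npy", aparam)  # assumed to sit alongside coord.npy in the set
print(aparam.shape)  # (1, 5)
```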

Finally, to make a reasonable fitting task with `se_a_mask` descriptor for DP/MM simulations, the loss function with `se_a_mask` is designed to include the atomic forces difference in specific atoms of the input particles only.
More details about the selection of the specific atoms can be found in paper [DP/MM](left to be filled).