test 2.0.2 (#161)
* Dev (#125)

* update workflow and install method.

* build(deps): update pyproject.toml

* build(deps): update pyproject.toml

* chore: delete release-check.md

* ci: update publish-to-pypi.yml

* Update pyproject.toml

* Update pyproject.toml

* update mix.ep50.pth, mix.iter500.pth and train_config.json (#126)

* update h-BN example, ckpt and the related doc. (#127)

* update mix.ep50.pth, mix.iter500.pth and train_config.json

* update hBN example

* docs: update hands_on.md

* update run.py (#128)

* update run.py: support changing device and dtype when postprocessing the model

* update argcheck.py

* update argcheck.py

* add E3 features (node/edge) to hamiltonian/density blocks (#129)

* stack changes

* fix test ham to feature

* update write block function in command line (#132)

* update write block

* update task naming for each model

* feat(dftb): add support for sk params from dftb skf files. (#133)

* add xitorch interp1d

* feat: create dptb/utils/_xitorch/__init__.py

* update C_chain example

* add sk

* read skfile and load sk para

* update sk_param.py

* update onsite.py:

add support for loading onsite E from the dftb skf files.

* update sk_param.py

* update sk_param.py

* add hopping dptb

* add core for dftb support.

* update sk_param.py: make the output skparams the same format as nnsk model parameters.

* update dftbsk.py: make the skparams from skf files the same style as nnsk.

* update hopping_dptb.py

* update onsite.py

* update dftbsk.py and nnsk.py

* add new mix type: dftb + nnenv

* update build.py

* update build.py and dftbsk.py

* update deeptb.py

* add build_model for the two new modes: dftbsk and dftbsk+nnenv.

* add hBN dftb example

* fix(SKParam):

Update SKParam class to handle missing keys in skdict and raise appropriate errors. Add unit tests for SKParam class.

* feat(train): Add skints loss function for training with nnsk model

* test(hopping_dptb): Update dftb/hopping_dptb.py and add test_dftbsk.py

* test(build_model): Update deeptb.py and test_build_model.py with dftbsk changes

* test: Refactor test_build_model.py to remove unnecessary blank lines and add validation for model_options in test_build_model_failure()

* update deeptb.py

* test(test_sktb): add new tests for dftbsk and nnsk models

* 📃 docs(dftb): Update hBN_dftb example with new data and input files

* Update deeptb.py

* Update SE2Aggregation class in se2.py to use the last 4 columns of x instead of the last 3. Update _SE2Descriptor class in se2.py to set the flow parameter to "target_to_source", and add radial info into the env matrix (#135)

* feat(command): add cskf command to collect the skfiles into a pth database  (#134)

* 🦄 refactor(SKParam): Update SKParam class to include HubdU and Occu in skdict

* Update deeptb.py

* ✨ feat(cskf): Add collectskf.py to collect sktb params from sk files

* 🧪 test(cskf): update test_skparam to add unit test for cskf command

* 📃 docs(dftb): add docs about dftb example into index.rst

* Fix bugs in SE2Aggregation and _SE2Descriptor classes

* add example mos2

* update hBN example

* fix(se2): update the smooth function in getting env descriptor (#136)

* 🐞 fix(se2): update the smooth function in getting env descriptor

* test: update test_emb_se2.py

* test: update test_emb_se2.py

* Update version to 2.0.1 in pyproject.toml

* fix(nnsk): update NNSK class in nnsk.py to use the get() method when accessing the full orbital (#139)

* Update NNSK class in nnsk.py to use the get() method when accessing values in the full_basis_to_basis dictionary.

* fix numerical error in test_emb_se2

* temp

* Update(nnsk): automatic orthogonalization (#141)

* fix(data): fix the bug in  dm parse (#142)

* Update(nnsk): automatic orthogonalization

* update DM parse

* feat: add support to pass a kpoints np.array to get band eigenvalues. (#145)

* feat: create toskint.ipynb

* feat: add support to pass a kpoints np.array to get band eigenvalues.
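
The kpoints-array feature above boils down to evaluating Hamiltonian eigenvalues on a user-supplied NumPy array of k-points. A generic sketch of that idea for a two-band 1D tight-binding chain — the model, function name, and parameters here are illustrative assumptions, not the dptb API:

```python
import numpy as np

def band_eigenvalues(kpoints: np.ndarray, t: float = 1.0, eps: float = 0.5) -> np.ndarray:
    """Evaluate band eigenvalues on a plain np.array of k-points.

    Illustrative 2x2 Bloch Hamiltonian of a dimerized 1D chain; the real
    DeePTB model builds H(k) from its trained SK parameters instead.
    """
    bands = []
    for k in kpoints:
        off = t * (1.0 + np.exp(1j * k))          # inter-cell + intra-cell hopping
        hk = np.array([[eps, off], [np.conj(off), -eps]])
        bands.append(np.linalg.eigvalsh(hk))      # ascending real eigenvalues
    return np.array(bands)

kpts = np.linspace(-np.pi, np.pi, 101)            # user-supplied kpoints array
eig = band_eigenvalues(kpts)
print(eig.shape)  # (101, 2): one row of band energies per k-point
```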

* feat(curve_fitting.py): develop a curve fitting function that converts the dftb model to nnsk (#146)

* update fitting dftb

* add(dftb2nnsk): develop fitting class for converting dftb to nnsk model

* remove(curve-fitting.ipynb): remove the notebook for development

* fix test split

* add(test_dftb2nnsk): add simple test for dftb2nnsk class

* temp

* fix test

* fix(nnsk): fix device type error in the to-json function

* rename hopping_dptb to hopping_dftb

* temp

* align inferences

* update decaying function

* remove rc from dftb2nnsk

* fix test nrl

* fix test nrl

* update argcheck

* update dftb2nnsk.py

* update argcheck

* update read_NRL_tojson.py
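
The dftb2nnsk development above is, at its core, curve-fitting tabulated SK integrals to a parametrized decay function. A numpy-only sketch of that idea under an assumed exponential ansatz (synthetic data; dptb's real functional forms and class names differ):

```python
import numpy as np

# Synthetic SK hopping "table": integral values sampled on a distance grid.
r = np.linspace(1.5, 6.0, 50)
true_a, true_b = 4.0, 1.2
table = true_a * np.exp(-true_b * r)

# Fit the ansatz f(r) = a * exp(-b * r) by linear least squares in log space:
# log f = log a - b * r, so a degree-1 polyfit recovers both parameters.
slope, intercept = np.polyfit(r, np.log(table), 1)
a_fit, b_fit = np.exp(intercept), -slope
print(round(a_fit, 3), round(b_fit, 3))  # recovers 4.0 and 1.2 on noiseless data
```

The same pattern — choose a decay ansatz, fit it to each tabulated integral — is what converting a dftb model to nnsk parameters amounts to, just with dptb's own parametrization.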

---------

Co-authored-by: qqgu <[email protected]>

* feat(data): add parsing of abacus md trajectories (#144)

* update abacus parse md

* fix(default_dataset): The natom and nframe default setting

* shift the 'pos' position in test default dataset

* fix lattice constant transform

* update parse_abacus_md

* update abacus.py

* update abacus.py

* update parse abacus scf lattice constant

---------

Co-authored-by: qqgu <[email protected]>

* style: optimize the imports of each submodule (#154)

* data import

* optimize imports

* fix: idp(data) in nnsk and deeptb, and refactor soc switch in hr2hk (#152)

* refactor: plotting code in dftb2nnsk.py; fix push_decay method and update push options (#156)

* Refactor plotting code in dftb2nnsk.py

* refactor(nnsk): Refactor NNSK class in nnsk.py to fix push_decay method and update push options

* update saver.py to save ckpt with name of ovlp

* fix: update nnsk from reference and change the sign of ovp_thr

* update saver.py

* update mos2 example

* Update test_sktb.py with new model weights and fix init_model path in test_md

* docs: Update dftb.md with hBN model training steps and fix formatting (#157)

* docs: Update dftb.md with hBN model training steps and fix formatting

* Bump actions/setup-python from 4 to 5 (#131)

Bumps [actions/setup-python](https://github.com/actions/setup-python) from 4 to 5.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](actions/setup-python@v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump actions/checkout from 3 to 4 (#130)

Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](actions/checkout@v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* feat: add -v to get dptb version, and update version.py to load the version number and git version (#158)

* Refactor main.py to add version flag and handle unknown version

* Update pyproject.toml to add toml dependency

* feat: update pyproject.toml and __init__.py to automatically get the version number. (#160)

* update pyproject.toml and add -v command

* update ut.sh

* update unit_test.yml and ut.sh

* ci: update unit_test.yml

* ci: update unit_test.yml

* ci: update unit_test.yml

* ci: update unit_test.yml

* ci: update unit_test.yml

* ci: update unit_test.yml

* build(deps): update 2 files and delete 1 file

* ci: update devcontainer.yml

* ci: update devcontainer.yml

* ci: update devcontainer.yml

* ci: update devcontainer.yml

* ci: update devcontainer.yml

* back to main

* Update devcontainer.yml and unit_test.yml workflows

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Yinzhanghao Zhou <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
3 people authored May 6, 2024
1 parent df2917b commit 6e97dd9
Showing 58 changed files with 7,363 additions and 316 deletions.
12 changes: 3 additions & 9 deletions .github/workflows/devcontainer.yml
@@ -6,12 +6,6 @@ on:
- 'docs/**'
branches:
- main
pull_request:
branches:
- main
paths:
- '.github/workflows/devcontainer.yml'
- '.github/workflows/unit_test.yml'

jobs:
build_container_and_push:
@@ -28,14 +22,14 @@ jobs:
uses: docker/setup-buildx-action@v3

- name: Login to GitHub Container Registry
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}

- name: Login to Aliyun Registry
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
registry: registry.dp.tech
username: ${{ secrets.DP_HARBOR_USERNAME }}
@@ -50,4 +44,4 @@ jobs:
file: Dockerfile.${{ matrix.dockerfile }}
cache-from: type=registry,ref=ghcr.io/deepmodeling/deeptb-${{ matrix.dockerfile }}:latest
cache-to: type=inline
push: true
push: true
4 changes: 2 additions & 2 deletions .github/workflows/publish-to-pypi.yml
@@ -8,9 +8,9 @@ jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
uses: actions/setup-python@v5
with:
python-version: '3.x'
- name: Install dependencies
4 changes: 2 additions & 2 deletions .github/workflows/publish-to-testpypi.yml
@@ -10,10 +10,10 @@ jobs:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4

- name: Set up Python
uses: actions/setup-python@v4
uses: actions/setup-python@v5
with:
python-version: '3.x'

4 changes: 3 additions & 1 deletion .github/workflows/unit_test.yml
@@ -4,7 +4,6 @@ on:
pull_request:
paths-ignore:
- 'docs/**'
- '.github/workflows/devcontainer.yml'

jobs:
build:
@@ -21,6 +20,9 @@ jobs:
with:
fetch-depth: 0
ref: "refs/pull/${{ github.event.number }}/merge"
- name: Add safe directory
run: |
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Install DeePTB & Run Test
id: s2
run: |
16 changes: 12 additions & 4 deletions docs/advanced/dftb.md
@@ -3,7 +3,9 @@
This example demonstrates how to build a DeePTB model from DFTB SK files. The DFTB SK files are generated by the DFTB+ code.


## Step 1: Download DFTB SK Files
## Example: train hBN model with DFTB SK files

### Step 1: Download DFTB SK Files

You can download the skfiles from the [dftb.org](https://dftb.org/parameters/download) website. Here we provide some sk files in the folder `examples/hBN_dftb/slakos`.

Expand All @@ -20,7 +22,9 @@ examples/hBN_dftb/slakos
└── Si-Si.skf
```

## Step 2: Load DFTB skfiles to in DeePTB to plot band structure:

### Step 2: Load DFTB skfiles in DeePTB to plot the band structure


```bash
cd examples/hBN_dftb
@@ -77,7 +81,9 @@ bcal.band_plot(ref_band = kpath_kwargs["ref_band"],
<img src="../img/hbn_dftb.png" alt="示例图片" width="500"/>


## Step 3: Use SK params from DFTB skfiles to train DeePTB nnsk model

### Step 3: Use SK params from DFTB skfiles to train DeePTB nnsk model



Use the following input.json:
@@ -239,7 +245,9 @@ bcal.band_plot(ref_band = kpath_kwargs["ref_band"],
<img src="../img/hbn_nnsk_dftb.png" alt="示例图片" width="500"/>


## Step4: load the previous trained model to further train the model use eigenvalues

### Step 4: load the previously trained model and further train it using eigenvalues


The input json looks like:

5 changes: 3 additions & 2 deletions dptb/__init__.py
@@ -1,2 +1,3 @@
from dptb.version import get_version as _get_version
__version__ = _get_version()
import importlib.metadata

__version__ = importlib.metadata.version("dptb")
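The two lines above make the installed distribution's metadata the single source of the version string. A minimal sketch of the same pattern with a fallback for uninstalled source trees (the helper name and the "unknown" fallback are assumptions, not dptb's code):

```python
import importlib.metadata

def get_version(package: str = "dptb") -> str:
    # importlib.metadata reads the version recorded at install time,
    # so pyproject.toml stays the single source of truth.
    try:
        return importlib.metadata.version(package)
    except importlib.metadata.PackageNotFoundError:
        # Running from a source checkout that was never pip-installed.
        return "unknown"
```

With this fallback, importing from a plain source checkout returns a placeholder instead of raising at import time.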
10 changes: 4 additions & 6 deletions dptb/data/__init__.py
@@ -14,14 +14,13 @@
AtomicInMemoryDataset,
NpzDataset,
ASEDataset,
HDF5Dataset,
ABACUSDataset,
ABACUSInMemoryDataset,
DefaultDataset
)
from .dataloader import DataLoader, Collater, PartialSampler
from .build import dataset_from_config
from .test_data import EMTTestDataset
from .build import build_dataset
from .transforms import OrbitalMapper

__all__ = [
AtomicData,
@@ -33,17 +32,16 @@
AtomicInMemoryDataset,
NpzDataset,
ASEDataset,
HDF5Dataset,
ABACUSDataset,
ABACUSInMemoryDataset,
DefaultDataset,
DataLoader,
Collater,
PartialSampler,
dataset_from_config,
OrbitalMapper,
build_dataset,
_NODE_FIELDS,
_EDGE_FIELDS,
_GRAPH_FIELDS,
_LONG_FIELDS,
EMTTestDataset,
]
28 changes: 14 additions & 14 deletions dptb/data/dataset/_default_dataset.py
@@ -65,22 +65,11 @@ def __init__(self,
else:
raise ValueError("Wrong cell dimensions.")

# load atomic numbers
atomic_numbers = np.loadtxt(os.path.join(root, "atomic_numbers.dat"))
natoms = self.info["natoms"]
if natoms < 0:
natoms = atomic_numbers.shape[-1]
if atomic_numbers.shape[0] == self.info["natoms"]:
# same atomic_numbers, copy it to all frames.
atomic_numbers = np.expand_dims(atomic_numbers, axis=0)
self.data["atomic_numbers"] = np.broadcast_to(atomic_numbers, (self.info["nframes"], natoms))
elif atomic_numbers.shape[0] == natoms * self.info["nframes"]:
self.data["atomic_numbers"] = atomic_numbers.reshape(self.info["nframes"],natoms)
else:
raise ValueError("Wrong atomic_number dimensions.")

# load positions, stored as cartesion no matter what provided.
pos = np.loadtxt(os.path.join(root, "positions.dat"))
natoms = self.info["natoms"]
if natoms < 0:
natoms = int(pos.shape[0] / self.info["nframes"])
assert pos.shape[0] == self.info["nframes"] * natoms
pos = pos.reshape(self.info["nframes"], natoms, 3)
# ase use cartesian by default.
@@ -91,6 +80,17 @@ def __init__(self,
else:
raise NameError("Position type must be cart / frac.")

# load atomic numbers
atomic_numbers = np.loadtxt(os.path.join(root, "atomic_numbers.dat"))
if atomic_numbers.shape[0] == natoms:
# same atomic_numbers, copy it to all frames.
atomic_numbers = np.expand_dims(atomic_numbers, axis=0)
self.data["atomic_numbers"] = np.broadcast_to(atomic_numbers, (self.info["nframes"], natoms))
elif atomic_numbers.shape[0] == natoms * self.info["nframes"]:
self.data["atomic_numbers"] = atomic_numbers.reshape(self.info["nframes"],natoms)
else:
raise ValueError("Wrong atomic_number dimensions.")

# load optional data files
if get_eigenvalues == True:
if os.path.exists(os.path.join(self.root, "eigenvalues.npy")):
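
The reordered hunk above infers `natoms` from the positions file first, then broadcasts a single row of atomic numbers across all frames. A standalone sketch of that broadcast step (the shapes and the h-BN species numbers are illustrative assumptions):

```python
import numpy as np

def broadcast_atomic_numbers(atomic_numbers: np.ndarray, nframes: int, natoms: int) -> np.ndarray:
    """Return a (nframes, natoms) array of atomic numbers.

    Mirrors the _default_dataset.py logic: a single species row is broadcast
    to every frame; a flat per-frame list is reshaped instead.
    """
    if atomic_numbers.shape[0] == natoms:
        # One species list shared by all frames: broadcast, no copying needed.
        return np.broadcast_to(atomic_numbers[None, :], (nframes, natoms))
    elif atomic_numbers.shape[0] == natoms * nframes:
        # Per-frame species stored flat: reshape to (nframes, natoms).
        return atomic_numbers.reshape(nframes, natoms)
    raise ValueError("Wrong atomic_number dimensions.")

# h-BN example (B=5, N=7), 3 frames of 2 atoms each:
frames = broadcast_atomic_numbers(np.array([5, 7]), nframes=3, natoms=2)
print(frames.shape)  # (3, 2)
```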
