
Merge pull request #73 from HDFGroup/phoenix
registered docs update
loricooperhdf authored Dec 4, 2023
2 parents 5579b50 + db9fbc1 commit dbb8aa9
Showing 4 changed files with 17 additions and 17 deletions.
Binary file added documentation/hdf5-docs/bz_example.tar.gz
Binary file not shown.
8 changes: 4 additions & 4 deletions documentation/hdf5-docs/registered_filter_plugins.md
@@ -55,17 +55,17 @@ Please contact the maintainer of a filter for help with the filter/compression s

Please be aware that compression filters require that the library not use `H5_MEMORY_ALLOC_SANITY_CHECK`. Building in debug mode automatically enables this feature in earlier releases, which causes memory allocation and free problems in filter applications. Future versions of HDF5 will not enable this feature.

- The `bz_example.tar.gz` file contains an example of implementing the BZIP2 filter to enable BZIP2 compression in HDF5. (This example is based on PyTables code that uses BZIP2 compression.). Download and uncompress this file as follows:
+ The [`bz_example.tar.gz`](/documentation/hdf5-docs/bz_example.tar.gz) file contains an example of implementing the BZIP2 filter to enable BZIP2 compression in HDF5. (This example is based on PyTables code that uses BZIP2 compression.) Download and uncompress this file as follows:

- `gzip -cd bz_example.tar.gz | tar xvf -`
+ gzip -cd bz_example.tar.gz | tar xvf -

To compile the example, you will need to install the HDF5 library and use the h5cc compile script found in the bin/ directory of the HDF5 installation.

- For information on h5cc, see: Compiling Your HDF5 Application
+ For information on h5cc, see [Compiling Your HDF5 Application](https://docs.hdfgroup.org/hdf5/develop/_l_b_compiling.html).

Please note that tools like h5dump that display information in an HDF5 file will not be able to display data that is compressed with BZIP2 compression, since BZIP2 is not implemented in HDF5.

- However, as of HDF5-1.8.11, a new HDF5 feature will enable the h5dump tool to determine that the data is compressed with an external compression filter such as BZIP2, and will automatically load the appropriate library and display the uncompressed data.
+ However, as of HDF5-1.8.11, a new HDF5 feature enables the `h5dump` tool to determine that the data is compressed with an external compression filter such as BZIP2, and to automatically load the appropriate library and display the uncompressed data.

The bz_example example code can be used for modifying the HDF5 source to "include" BZIP2 as one of the "internal" filters. For information on how to do this, see how ZLIB (the deflate filter) is implemented in the HDF5 source code. Specifically look at these files:

…
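Separately from modifying the library source, an application can simply apply the registered filter through the dataset-creation property list and let the plugin mechanism load it. Below is a minimal C sketch, assuming the compiled BZIP2 plugin is in a directory on `HDF5_PLUGIN_PATH`; the file and dataset names are illustrative, and the `cd_values[0]` block-size parameter follows the bz_example convention:

```c
/* Minimal sketch: create a chunked dataset compressed with the registered
 * BZIP2 filter (ID 307). Compile with the h5cc script from the bin/
 * directory of the HDF5 installation: h5cc -o bz_write bz_write.c */
#include "hdf5.h"
#include <stdio.h>

#define H5Z_FILTER_BZIP2 307 /* ID from the registered filter list */

int main(void)
{
    hsize_t  dims[2]      = {100, 100};
    hsize_t  chunk[2]     = {10, 10};
    unsigned cd_values[1] = {9};   /* assumed: BZIP2 block size (1-9) */

    hid_t file  = H5Fcreate("bz_test.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(2, dims, NULL);
    hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);

    H5Pset_chunk(dcpl, 2, chunk);  /* filters apply only to chunked layout */
    if (H5Pset_filter(dcpl, H5Z_FILTER_BZIP2, H5Z_FLAG_MANDATORY, 1, cd_values) < 0) {
        fprintf(stderr, "failed to set the BZIP2 filter\n");
        return 1;
    }

    hid_t dset = H5Dcreate2(file, "compressed_data", H5T_NATIVE_INT, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}
```

Because the filter ID is recorded in the dataset's metadata, a later `h5dump` (1.8.11 or newer) can locate and load the same plugin automatically, as noted above.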
@@ -14,8 +14,8 @@ Please contact the maintainer of a VFD for help implementing the plugin.

| Driver | Driver Identifier| Search Name* | Short Description | URL | Contacts|
| --- | --- | --- | --- | --- | --- |
- | CUDA GPU | 512 | gds | The HDF5 GPUDirect Storage VFD is a Virtual File Driver (VFD) for HDF5 that can be used to interface with Nvidia's GPUDirect Storage (GDS) API. The driver is built as a plugin library that is external to HDF5. | https://github.com/hpc-io/vfd-gds | Suren Byna (sbyna at lbl dot gov)|
- | GDAL vsil | 513 | vsil | The HDF5 GDAL vsil Storage VFD is a Virtual File Driver (VFD) for the GDAL HDF5 driver that can be used to access any file supported by the GDAL Virtual File System Interface (https://gdal.org/user/virtual_file_systems.html). | https://github.com/OSGeo/gdal/blob/master/frmts/hdf5/hdf5vfl.h | Even Rouault (even dot rouault at spatialys dot com)|
- | Unidata/UCAR NetCDF-C ByteRange | 514 | byte-range | The Unidata H5FDhttp.[ch] VFD driver is used to support accessing remote files using the HTTP byte range mechanism. It is part of the Unidata Netcdf-C library. | https://github.com/Unidata/netcdf-c/blob/main/libhdf5/H5FDhttp.c | Dennis Heimbigner (dmh at ucar.edu) |
+ | CUDA GPU | 512 | gds | The HDF5 GPUDirect Storage VFD is a Virtual File Driver (VFD) for HDF5 that can be used to interface with Nvidia's GPUDirect Storage (GDS) API. The driver is built as a plugin library that is external to HDF5. | [https://github.com/hpc-io/vfd-gds](https://github.com/hpc-io/vfd-gds) | Suren Byna (sbyna at lbl dot gov)|
+ | GDAL vsil | 513 | vsil | The HDF5 GDAL vsil Storage VFD is a Virtual File Driver (VFD) for the GDAL HDF5 driver that can be used to access any file supported by the GDAL Virtual File System Interface ([https://gdal.org/user/virtual_file_systems.html](https://gdal.org/user/virtual_file_systems.html)). | [https://github.com/OSGeo/gdal/blob/master/frmts/hdf5/hdf5vfl.h](https://github.com/OSGeo/gdal/blob/master/frmts/hdf5/hdf5vfl.h) | Even Rouault (even dot rouault at spatialys dot com)|
+ | Unidata/UCAR NetCDF-C ByteRange | 514 | byte-range | The Unidata H5FDhttp.[ch] VFD driver is used to support accessing remote files using the HTTP byte range mechanism. It is part of the Unidata NetCDF-C library. | [https://github.com/Unidata/netcdf-c/blob/main/libhdf5/H5FDhttp.c](https://github.com/Unidata/netcdf-c/blob/main/libhdf5/H5FDhttp.c) | Dennis Heimbigner (dmh at ucar.edu) |

*The Search Name provides a mechanism for searching for a VFD.
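As an illustration of how the Search Name is used in practice, the sketch below loads a VFD plugin by name with `H5Pset_driver_by_name`, available in HDF5 1.14 and later. The choice of the GDS driver and the file name are assumptions for the example; the plugin library must be discoverable via `HDF5_PLUGIN_PATH`:

```c
/* Minimal sketch: open a file through a registered VFD plugin, located
 * by its Search Name ("gds" in the table above). Requires HDF5 1.14+. */
#include "hdf5.h"

int main(void)
{
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

    /* Look up, load, and select the driver plugin by its search name */
    if (H5Pset_driver_by_name(fapl, "gds", NULL) < 0) {
        H5Pclose(fapl);
        return 1;                /* plugin not found or failed to load */
    }

    hid_t file = H5Fopen("data.h5", H5F_ACC_RDONLY, fapl); /* assumed file name */
    if (file >= 0)
        H5Fclose(file);
    H5Pclose(fapl);
    return 0;
}
```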
20 changes: 10 additions & 10 deletions documentation/hdf5-docs/registered_vol_connectors.md
@@ -13,23 +13,23 @@ Please contact the maintainer of a VOL connector for help implementing the plugin.
## List of VOL Connectors Registered with The HDF Group
| Connector | Connector Identifier | Search Name* | Short Description | URL | Contacts
| --- | --- | --- | --- | --- | ---|
- | Asynchronous I/O | 512 | async | Provides support for asynchronous operations to HDF5| https://github.com/hpc-io/vol-async | Suren Byna (sbyna at lbl dot gov)|
- | Cache | 513 | cache | Provides support for multi-level, multi-location data caching to dataset I/O operations | https://github.com/hpc-io/vol-cache | Suren Byna (sbyna at lbl dot gov) |
- | Log-based | 514| LOG | The log-based VOL plugin stores HDF5 datasets in a log-based storage layout.<br>In this layout, data of multiple write requests made by an MPI process are appended one after another in the file. Such I/O strategy can avoid the expensive inter-process communication and I/O serialization due to file lock contentions when storing data in the canonical order. Through the log-based VOL, existing HDF5 programs can achieve a better parallel write performance with minimal changes to their codes. | https://github.com/DataLib-ECP/vol-log-based/blob/master/README.md | Kai Yuan Hou <br> (khl7265 at ece dot northwestern dot edu) |
+ | Asynchronous I/O | 512 | async | Provides support for asynchronous operations to HDF5 | [https://github.com/hpc-io/vol-async](https://github.com/hpc-io/vol-async) | Suren Byna (sbyna at lbl dot gov) |
+ | Cache | 513 | cache | Provides support for multi-level, multi-location data caching to dataset I/O operations | [https://github.com/hpc-io/vol-cache](https://github.com/hpc-io/vol-cache) | Suren Byna (sbyna at lbl dot gov) |
+ | Log-based | 514 | LOG | The log-based VOL plugin stores HDF5 datasets in a log-based storage layout.<br>In this layout, the data of multiple write requests made by an MPI process are appended one after another in the file. Such an I/O strategy avoids the expensive inter-process communication and I/O serialization caused by file-lock contention when storing data in the canonical order. Through the log-based VOL, existing HDF5 programs can achieve better parallel write performance with minimal changes to their code. | [https://github.com/DataLib-ECP/vol-log-based/blob/master/README.md](https://github.com/DataLib-ECP/vol-log-based/blob/master/README.md) | Kai Yuan Hou <br> (khl7265 at ece dot northwestern dot edu) |
| DAOS | 4004 | daos | Designed to utilize the DAOS object storage system by use of the DAOS API <br> https://doi.org/10.1109/TPDS.2021.3097884 | https://github.com/HDFGroup/vol-daos <br> [HDF5 DAOS VOL Connector Design](https://github.com/HDFGroup/vol-daos/blob/master/docs/design_doc.pdf) <br> [HDF5 DAOS VOL Connector User's Guide](https://github.com/HDFGroup/vol-daos/blob/master/docs/users_guide.pdf) | help at hdfgroup dot org |
- | native| 0 | native | | | help at hdfgroup dot org|
- | pass-through| 517 | pass_through_ext| Provides a simple example of a pass-through VOL connector | https://github.com/hpc-io/vol-external-passthrough | Suren Byna (sbyna at lbl dot gov) |
- | dset-split | 518 | dset-split | Creates separate sub files for each dataset created and mounts these sub-files as external links in the main file. It enables versioning of HDF5 files at a dataset boundary.| https://github.com/hpc-io/vol-dset-split | Annmary Justine (annmary dot roy at hpe dot com)|
- | PDC-VOL| 519| PDC-VOL | It is a terminal VOL that reads and writes HDF5 objects to the PDC system| <https://github.com/hpc-io/pdc> https://github.com/hpc-io/vol-pdc| Houjun Tang (htang4 at lbl dot gov)|
- | REST | 520| REST| Designed to utilize web-based storage systems by use of the HDF5 REST APIs | https://github.com/HDFGroup/vol-rest | Matthew Larson (mlarson at hdfgroup dot org)|
- | LowFive| 521 | LowFive | A new data transport layer based on the HDF5 data model, for in situ workflows. Executables using LowFive can communicate in situ (using in-memory data and MPI message passing), reading and writing traditional HDF5 files to physical storage, and combining the two modes.| https://github.com/diatomic/LowFive | Tom Peterka (tpeterka at mcs dot anl dot gov) <br> Dmitriy Morozov (dmorozov at lbl dot gov) |
+ | native| 0 | native | | | help at hdfgroup dot org |
+ | pass-through| 517 | pass_through_ext| Provides a simple example of a pass-through VOL connector | [https://github.com/hpc-io/vol-external-passthrough](https://github.com/hpc-io/vol-external-passthrough) | Suren Byna (sbyna at lbl dot gov) |
+ | dset-split | 518 | dset-split | Creates separate sub files for each dataset created and mounts these sub-files as external links in the main file. It enables versioning of HDF5 files at a dataset boundary. | [https://github.com/hpc-io/vol-dset-split](https://github.com/hpc-io/vol-dset-split) | Annmary Justine (annmary dot roy at hpe dot com) |
+ | PDC-VOL| 519| PDC-VOL | It is a terminal VOL that reads and writes HDF5 objects to the PDC system | [https://github.com/hpc-io/pdc](https://github.com/hpc-io/pdc) [https://github.com/hpc-io/vol-pdc](https://github.com/hpc-io/vol-pdc) | Houjun Tang (htang4 at lbl dot gov) |
+ | REST | 520| REST| Designed to utilize web-based storage systems by use of the HDF5 REST APIs | [https://github.com/HDFGroup/vol-rest](https://github.com/HDFGroup/vol-rest) | Matthew Larson (mlarson at hdfgroup dot org) |
+ | LowFive| 521 | LowFive | A new data transport layer based on the HDF5 data model, for in situ workflows. Executables using LowFive can communicate in situ (using in-memory data and MPI message passing), reading and writing traditional HDF5 files to physical storage, and combining the two modes. | [https://github.com/diatomic/LowFive](https://github.com/diatomic/LowFive) | Tom Peterka (tpeterka at mcs dot anl dot gov) <br> Dmitriy Morozov (dmorozov at lbl dot gov) |

*The Search Name provides a mechanism for searching for a VOL.
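The same pattern applies to VOL connectors: the Search Name can be passed to `H5VLregister_connector_by_name`, or set in the `HDF5_VOL_CONNECTOR` environment variable to avoid code changes. Below is a minimal sketch, assuming the pass-through connector from the table is installed on `HDF5_PLUGIN_PATH`; the file name is illustrative:

```c
/* Minimal sketch: load a registered VOL connector by its Search Name
 * ("pass_through_ext" in the table above) and route file I/O through it. */
#include "hdf5.h"

int main(void)
{
    /* Locate the connector plugin on HDF5_PLUGIN_PATH and register it */
    hid_t vol_id = H5VLregister_connector_by_name("pass_through_ext", H5P_DEFAULT);
    if (vol_id < 0)
        return 1;                   /* connector plugin not found */

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_vol(fapl, vol_id, NULL); /* select the connector on the fapl */

    hid_t file = H5Fcreate("vol_test.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
    if (file >= 0)
        H5Fclose(file);

    H5Pclose(fapl);
    H5VLclose(vol_id);
    return 0;
}
```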

## List of Prototype VOL Connectors

| Connector | Connector Identifier | Search Name* | Short Description| URL | Contacts |
| --- | --- | --- | --- | --- | --- |
- | rados| unassigned | rados| Prototype VOL connector to access data in RADOS | https://github.com/HDFGroup/vol-rados | help at hdfgroup dot org|
+ | rados | unassigned | rados | Prototype VOL connector to access data in RADOS | [https://github.com/HDFGroup/vol-rados](https://github.com/HDFGroup/vol-rados) | help at hdfgroup dot org |

*The Search Name provides a mechanism for searching for a VOL.
