Review H5Easy "extend/part" API. #1018
Comments
Hello @1uc, I was about to ask a related question when I noticed your post here. I am a user with exactly the use case you're mentioning: I would like to write data to a dataset element by element in a loop. Right now, doing so with the provided […]

To elaborate a bit more on the use case: the data is stored in a custom container and is not contiguous in memory. The size of the dataset is, however, known at the time I am dumping it to a file. What is the recommended way of writing a dataset element by element? At the moment, I am trying to implement a solution using the lower-level API and avoid resizing the dataset on every write.

Thanks,
I would try copying the discontiguous data into a contiguous buffer and writing from there. This buffer could either be the full size or something sufficiently large (candidates are 4 kB, 1 MB, 4 MB). I suspect sorting the data by index while creating the buffer will pay off. If the elements are contiguous in the file, then that should be sufficient. If they are not, HDF5 supports a selection mechanism. Selections fall into two groups: hyperslabs for somewhat structured selections, and selection by index for unstructured cases. HighFive supports a couple of options; a sketch follows below.
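A minimal sketch of the two selection styles in HighFive, assuming an existing one-dimensional dataset named "values" in data.h5 (file name, dataset name, and sizes are placeholders, not part of the original discussion):

```cpp
#include <highfive/H5File.hpp>
#include <vector>

int main() {
    HighFive::File file("data.h5", HighFive::File::ReadWrite);
    auto dset = file.getDataSet("values");

    // Hyperslab-style selection: write 100 contiguous elements starting at offset 0.
    std::vector<double> buffer(100, 1.0);
    dset.select({0}, {100}).write(buffer);

    // Element-wise selection: write to scattered indices of the dataset in the file.
    std::vector<double> values = {3.14, 2.71, 1.41};
    dset.select(HighFive::ElementSet{3, 7, 11}).write(values);
}
```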
The selection would be used to pick the elements of the dataset in the file (not in memory). If you want to select in memory (because you don't want to copy), then all I can say is that HDF5 supports this, but HighFive doesn't. One can use […]

Please note that today is the last day any HighFive devs can expect to have write access to this repository. I'd like to keep HighFive alive past the end of the Blue Brain Project and intend to maintain it at https://github.com/highfive-devs/highfive. In the event that this repository is made read-only, please feel free to continue the discussion there.
Hello @1uc, thank you for the quick and very detailed reply! Just to clarify your second point a little bit, since this is exactly what I would like to do, I think. Right now, I tried something very simple with […]
I understand that this line: […]

Sorry again for the question, I struggled a bit with understanding the API. Thanks!
I think you're asking about how to change the callback so it can write more than one double. I'm guessing […]

You could use an […] or a pointer:

[…]

Here […]
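The original snippets were not preserved here, so the following is only a minimal sketch of what the pointer-based variant could look like, assuming a pre-created 1D dataset "values" and a contiguous C array (names, offsets, and sizes are placeholders):

```cpp
#include <highfive/H5File.hpp>

int main() {
    HighFive::File file("data.h5", HighFive::File::ReadWrite);
    auto dset = file.getDataSet("values");

    // A contiguous block of doubles, e.g. filled by the callback.
    double block[4] = {1.0, 2.0, 3.0, 4.0};

    // Write all four doubles starting at offset 8, directly from the raw pointer.
    dset.select({8}, {4}).write_raw(block);
}
```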
In H5Easy there's an API for reading and writing one element at a time:
(see HighFive/include/highfive/h5easy_bits/H5Easy_scalar.hpp, lines 66 to 70 and lines 120 to 122 at 5f3ded6)
It does this by creating a dataset that can be extended in all directions, and automatically grows it if the index of the element being written requires it to (which negates our ability to spot off-by-one programming errors).
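For reference, a minimal sketch of that per-element H5Easy usage (file and dataset names are placeholders; exact overloads should be checked against the HighFive version in use):

```cpp
#include <highfive/H5Easy.hpp>

int main() {
    H5Easy::File file("data.h5", H5Easy::File::Overwrite);

    // Write a single element at index {5}; the dataset is created as
    // extendible and grown automatically if the index is out of bounds.
    H5Easy::dump(file, "/path/to/values", 3.14, {5});

    // Read a single element back by index.
    double x = H5Easy::load<double>(file, "/path/to/values", {5});
    (void)x;
}
```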
The API for reading/writing one element at a time feels like it would tempt users into writing files that way in a loop, which is a rather serious issue on common HPC hardware (and not great on consumer hardware either).
To enable this API it must make a default choice for the chunk size, currently 10^n. That seems very small and is at risk of creating files that can't be read efficiently; picking it reasonably large might inflate the size of the file by a factor of 100 or more.

I think it might be fine to allow users to read and write single elements of an existing dataset, i.e. without the automatically growing aspect, plus a warning in the documentation not to use it in a loop. In core we support various selection APIs that are reasonably compact: lists of arbitrary points, regular hyperslabs (and general ones too), and there's a proposal to allow Cartesian products of simple selections along each axis.
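A minimal sketch of what the non-growing, single-element variant could look like with the core API, assuming an existing fixed-size dataset "values" (names and indices are placeholders, not a committed design):

```cpp
#include <highfive/H5File.hpp>
#include <vector>

int main() {
    HighFive::File file("data.h5", HighFive::File::ReadWrite);
    auto dset = file.getDataSet("values");

    // Write a single element at index 5 of the existing dataset; an
    // out-of-bounds index fails instead of silently growing the dataset.
    dset.select({5}, {1}).write(std::vector<double>{2.5});

    // Read the element back.
    std::vector<double> out;
    dset.select({5}, {1}).read(out);
}
```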