
3D renderer: Volume not shown if larger than graphics card memory #48

Open
codeling opened this issue Dec 5, 2019 · 1 comment
codeling commented Dec 5, 2019

When using the OpenGL2 backend in VTK, volumes larger than the graphics card memory are not rendered when the GPU renderer is selected; the vtkSmartVolumeMapper does not handle such volumes automatically.

Current workaround: use the CPU renderer (by switching to "RayCastRenderMode" in the renderer settings), as sketched below.
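
For reference, a minimal sketch of this workaround using VTK's Python bindings (the synthetic wavelet source stands in for a real dataset; transfer functions and the render window are omitted):

```python
import vtk

# Synthetic test volume; stands in for the actual large dataset.
source = vtk.vtkRTAnalyticSource()
source.Update()

mapper = vtk.vtkSmartVolumeMapper()
mapper.SetInputConnection(source.GetOutputPort())
# Force the CPU ray cast path instead of the GPU mapper; this is what
# the "RayCastRenderMode" setting selects.
mapper.SetRequestedRenderModeToRayCast()

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
```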

There are other ways to render such volumes, e.g. using partitions or the OSPRay renderer (see this discussion on using vtkSmartVolumeMapper for large volumes), or using vtkMultiBlockVolumeMapper as mentioned in this discussion.

The goal of this issue is to experiment with the different options and find out which is best suited for our purpose. In the end, an automatic mechanism should choose the most suitable rendering mode, depending on volume size and available GPU memory.
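
A rough sketch of what such an automatism could look like. Note that `gpu_memory_budget_mib` is a hypothetical, caller-supplied value (e.g. from a user setting), since VTK has no portable query for free GPU memory:

```python
import vtk

def choose_mapper(image, gpu_memory_budget_mib):
    """Pick GPU or CPU ray casting based on dataset size.

    gpu_memory_budget_mib is an assumed, caller-supplied limit;
    VTK cannot portably report free GPU memory itself.
    """
    size_mib = image.GetActualMemorySize() / 1024.0  # value is in KiB
    mapper = vtk.vtkSmartVolumeMapper()
    mapper.SetInputData(image)
    if size_mib < gpu_memory_budget_mib:
        mapper.SetRequestedRenderModeToGPU()      # fits: use the GPU path
    else:
        mapper.SetRequestedRenderModeToRayCast()  # too large: CPU fallback
    return mapper
```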

codeling commented Jul 4, 2024

Update: Using SetPartitions on vtkOpenGLGPUVolumeRayCastMapper is not a good idea; at least currently it is very slow in our tests, as also described here. Based on the numbers reported there, vtkMultiBlockVolumeMapper seems like a viable solution.
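
For completeness, a sketch of how SetPartitions is used (the 2×2×2 split is an arbitrary example, not a recommendation):

```python
import vtk

source = vtk.vtkRTAnalyticSource()  # stand-in for the real dataset
source.Update()

mapper = vtk.vtkOpenGLGPUVolumeRayCastMapper()
mapper.SetInputConnection(source.GetOutputPort())
# Stream the volume to the GPU as 2x2x2 = 8 bricks; peak GPU memory
# drops to roughly one brick, but rendering slowed down considerably
# in our tests.
mapper.SetPartitions(2, 2, 2)
```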

Another possibility might be "partitioned datasets" via vtkPartitionedDataSet, as described in this ParaView forum post; at this point it is unclear how these can be used, and whether they are more performant than vtkMultiBlockVolumeMapper.

In general, it seems that for both of these solutions (vtkMultiBlockVolumeMapper and vtkPartitionedDataSet) the dataset needs to be split up "manually". This either requires splitting the volume in memory (which duplicates memory consumption) or loading pre-split chunks of the full dataset as individual volume datasets from the start; the second option in particular would require a larger change to our dataset loading/representation. A sketch of the first option is shown below.
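
A sketch of the first option (splitting in memory), assuming a z-axis split into vtkImageData blocks fed to vtkMultiBlockVolumeMapper; the one-voxel overlap at block seams is an assumption that would need to be verified against the mapper's requirements:

```python
import vtk

def split_into_blocks(image, n_blocks=4):
    """Split a vtkImageData along z into a vtkMultiBlockDataSet.

    This copies the sub-volumes, i.e. memory consumption roughly
    doubles while the full volume and the blocks are both alive.
    Adjacent blocks share one voxel slice at the seam (an assumption;
    whether the mapper needs this overlap should be verified).
    """
    x0, x1, y0, y1, z0, z1 = image.GetExtent()
    mb = vtk.vtkMultiBlockDataSet()
    mb.SetNumberOfBlocks(n_blocks)
    step = max(1, (z1 - z0) // n_blocks)
    for i in range(n_blocks):
        za = z0 + i * step
        zb = z1 if i == n_blocks - 1 else min(z1, za + step)
        voi = vtk.vtkExtractVOI()
        voi.SetInputData(image)
        voi.SetVOI(x0, x1, y0, y1, za, zb)
        voi.Update()
        block = vtk.vtkImageData()
        block.ShallowCopy(voi.GetOutput())
        mb.SetBlock(i, block)
    return mb

source = vtk.vtkRTAnalyticSource()  # stand-in for the real dataset
source.Update()

mapper = vtk.vtkMultiBlockVolumeMapper()
mapper.SetInputDataObject(split_into_blocks(source.GetOutput()))
```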
