Proof of concept: Use Shader Graph for point cloud rendering #265
The point cloud rendering in #218 is awesome, but unfortunately it will only work with the Universal Render Pipeline. That's especially unfortunate because we've added support for the built-in pipeline in this release. I was curious what stopped us from implementing point cloud rendering using Shader Graph, which should make it much easier (perhaps even automatic) to support all the render pipelines. So this draft PR is a proof of concept of doing that, and I don't see any major barriers to it. Short version: it works.
URP:
HDRP:
It uses a custom function node to read from the structured buffer, so there's no increase in memory usage. In fact, it should be pretty easy to use this same approach in the non-attenuated case, too, so we'd only need one copy of the points.
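The custom function node's HLSL could look roughly like this. This is a sketch only; the buffer and function names (`_inPoints`, `CesiumPointCloud_float`) and the vertices-per-point count are assumptions, not the PR's actual code:

```hlsl
// Hypothetical Custom Function node body for Shader Graph.
// The structured buffer is the same one the existing path already uses,
// so the points aren't duplicated in memory.
StructuredBuffer<float3> _inPoints;

// Shader Graph's Custom Function convention: the _float suffix selects
// the float-precision variant of the node.
void CesiumPointCloud_float(float vertexID, out float3 positionOS)
{
    // Several vertices are emitted per point (four per quad in this
    // sketch); derive the point index from the vertex ID.
    uint pointIndex = (uint)vertexID / 4u;
    positionOS = _inPoints[pointIndex];
}
```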
Because the output vertex position in Shader Graph is always in "object space", we need to transform the clip space position back to object space, only so Unity can go the other way again. I think this is the biggest downside to this approach. But GPUs basically do `matrix * vector` multiplications in their sleep, right? I haven't measured performance to be sure, but I think this is a worthwhile tradeoff for the compatibility that Shader Graph gives us. It should also be possible to do nifty stuff like drape raster overlays over point clouds.
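Concretely, the round trip amounts to a couple of inverse-matrix multiplies per vertex, something like the sketch below. The clip-position input is a placeholder; `UNITY_MATRIX_I_VP` and `UNITY_MATRIX_I_M` are the SRP-provided inverse view-projection and inverse model matrices:

```hlsl
// Sketch: we know the clip-space position we want, but Shader Graph
// only lets us output an object-space position, so undo Unity's
// transforms here and let the pipeline redo them.
float4 positionCS = /* the point's desired clip-space position */;

// Clip -> world (still homogeneous), then world -> object.
float4 positionWS  = mul(UNITY_MATRIX_I_VP, positionCS);
float4 positionOSH = mul(UNITY_MATRIX_I_M, positionWS);

// Perspective divide. Unity's MVP then maps this back to a scalar
// multiple of the original clip position, which is the same point
// in homogeneous coordinates.
float3 positionOS = positionOSH.xyz / positionOSH.w;
```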
I'm currently using an "empty" `Mesh` to drive the rendering. It doesn't have any vertex data, but it does have index data because meshes require it. So this is pretty inefficient, but using a `Mesh` rather than `DrawProcedural` makes Unity set the model matrix and maybe other uniforms in the normal way that Shader Graph expects. We can definitely make this more efficient; I was just being lazy.

Stuff I haven't figured out (hopefully no deal killers here, but who knows?):
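For reference, building that empty, index-only `Mesh` could look something like the sketch below. The counts and names are illustrative, not the PR's actual code; `MeshUpdateFlags.DontValidateIndices` is there because the indices don't reference any real vertices:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

static class PointCloudMeshUtil
{
    // Hypothetical sketch of the "empty" driver mesh: no vertex
    // attributes, just enough indices to invoke the vertex shader once
    // per point-quad vertex. The shader derives everything it needs
    // from the vertex ID.
    public static Mesh CreateDriverMesh(int pointCount)
    {
        int indexCount = pointCount * 6; // two triangles per point quad

        var mesh = new Mesh();
        mesh.SetIndexBufferParams(indexCount, IndexFormat.UInt32);

        var indices = new int[indexCount];
        for (int i = 0; i < indexCount; ++i)
            indices[i] = i;

        // Skip validation: there are no vertices for these indices
        // to reference.
        mesh.SetIndexBufferData(indices, 0, 0, indexCount,
            MeshUpdateFlags.DontValidateIndices);
        mesh.subMeshCount = 1;
        mesh.SetSubMesh(0,
            new SubMeshDescriptor(0, indexCount, MeshTopology.Triangles),
            MeshUpdateFlags.DontRecalculateBounds);

        // With no vertices, Unity can't compute bounds, so set generous
        // ones explicitly to keep the mesh from being frustum-culled.
        mesh.bounds = new Bounds(Vector3.zero, new Vector3(1e9f, 1e9f, 1e9f));
        return mesh;
    }
}
```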
@j9liu I'm curious what you think. I guess the first question is whether I've missed any major deal killers on this such that it's not as practical as it appears. And the second question (assuming the answer to the first is positive) is whether we want to a) ship the current implementation in this release, and perhaps switch to Shader Graph next release, b) hold shipping attenuation until we sort this out, or c) try to wrap up this approach this week and get it into this release.