#version 1.0
/* ==========================================
## dvr.diderot: basic direct volume rendering of a scalar field
Note that this example is heavily based on the [`mip`](../mip)
example, especially for the basic set-up of camera, rays, and strands;
some of the variables and code are better documented there.
This program needs a dataset `vol.nrrd` to render; [`fs3d-scl`](../fs3d) is one
way to generate this. Some examples:
* Sphere:
`../fs3d/fs3d-scl -which dparab -width 5 -sz0 30 -sz1 32 -sz2 35 | unu save -f nrrd -o vol.nrrd`
* XYZ axes indicator:
`../fs3d/fs3d-scl -which frame -width 3.5 -sz0 93 -sz1 94 -sz2 95 | unu crop -min 30 30 30 -max M M M -o vol.nrrd`
The program also uses a univariate colormap `cmap.nrrd` (not for any
important purpose, just to show how it could be used). This can be created
via (here we are desaturating the `isobow.nrrd` colormap):
   unu 3op lerp 0.5 ../cmap/isobow.nrrd 1 -o cmap.nrrd
The program can then be compiled with:
   diderotc --exec dvr.diderot
Because the program names a proxy volume (`vol.nrrd`), compilation
is specialized on the volume data sizes and sample type. Recompilation
is required to operate on new input data with different sizes or type.
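For example, switching datasets is a copy-then-recompile step (sketched here
with a hypothetical second volume `vol2.nrrd`; the loop further below does the
same thing):
   cp vol2.nrrd vol.nrrd
   diderotc --exec dvr.diderot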
Some example invocations, organized according to the dataset examples above
(still in progress):
* Sphere:
`./dvr -camEye 10 20 5 -camAt 0 0 0 -camUp 0 0 1 -camNear -2 -camFar 2 -camFOV 8 -iresU 400 -iresV 400 -thick 0.1 -refStep 0.1 -rayStep 0.01 -isoval 0.0 -cmin -1 -cmax 1`
* XYZ axes indicator:
`./dvr -camEye 3.1 4.0 2.1 -camAt 0.2 0.0 0.4 -camUp 0 0 1 -camNear -0.7 -camFar 0.5 -camFOV 20 -iresU 560 -iresV 480 -thick 0.015 -rayStep 0.005 -maxAlpha 0.95 -isoval 1 -mcnear 1.1 0.9 0.7 -mcfar 0.5 0.7 0.9`
All of these will generate an RGBA image saved as `rgba.nrrd`, which can be
viewed with another Teem utility (a peer of `unu`) called `overrgb`:
   overrgb -i rgba.nrrd -b 1 1 1 -g 1.2 -o out.png
For the sphere dataset (via `fs3d-scl -which dparab`), this should produce
[sphere.png](sphere.png). For the
XYZ axes indicator dataset (via `fs3d-scl -which frame`), this should produce
[axes.png](axes.png); make sure you get the same.
Volume rendering is one setting in which we can make sure Diderot is
doing all the right transformations for measuring derivatives:
derivatives measured in volume index-space have to be transformed to
get the derivative in world-space. The [`mip`](../mip) example tested
for this kind of invariance in reconstruction of values, but not
derivatives. The following generates three different samplings of a
rotationally symmetric paraboloid dataset, and then renders each of
them with the same parameters, including a little fog to see the shape
of the volume domain itself. **For this to work as intended**, one must
comment out `field#2(3)[] V = bspln3 ⊛ vol;` and uncomment
`field#1(3)[] V = ctmr ⊛ vol;`: the Catmull-Rom accurately
reconstructs quadratic functions, but the cubic B-spline does not.
   ../fs3d/fs3d-scl -which dparab -width 3.8 -sz0 20 -sz1 22 -sz2 25 | unu save -f nrrd -o sphere-0.nrrd
   ../fs3d/fs3d-scl -which dparab -width 3.8 -sz0 55 -sz1 29 -sz2 12 -angle -30 | unu save -f nrrd -o sphere-1.nrrd
   ../fs3d/fs3d-scl -which dparab -width 3.8 -sz0 30 -sz1 80 -sz2 19 -angle -30 -axis 1 -1 1 -shear 0.5 -0.6 0.9 | unu save -f nrrd -o sphere-2.nrrd
   for I in 0 1 2; do
      cp sphere-$I.nrrd vol.nrrd
      echo "compiling with sphere-$I.nrrd ..."
      diderotc --exec dvr.diderot
      echo "rendering with sphere-$I.nrrd ..."
      ./dvr -camEye 10 20 5 -camAt 0 0 0 -camUp 0 0 1 \
         -camNear -4 -camFar 4 -camFOV 12 -iresU 400 -iresV 300 \
         -phongKa 0 -phongKd 0.8 -phongKs 0.3 \
         -thick 0.03 -rayStep 0.01 \
         -isoval 0.0 -fog 0.5 0.5 0.5 0.012 -o rgba-$I.nrrd
      overrgb -i rgba-$I.nrrd -b 1 1 1 -g 1.2 -o sphere-$I.png
   done
   unu join -i sphere-?.png -a 0 -incr | unu slice -a 1 -p 0 -o sphere-compare.png
Looking at the individual `sphere-?.png`, one should see the same sphere
(with the same shading and the same specular highlight), surrounded by different
foggy regions. These show how `sphere-1.nrrd` is sampled on a rotated grid,
and how `sphere-2.nrrd` is sampled on a rotated and sheared grid. The
[sphere-compare.png](sphere-compare.png) image puts these three images into
the red, green, and blue channels, providing another way to confirm that
the sphere itself is the same in each. Try changing the reconstruction kernel
to `c1tent` to see the results of inaccurate reconstruction.
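For reference, the transformation being exercised here is standard multivariate
calculus, not anything specific to this program: if index-space position i maps
to world-space position x = M•i + t, where M is the 3×3 index-to-world matrix
(its columns are the inter-sample vectors, which the commands above rotate and
shear), then the chain rule says the world-space gradient is M^(-T) times the
index-space gradient. Identical shading across the three differently-sampled
grids is evidence that Diderot applies this factor correctly.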
Note that this program approximates the volume rendering integral by updating two
variables at each sample along the ray (going front to back):
   rgb += transp*sampleA*sampleRGB;
   transp *= 1 - sampleA;
where `rgb` is the (pre-multiplied) color of the ray so far, `transp`
is the transparency (1-opacity), and `sampleRGB` and `sampleA` are the
color and opacity of the current sample being composited. While such
an inner loop is common for volume rendering, one could ask why
Diderot couldn't have figured out how to transform a high-level
mathematical statement of the volume rendering integral into such an
implementation. This is a topic of current research.
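For reference, this is the usual front-to-back discretization of the
emission-absorption model (standard compositing math, nothing Diderot-specific):
with per-sample colors c_i and opacities a_i ordered front to back, the loop
incrementally computes
   rgb = sum_i (prod_{j<i} (1 - a_j)) * a_i * c_i
   transp = prod_i (1 - a_i)
so `transp` ends up as the transparency of the whole ray and `rgb` as its
pre-multiplied color.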
========================================== */
input image(3)[] vol ("volume dataset to render") = image("vol.nrrd");
/* see ../mip/mip.diderot for everything about camera set-up */
input vec3 camEye ("camera look-from point") = [6, 9, 2];
input vec3 camAt ("camera look-at point") = [0, 0, 0];
input vec3 camUp ("camera pseudo-up vector") = [0, 0, 1];
input real camNear ("relative to look-at point, distance to near clipping plane (where rays start from)") = -3;
input real camFar ("relative to look-at point, distance to far clipping plane") = 3;
input real camFOV ("field-of-view angle (in degrees) subtended vertically by view plane") = 15;
input bool camOrtho ("whether to use orthographic, instead of perspective, projection") = false;
input int iresU ("# samples on horizontal axis of image") = 640;
input int iresV ("# samples on vertical axis of image") = 480;
input real rayStep ("inter-sample distance along view direction") = 0.1;
input real refStep ("reference (unit) step length, for normalizing opacities") = 0.1;
input real transp0 ("transparency close enough to 0 to terminate ray") = 0.005;
input real isoval ("isovalue at which to render soft isosurface") = 0;
input real thick ("approximate thickness (in world-space) of soft isosurface") = 0.1;
input real maxAlpha ("maximum opacity on rendered surface") = 1;
input real phongKa ("Blinn-Phong ambient component") = 0.1;
input real phongKd ("Blinn-Phong diffuse component") = 0.7;
input real phongKs ("Blinn-Phong specular component") = 0.3;
input real phongSp ("Blinn-Phong specularity exponent") = 60;
input vec3 litdir ("direction (non-normalized) towards light source, in (U,V,N) view-space") = [-1, -2, -1];
input vec3 mcnear ("material color at near clipping plane (for depth cuing)") = [1,1,1];
input vec3 mcfar ("material color at far clipping plane") = [1,1,1];
/* this fog has no practical value except as a subtle indicator
of the extent and shape of the volume data domain */
input vec4 fog ("fog RGBA inside volume data domain") = [1,1,1,0];
input image(1)[3] cmap ("univariate colormap") = image("cmap.nrrd");
input real cmin ("value mapped to min end of colormap") = 0;
input real cmax ("value mapped to max end of colormap. By default cmin==cmax==0, which disables this colormapping") = 0;
/* Convolve volume data with one of various possible kernels;
see ../mip/mip.diderot for more info */
//field#1(3)[] V = c1tent ⊛ vol;
//field#1(3)[] V = ctmr ⊛ vol;
field#2(3)[] V = bspln3 ⊛ vol;
//field#4(3)[] V = c4hexic ⊛ vol;
/* create a field to render from the original volume data field */
field#1(3)[] F = V - isoval;
/* create a 1-D field around the colormap */
field#0(1)[3] CM = tent ⊛ clamp(cmap);
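/* (clamp is an image border control: lookups past either end of the colormap
   reuse the end colors, so CM stays well-defined even if its argument strays
   outside the colormap's domain) */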
// (boilerplate) computing ray parameters and view-space basis
vec3 camN = normalize(camAt - camEye); // N: away from eye
vec3 camU = normalize(camN × camUp); // U: right
vec3 camV = camN × camU; // V: down (right-handed frame)
real camDist = |camAt - camEye|;
real camVmax = tan(camFOV*π/360)*camDist;
real camUmax = camVmax*iresU/iresV;
real camNearVsp = camNear + camDist; // near clip, view space
real camFarVsp = camFar + camDist; // far clip, view space
// convert light directions from view-space to world-space
vec3 litwsp = transpose([camU,camV,camN])•normalize(litdir);
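/* (why the transpose works: [camU,camV,camN] has the view-space basis vectors
   as its rows, so its transpose has them as columns; multiplying by view-space
   coordinates then sums the basis vectors with those weights, yielding the
   same direction expressed in world-space) */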
/* 2-D opacity function, after Levoy. The 1.4 is a trick to ensure
that there is a segment of positions receiving maximum opacity. */
function real alpha(real v, real g) = maxAlpha*clamp(0, 1, 1.4 - |v|/(g*thick));
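/* One way to read alpha(): |v|/g = |F|/|∇F| is a first-order estimate of the
   distance to the F==0 isosurface, so opacity is maxAlpha within about
   0.4*thick of the surface, falling linearly to zero at about 1.4*thick */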
// how to render ray through (rayU,rayV) on view plane
strand raycast(int ui, int vi /* real rayU, real rayV */) {
   // cell-centered sampling of view plane (intersects look-at)
   real rayU = lerp(-camUmax, camUmax, -0.5, ui, iresU-0.5);
   real rayV = lerp(-camVmax, camVmax, -0.5, vi, iresV-0.5);
   // creation of per-strand ray state based on ../mip/mip.diderot
   real rayN = camNearVsp;
   vec3 UV = rayU*camU + rayV*camV;
   vec3 rayOrig = camEye + (UV if camOrtho else [0,0,0]);
   vec3 rayVec = camN + ([0,0,0] if camOrtho else UV/camDist);
   /* alphaFix is used for opacity correction, so that attenuation
      happens in world-space rather than in ray-sample index-space.
      The actual distance between samples on this ray is |rayVec|*rayStep. */
   real alphaFix = |rayVec|*rayStep/refStep;
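   /* For example (just restating the math the update method uses below):
      if a sample has opacity aa relative to refStep, its transparency over
      one refStep is (1-aa), so over a distance of alphaFix*refStep it is
      (1-aa)^alphaFix, giving the corrected opacity 1 - (1-aa)^alphaFix. */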
   vec3 eyeDir = -normalize(rayVec);
   // output for this ray
   output vec4 rgba = [0,0,0,0];
   // state for this ray is current color ...
   vec3 rgb = [0,0,0];
   // ... and current transparency
   real transp = 1;
   /* example of turning on debugging for one strand (pixel);
      one can then have "if (verb) { print(...); }"
   bool verb = 369==ui && 242==vi; */
   update {
      rayN += rayStep; // increment ray position
      if (rayN > camFarVsp) { // ray passed the far clipping plane
         stabilize;
      }
      vec3 pos = rayOrig + rayN*rayVec; // pos is ray sample position
      if (!inside(pos,F)) { // if not inside the field domain,
         continue;          // then move on to the next iteration
      }
      // compute fog contribution
      real aa = fog[3];
      if (aa > 0) {
         aa = 1 - (1 - aa)^alphaFix;
         rgb += transp*aa*[fog[0],fog[1],fog[2]];
         transp *= 1 - aa;
      }
      // compute data contribution
      real val = F(pos);
      vec3 grad = -∇F(pos);
      aa = alpha(val, |grad|);
      if (aa == 0) {
         continue;
      }
      aa = 1 - (1 - aa)^alphaFix;
      // Note: not standard Phong diffuse (no dark hemisphere)
      real dcomp = lerp(0, 1, -1, normalize(grad)•litwsp, 1)^2;
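      /* (the lerp maps the dot product from [-1,1] to [0,1] before squaring,
         so diffuse shading falls smoothly to zero only for surfaces facing
         directly away from the light) */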
      // if phongKs == 0, this conditional expression speeds things up slightly
      real scomp = max(0,normalize(grad)•normalize(eyeDir+litwsp))^phongSp
                   if phongKs != 0 else 0.0;
      // simple depth-cueing
      vec3 dcol = lerp(mcnear, mcfar, camNearVsp, rayN, camFarVsp);
      // lots of things could be the basis for material color; this
      // is contrived to show off some univariate colormapping
      vec3 mcol = [1,1,1];
      if (cmin != cmax) {
         mcol = CM(lerp(0, 1, cmin, pos[2], cmax));
      }
      // light color is currently [1,1,1]
      rgb += transp*aa*((phongKa + phongKd*dcomp)*modulate(dcol,mcol)
                        + phongKs*scomp*[1,1,1]);
      transp *= 1 - aa;
      if (transp < transp0) { // early ray termination
         transp = 0;
         stabilize;
      }
   }
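   /* rgb was accumulated in pre-multiplied form; dividing by the final
      opacity recovers the non-pre-multiplied color for the RGBA output */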
   stabilize {
      if (transp < 1) { // un-pre-multiply opacities
         real aa = 1-transp;
         rgba = [rgb[0]/aa, rgb[1]/aa, rgb[2]/aa, aa];
      }
   }
}
initially [ raycast(ui, vi)
            | vi in 0..iresV-1,   // slower
              ui in 0..iresU-1 ]; // faster