\chapter{Background}
\label{bgch}
This chapter provides background for the core parts of this research, with a
particular focus on the areas underpinning the novel work presented. We start by
discussing the two main approaches to solving the NTE, Monte Carlo methods and
deterministic methods, as the hybrid methods that are described next incorporate both
types of solutions. Then, a discussion of previous work in the field of hybrid methods
is given, with a specific focus on the CADIS method and its variants as well as
significant historical work that has incorporated angular information into
hybrid methods. Finally, we present a mathematical background for and derivation of
the LDO equations.
\section{Approaches to Solving the Neutron Transport Equation}
\subsection{Monte Carlo Methods}
Monte Carlo methods solve the NTE by ``following'' individual particles from birth
to death. The purpose of this particle tracking is to calculate the
expectation or mean value $\xbar$ of some quantity of interest, often the neutron
scalar flux. The estimate of this quantity takes the form of the average of $N$
samples:
\begin{equation}
\xhat = \frac{1}{N}\sum_{n=1}^{N}x_n,
\end{equation}
\noindent where $x_n$ is the contribution from the $n^{th}$ particle history to the
quantity of interest.
As the calculation proceeds, $x_n$ is tallied from each neutron history in order
to calculate the estimated or sample mean $\xhat$ at the end of the calculation.
Because the independent variables of the NTE are treated continuously, errors in
Monte Carlo calculations take the form of stochastic uncertainties. It is therefore
useful to quantify how good an estimate the sample value $\xhat$ is of the true mean
value $\xbar$.
For some property of a Monte Carlo history $x$ sampled from a continuous probability density
function $f(x)$, the variance of that property is defined to be
\begin{equation}
\sigma^2(x) = \overline{x^2} - \xbar^2,
\end{equation}
\noindent where
\begin{equation}
\overline{x^n} \equiv\int_{-\infty}^{\infty}x^n f(x)dx.
\end{equation}
The standard deviation of the property is calculated as the square root of the
variance:
\begin{equation}
\sigma(x) = \left(\overline{x^2} - \xbar^2\right)^{1/2}
\end{equation}
\noindent and provides a measure of the spread of $x$ about the mean value $\xbar$
\cite{lm}. With this, the variance and standard deviation of $\xhat$ can be
expressed in terms of the variance and standard deviation of $x$ as
\begin{equation}
\sigma^2(\xhat) = \frac{1}{N}\sigma^2(x)
\end{equation}
\noindent and
\begin{equation}
\label{eq:mc_var}
\sigma(\xhat) = \frac{\sigma(x)}{\sqrt{N}},
\end{equation}
\noindent respectively. A low standard deviation indicates that the values of $x$ are
closely clustered near $\xbar$, while a high standard deviation indicates a large
spread in the values of $x$. If $\xhat$, constructed from $N$ values of $x_n$, is
used to estimate $\xbar$, then the spread in the results of $\xhat$ about $\xbar$
is proportional to $\sigma(x)$ and falls off as the square root of the number of
histories in the sample, as seen in Equation \ref{eq:mc_var} \cite{lm}. This is to
say, generally, that a greater number of histories contributing to the property of
interest being calculated results in a lower standard deviation of the estimate of
that property.
For a given Monte Carlo calculation, the sample variance is defined as
\begin{equation}
S^2 = \frac{1}{N-1}\sum_{n=1}^{N}\left(x_n - \xhat\right)^2
\end{equation}
\noindent and is considered to be an unbiased estimator of the variance; the
expectation value of the sample variance is equal to the variance, $\sigma^2(x)$
\cite{lm}. Because it is an unbiased estimator of the variance, the sample variance
allows us to estimate the spread in $\xhat$; this is useful because $\xhat$ is the
value that actually results from the Monte Carlo calculation. In practice, the sample
variance and standard deviation are calculated as
\begin{equation}
S^2 = \frac{N}{N-1}\left(\widehat{x^2} - \xhat^2\right), \text{ where }
\widehat{x^2} \equiv \frac{1}{N}\sum_{n=1}^{N}x_n^2,
\end{equation}
\noindent and
\begin{equation}
S = \left(\frac{N}{N-1}\right)^{1/2}
\left[\frac{1}{N}\sum_{n=1}^{N}x_n^2 - \xhat^2\right]^{1/2},
\end{equation}
\noindent respectively \cite{lm}. For large numbers of histories, $\frac{N}{N-1}$ is
often set equal to one.
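The estimators above can be exercised with a short numerical sketch. The tally
distribution below (an exponential) is a stand-in chosen purely for illustration and
is not from the thesis; the point is that $\sigma(\xhat)$ falls off as $1/\sqrt{N}$:

```python
import random
import math

def sample_tally(n_histories, seed=42):
    """Score a stand-in per-history tally x_n and return the sample mean,
    unbiased sample variance S^2, and sigma(xhat) = S/sqrt(N)."""
    rng = random.Random(seed)
    scores = [rng.expovariate(1.0) for _ in range(n_histories)]
    n = len(scores)
    xhat = sum(scores) / n
    # S^2 = N/(N-1) * (mean of x^2 - xhat^2), as in the text
    x2hat = sum(x * x for x in scores) / n
    s2 = n / (n - 1) * (x2hat - xhat * xhat)
    return xhat, s2, math.sqrt(s2 / n)

xhat_small, _, err_small = sample_tally(1_000)
xhat_large, _, err_large = sample_tally(100_000)
# 100x more histories -> roughly 10x smaller spread in the estimated mean
print(err_small, err_large)
```

Running with more histories shrinks the estimated standard deviation of the mean by
the expected square-root factor, while the sample mean converges toward the true mean.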
The simplest Monte Carlo model for particle transport problems is the ``analog''
model that uses the real probability that various events occur \cite{mcnp}. In the
analog model, particles are followed from event to event, and the next event is
always sampled from a number of possible events according to the real event
probabilities. This is called the analog Monte Carlo model because it is directly
analogous to the naturally occurring transport; it works well when a significant
fraction of the particles contribute to the tally estimate and can be compared to
detecting a significant fraction of the particles in the physical situation.
To quantify the efficiency of calculating a given quantity of interest, a metric known
as the ``figure of merit'' is often used.
\subsubsection{The Figure of Merit}
The figure of merit (FOM) is defined as
\begin{equation}
\text{FOM} = \frac{1}{R^2T},
\label{eq:fom}
\end{equation}
\noindent where $R$ is the estimated relative error of the mean, defined as
$S_{\xhat}/\xhat$ with $S_{\xhat} = S/\sqrt{N}$, and $T$ is
the computer time taken to complete the calculation \cite{mcnp}. This value should be
approximately constant for any one Monte Carlo calculation, as $R^2$ is
proportional to $1/N$ and $T$ should be directly proportional to $N$.
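This constancy can be checked with a toy computation; the error and timing numbers
below are invented for illustration only:

```python
def figure_of_merit(relative_error, time_seconds):
    """FOM = 1 / (R^2 * T), approximately constant for one calculation."""
    return 1.0 / (relative_error ** 2 * time_seconds)

# Doubling the histories halves R^2 and doubles T, leaving the FOM unchanged.
fom_a = figure_of_merit(0.04, 100.0)             # N histories
fom_b = figure_of_merit(0.04 / 2 ** 0.5, 200.0)  # 2N histories
```

Both calls yield the same figure of merit, reflecting the cancellation of the
$1/N$ and $N$ dependences of $R^2$ and $T$.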
As stated earlier, estimates for quantities of interest with the lowest statistical error
are usually obtained for quantities to which a substantial fraction of the histories
contribute. That is to say, in order to get estimates for quantities of interest that
are statistically meaningful (have sufficiently low statistical error), a sizable number of
the particle histories tracked should contribute to the estimate. This can be
difficult to achieve in a reasonable amount of computational time for certain analog
Monte Carlo calculations. If such analog calculations were allowed to continue until
convergence, they would have a very small FOM because of the sheer amount of
calculation time required. A pertinent example
of this type of problem is a neutron shielding scenario, in which the neutron scalar
flux varies by orders of magnitude through the shield and over the problem geometry.
In these cases, ``non-analog'' techniques are introduced.
Non-analog Monte Carlo attempts to follow ``interesting'' particles more often
than uninteresting ones, where an interesting particle is one that contributes much more
to the quantity that needs to be estimated. Non-analog techniques are
meant to increase the odds that a given particle contributes to the quantity of
interest. To ensure that the average score is the same in the non-analog model as in
the analog model, the score is modified to remove the effect of biasing the natural
odds.
A non-analog Monte Carlo technique will have the same expected tallies as an
analog technique if the expected weight executing any given random walk is preserved.
These variance reduction techniques can often decrease the relative error by
sampling naturally rare events with an unnaturally high frequency and weighting the
tallies appropriately. In the following subsection, several variance reduction methods
are described and discussed.
\subsubsection{Variance Reduction}
Commonly used classes of variance reduction techniques are truncation methods,
population control methods, and modified sampling methods \cite{mcnp}.
Some variance reduction methods are generally applicable, while others are more
specialized and carry high risk in use. Some variance reduction techniques cause an
increase in computational time, but variance typically decreases faster than
the increase in time, so these techniques still result in a net increase of the FOM
\cite{olsher}.
\paragraph{Truncation Methods}\mbox{} \\
Of the classes listed above, truncation methods are the simplest; they aim to
accelerate
calculations by truncating parts of phase space that do not contribute significantly
to the problem solution. One example of this is geometry truncation, in which
unimportant parts of the problem geometry are not modeled. Truncation methods may
also be applied to other independent variables such as energy; when using
energy cutoff, particles whose energy is out of the range of interest are terminated
so that computation time is not spent following them.
\paragraph{Population Control Methods}\mbox{} \\
Population control methods use particle splitting and Russian roulette to control the
number of samples taken in various regions of phase space. In important regions, many
samples of low weight particles are tracked, and in unimportant regions, few samples
of high weight are tracked. Weight adjustments are made to the particles to ensure
that the problem solution remains unbiased. Specific population control methods
include geometry splitting and Russian roulette, energy splitting and roulette,
weight cutoff, and weight windows \cite{mcnp}.
Using geometry splitting with Russian roulette, particles transported from a region
of higher importance to a region of lower importance undergo Russian roulette: a
certain fraction of the particles are killed, and the survivors have their weights
increased so that each counts for more. In doing this, unimportant particles are
followed less often, yet the problem solution remains undistorted. If a particle is
transported to a region of higher importance, it may be split into two or more
particles, each with less weight and therefore counting less. In this case, important
particles are followed more often, yet the solution is again undistorted because, on
average, the total weight is conserved.
In general, when a particle of weight $w_0$ is split into $k$ particles, the resulting
particles are each given a weight of $\frac{w_0}{k}$, conserving the expected
weight. When a particle is subject to Russian rouletting, it is turned into a
particle of weight $w_1 > w_0$ with probability $\frac{w_0}{w_1}$ and is killed with
probability $1 - \frac{w_0}{w_1}$, again conserving the expected weight.
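The weight bookkeeping for splitting and rouletting can be sketched as follows; the
function names are illustrative and not taken from any production transport code:

```python
import random

def split(weight, k):
    """Split a particle of weight w0 into k particles of weight w0/k,
    conserving the total (and hence the expected) weight."""
    return [weight / k] * k

def russian_roulette(weight, survival_weight, rng):
    """Survive with weight w1 > w0 with probability w0/w1, else die.
    Expected post-roulette weight is (w0/w1)*w1 = w0."""
    if rng.random() < weight / survival_weight:
        return survival_weight
    return 0.0

rng = random.Random(1)
# Splitting conserves weight exactly; rouletting conserves it on average.
assert sum(split(1.0, 4)) == 1.0
mean_w = sum(russian_roulette(0.2, 1.0, rng) for _ in range(200_000)) / 200_000
```

Averaged over many roulette games, the surviving weight approaches the original
weight of 0.2, which is exactly the expected-weight conservation described above.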
Geometry splitting with Russian roulette can be used to great advantage in deep
penetration shielding problems. Splitting helps maintain the particle population,
which diminishes rapidly in analog simulations. Conversely, geometry splitting with
Russian roulette does not work well in problems that have severe angular dependence.
In the most extremely anisotropic case, a particle may never enter a geometric region
in which it may be split \cite{mcnp}.
Energy splitting and Russian roulette are generally used in combination but may be
employed separately. When using energy splitting, once a neutron drops below a given
energy threshold, it may be split into multiple neutrons, each with an appropriately
adjusted weight. This is useful when particles are more important in some energy
ranges than in others. In the case of using energy rouletting, if a particle drops
below a certain energy, a roulette game is played and the particle is either killed
or survives with a weight increased by a factor of the reciprocal of the survival
probability (to conserve overall particle population weight). These two energy-based
variance reduction techniques are independent of spatial location, so a space-energy
weight window (discussed below) is usually a better choice for problems with strong
space-energy dependence.
When weight cutoff is employed, Russian roulette is played if a particle's weight
drops below a specified cutoff value. The result of the roulette is that the particle
is either killed or survives with its weight increased to a given level. Weight
cutoff is most efficient when used in combination with geometry splitting (discussed
above) and implicit capture (discussed below). It is important to note that, unlike
in the case of the energy cutoff, the weight cutoff does not bias the solution
because the particles that survive do so with increased weight.
The last population control method discussed here is the weight window, which is a
phase space splitting and Russian roulette technique. The phase space may be
space-energy or solely space.
Each phase space cell is bounded by upper and lower weight bounds. If a particle is
above the upper weight bound, it is split such that the resultant particles are all
within the bounds of the weight window. If a particle is below the lower weight bound,
Russian roulette is played and the particle is either terminated or permitted to
survive with an increased weight within the bounds of the weight window. If a
particle's weight is within the window, no action is taken. All of these scenarios are
depicted in Figure \ref{fig:ww}, a cartoon of the weight window concept.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.85\textwidth]{img/ww-mcnp.eps}
\caption{Weight window phase space splitting and Russian roulette \cite{mcnp}.}
\label{fig:ww}
\end{figure}
The weight window may be used alone to good effect, but it is particularly powerful
when used in conjunction with other variance reduction techniques that introduce large
variations in particle weight. Well-specified weight windows keep the Monte Carlo
solution from severe perturbations resulting from high-weight particles and
simultaneously keep computational resources from wasting time on low-weight particles
by rouletting them.
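The window logic described above can be sketched in a few lines; the bounds, the
choice of survival weight, and the splitting rule are illustrative simplifications,
not the scheme of any particular code:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Return the list of particle weights that continue after the window
    check: split if above w_high, roulette if below w_low, else unchanged."""
    if weight > w_high:
        # Split so that each fragment lands inside the window.
        k = int(weight / w_high) + 1
        return [weight / k] * k
    if weight < w_low:
        # Roulette toward a survival weight inside the window.
        w_survive = (w_low + w_high) / 2.0
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []  # particle terminated
    return [weight]

rng = random.Random(7)
assert apply_weight_window(0.5, 0.25, 1.0, rng) == [0.5]  # in window: untouched
heavy = apply_weight_window(3.0, 0.25, 1.0, rng)          # above window: split
```

A heavy particle is split into fragments that all fall within the window, with the
total weight preserved; a light particle is rouletted with its expected weight
preserved.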
\paragraph{Modified Sampling Methods}\mbox{} \\
Modified sampling methods alter the statistical sampling of a problem to increase the
number of tallies per particle. For a given Monte Carlo event, it is possible to
sample from an arbitrary distribution rather than the physical probability as long as
the particle weights are adjusted to compensate. With modified sampling methods,
sampling is done from distributions that send particles in desired directions or into
other desired regions of phase space such as time or energy. Modified sampling methods
may also change the location or type of collisions. Categories of modified sampling
methods include implicit capture, forced collisions, and source biasing.
Using implicit capture (also called implicit absorption or survival biasing),
particles are never killed by absorption. Instead, a particle's weight is reduced by
the absorption probability at each collision, allowing important particles to
survive by not being lost to absorption. Implicit capture can be thought of as a
splitting process in which a particle of weight $w_0$ is split into two particles: one
of weight $w_0(1-\frac{\Sigma_a}{\Sigma_t})$ that survives and is
subsequently followed, and one of weight $w_0\frac{\Sigma_a}{\Sigma_t}$ that is
instantaneously killed \cite{mcnp}.
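The per-collision weight update is simple enough to state directly; the cross-section
values below are made-up numbers for illustration:

```python
def implicit_capture(weight, sigma_a, sigma_t):
    """Survival biasing: the particle always survives the collision,
    carrying the non-absorbed fraction (1 - Sigma_a/Sigma_t) of its weight."""
    return weight * (1.0 - sigma_a / sigma_t)

w = 1.0
for _ in range(3):  # three successive collisions
    w = implicit_capture(w, sigma_a=0.2, sigma_t=1.0)
```

After three collisions the weight is $(1 - \Sigma_a/\Sigma_t)^3 = 0.8^3$ of its
starting value, rather than the particle having a $1 - 0.8^3$ chance of being lost
to absorption.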
The forced collision method increases sampling of collisions in specified spatial
cells. Particles undergoing forced collisions are split into collided and uncollided
parts. The collided part of the particle is forced to react within the current cell,
while the uncollided part of the particle exits the cell without collision. When the
track of the uncollided particle portion is continued, it is followed with weight
$w_0e^{-\Sigma_t d}$, where $w_0$ is the original particle weight and $d$ is the
distance traveled between the splitting site and the cell boundary. The collided part
of the particle thus reacts with weight $w_0\left(1 - e^{-\Sigma_t d}\right)$. These
resultant weights are chosen to reflect the actual physics of the problem;
$e^{-\Sigma_t d}$ is the probability of exiting the cell without collision, and
$1 - e^{-\Sigma_t d}$ is the probability of colliding in the cell. One of these two
things must happen to the original particle of weight $w_0$, so we observe that the starting
weight is preserved.
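The split of the starting weight follows directly from the exponential attenuation
law; this sketch uses illustrative values for the cross section and distance:

```python
import math

def forced_collision_split(w0, sigma_t, d):
    """Split a particle's weight into uncollided and collided parts across a
    cell of optical thickness sigma_t*d; the two parts sum to w0."""
    p_escape = math.exp(-sigma_t * d)
    return w0 * p_escape, w0 * (1.0 - p_escape)

w_uncollided, w_collided = forced_collision_split(1.0, sigma_t=0.5, d=2.0)
```

The two weights are exactly the exit and collision probabilities scaled by the
original weight, so the starting weight is preserved as stated in the text.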
Finally, particle sources may be biased with respect to one or more variables. This
allows for greater numbers of particles to be produced in more important ranges of
each biased variable, with the particles' weights reduced accordingly. In the relevant
example of the neutron shielding problem, one may start more particles
at high energies and in strategic directions in order to get more particles to
contribute to the desired solution. The corresponding weights of the particles are
altered to correct the statistical distribution.
\subsection{Deterministic Methods}
In the case of deterministic methods, each of the six independent variables of the
steady-state NTE is discretized, relevant boundary conditions are imposed, and the
resulting system of linear algebraic equations is solved iteratively until an
acceptable solution has been reached. We limit the discussion here to the
discretization of the
integro-differential form of the NTE and the finite-volume discrete ordinates method.
These discretizations introduce some errors into the calculations, with the
discretization of some variables being more problematic than others. For example, it
is functionally straightforward to discretize the energy and spatial variables,
while discretizing angular space using the discrete ordinates method is more mathematically
intricate and often brings deleterious errors (``ray effects'') into problem solutions.
Deterministic methods may converge more quickly than Monte Carlo methods, especially
in the case of shielding problems, though the solutions are often plagued by the
aforementioned inaccuracies.
\subsubsection{Discretization of the Neutron Transport Equation}
\label{sec:disc}
\paragraph{Energy Discretization - The Multigroup Approximation}\mbox{} \\
Discretization of the energy variable is known as the ``multigroup'' approximation;
it is relatively straightforward from a mathematical standpoint. Energy is broken up into $G$
groups, where the $g^{th}$ group has an upper bound of energy
$E_g$ and a lower bound of energy $E_{g+1}$ as shown in Figure \ref{egrid}. The highest energy
group has $g = 0$ and the lowest energy group has $g = G-1$. This convention is used because
neutrons are generally born at higher energies (starting in group 0 or 1) and scatter down to
lower energies before undergoing an absorption reaction.
\begin{figure}[!thb]
\centering
\begin{tikzpicture}
\draw[thick] (-5,0) -- (-3,0);
\draw[thick,dotted] (-3,0) -- (-1.5,0);
\draw[thick] (-1.5,0) -- (1.5,0);
\draw[thick,dotted] (1.5,0) -- (3,0);
\draw[thick,->] (3,0) -- (5.5,0);
\draw[thick] (-4.5,-0.25)--(-4.5,0.25);
\node [below] at (-4.5,-0.25) {$E_G$};
\draw[thick] (-3.5,-0.25)--(-3.5,0.25);
\node [below] at (-3.5,-0.25) {$E_{G-1}$};
\draw[thick] (-1,-0.25)--(-1,0.25);
\node [below] at (-1,-0.25) {$E_{g+1}$};
\draw[thick] (0,-0.25)--(0,0.25);
\node [below] at (0,-0.25) {$E_{g}$};
\node[above] at (-0.5, 0.25) {group $g$};
\draw[decorate,decoration={brace}](-1,0.275) -- (0,0.275);
\draw[thick] (1,-0.25)--(1,0.25);
\node [below] at (1,-0.25) {$E_{g-1}$};
\draw[thick] (3.5,-0.25)--(3.5,0.25);
\node [below] at (3.5,-0.25) {$E_1$};
\draw[thick] (4.5,-0.25)--(4.5,0.25);
\node [below] at (4.5,-0.25) {$E_0$};
\node [below] at (5.5,-0.1) {$E$};
\end{tikzpicture}
\caption{Discretized energy grid.}
\label{egrid}
\end{figure}
\FloatBarrier
Discretizing the NTE with respect to energy on this grid gives the $G$ multigroup
equations
\noindent\begin{minipage}{0.7\textwidth}
\begin{multline*}
\label{eq:mg_nte}
\bo \cdot \nabla \psi^g(\vecr,\bo) + \Sigma_t^g(\vecr) \psi^g(\vecr,\bo) = \\
\sum_{g'=0}^{G-1}\int_{4\pi} \Sigma_s^{g'\rightarrow g}(\vecr,\bo'\cdot\bo)
\psi^{g'}(\vecr,\bo')d\bo' + Q^g(\vecr,\bo),
\end{multline*}
\end{minipage}
\hspace{-0.5cm}
\begin{minipage}{0.3\textwidth}
\begin{align}
\begin{split}
g &= 0,1,\ldots,G-1.
\end{split}
\end{align}
\end{minipage}
\vspace{0.1cm}
\noindent Here it is assumed that, within each energy group, the angular flux may be
approximated as the product of some known function of energy $f(E)$ and the group flux
$\psi^g(\vecr, \bo)$ as
\begin{equation}
\psi(\vecr,E,\bo) \approx f(E)\psi^g(\vecr,\bo), \quad E_{g+1} < E \leq E_{g}\:,
\end{equation}
\noindent where $f(E)$ is normalized such that $\int_{E_{g+1}}^{E_{g}}f(E)dE = 1$. With
this, the multigroup cross sections and the group source are similarly defined
\cite{lm} as
\begin{align}
\Sigma_t^g(\vecr) &= \int_{E_{g+1}}^{E_{g}}\Sigma_t(\vecr,E)f(E)dE, \\
\Sigma_s^{g'\rightarrow g}(\vecr,\bo'\cdot\bo) &= \int_{E_{g+1}}^{E_{g}}\int_{E_{g'+1}}^{E_{g'}}
\Sigma_s(\vecr, E'\rightarrow E, \bo'\cdot\bo)f(E')dE'dE, \\
Q^g(\vecr,\bo) &= \int_{E_{g+1}}^{E_{g}}Q(\vecr,E,\bo)dE.
\end{align}
\paragraph{Spatial Discretization}\mbox{} \\
In the interest of completeness, we will briefly discuss the discretization of space.
The LDO equations are inherently three-dimensional \cite{ahrens}, so we will restrict
the discussion to three-dimensional space with point positions specified by Cartesian
coordinates. A general mesh cell is shown in Figure \ref{fig:spatial_mesh}.
\begin{figure}[!htb]
\centering
\begin{tikzpicture}
\filldraw (xyz cs:x=-3,y=-3,z=3) circle (2pt) node {} --
(xyz cs:x=3,y=-3,z=3) circle (2pt) node {};
\node[fill,circle,inner sep=0pt, minimum size = 2pt,
label={[shift={(1.75,-0.75)}]
\footnotesize$(x_{i-1/2}, y_{j-1/2}, z_{k+1/2})$}]
at (xyz cs:x=-3,y=-3,z=3) {};
\node[fill,circle,inner sep=0pt, minimum size = 2pt,
label={[shift={(1.75,-0.75)}]
\footnotesize$(x_{i+1/2}, y_{j-1/2}, z_{k+1/2})$}]
at (xyz cs:x=3,y=-3,z=3) {};
\filldraw (xyz cs:x=-3,y=3,z=3) circle (2pt) node {} --
(xyz cs:x=3,y=3,z=3) circle (2pt) node {};
\node[fill,circle,inner sep=0pt, minimum size = 2pt,
label={[shift={(1.75,-0.75)}]
\footnotesize$(x_{i-1/2}, y_{j+1/2}, z_{k+1/2})$}]
at (xyz cs:x=-3,y=3,z=3) {};
\node[fill,circle,inner sep=0pt, minimum size = 2pt,
label={[shift={(1.75,-0.75)}]
\footnotesize$(x_{i+1/2}, y_{j+1/2}, z_{k+1/2})$}]
at (xyz cs:x=3,y=3,z=3) {};
\filldraw (xyz cs:x=-3,y=3,z=3) --
(xyz cs:x=-3,y=3,z=-3) circle (2pt) node[] {};
\node[fill,circle,inner sep=0pt, minimum size = 2pt,
label={[shift={(1.75,-0.75)}]
\footnotesize$(x_{i-1/2}, y_{j+1/2}, z_{k-1/2})$}]
at (xyz cs:x=-3,y=3,z=-3) {};
\filldraw (xyz cs:x=3,y=3,z=3) --
(xyz cs:x=3,y=3,z=-3) circle (2pt) node {};
\node[fill,circle,inner sep=0pt, minimum size = 2pt,
label={[shift={(1.75,-0.75)}]
\footnotesize$(x_{i+1/2}, y_{j+1/2}, z_{k-1/2})$}]
at (xyz cs:x=3,y=3,z=-3) {};
\filldraw (xyz cs:x=3,y=-3,z=3) --
(xyz cs:x=3,y=-3,z=-3) circle (2pt) node {};
\node[fill,circle,inner sep=0pt, minimum size = 2pt,
label={[shift={(1.75,-0.75)}]
\footnotesize$(x_{i+1/2}, y_{j-1/2}, z_{k-1/2})$}]
at (xyz cs:x=3,y=-3,z=-3) {};
\draw (xyz cs:x=-3,y=-3,z=3) -- (xyz cs:x=-3,y=3,z=3);
\draw (xyz cs:x=3,y=-3,z=3) -- (xyz cs:x=3,y=3,z=3);
\draw (xyz cs:x=-3,y=3,z=-3) -- (xyz cs:x=3,y=3,z=-3);
\draw (xyz cs:x=3,y=-3,z=-3) -- (xyz cs:x=3,y=3,z=-3);
\node[fill,circle,inner sep=0pt, minimum size = 2pt,
label={[shift={(1.75,-0.75)}]
\footnotesize$(x_{i-1/2}, y_{j-1/2}, z_{k-1/2})$}]
at (xyz cs:x=-3,y=-3,z=-3) {};
\filldraw[dashed] (xyz cs:x=-3,y=-3,z=-3) circle (2pt) node {} --
(xyz cs:x=-3,y=3,z=-3);
\draw[dashed] (xyz cs:x=-3,y=-3,z=-3) -- (xyz cs:x=3,y=-3,z=-3);
\draw[dashed] (xyz cs:x=-3,y=-3,z=-3) -- (xyz cs:x=-3,y=-3,z=3);
\filldraw (xyz cs: x=0,y=0,z=0) circle (2pt) node[below]
{\footnotesize$(x_i, y_j, z_k)$};
\end{tikzpicture}
\caption{General three-dimensional mesh cell \cite{exmm}.}
\label{fig:spatial_mesh}
\end{figure}
The mesh cell is centered at the $i^{th}$ position along the $x$-axis, the $j^{th}$
position along the $y$-axis, and the $k^{th}$ position along the $z$-axis. Indexing
is such that there are $I$ mesh cells with $I+1$ grid points in the $x$-direction, $J$ mesh
cells with $J+1$ grid points in the $y$-direction, and $K$ mesh cells with $K+1$ grid points in
the $z$-direction. It is assumed that all material
properties are constant within a given cell. In order to eventually solve for the
scalar flux in a given system, we are interested in solving for the angular flux at
the center of each mesh cell, resulting in the $G\times I \times J \times K$
equations shown in Equation \ref{eq:mg_xyz_nte}.
\begin{minipage}{0.65\textwidth}
\begin{multline*}
\label{eq:mg_xyz_nte}
\bo\cdot\nabla\psi^g_{i,j,k}(\bo)+\Sigma_{t,i,j,k}^g\psi^g_{i,j,k}(\bo) = \\
\sum_{g'=0}^{G-1}\int_{4\pi} \Sigma_{s,i,j,k}^{g'\rightarrow g}(\bo'\cdot\bo)
\psi^{g'}_{i,j,k}(\bo')d\bo' + Q^g_{i,j,k}(\bo),
\end{multline*}
\end{minipage}
\begin{minipage}{0.31\textwidth}
\begin{align}
\begin{split}
g &= 0,1,\ldots,G-1,\\
i &= 1,2,\ldots,I,\\
j &= 1,2,\ldots,J,\\
k &= 1,2,\ldots,K.
\end{split}
\end{align}
\end{minipage} \strut
\noindent To solve for these cell-centered flux
quantities in practice, auxiliary equations are introduced. As these are specific to the
spatial discretization employed in a given solution and do not differ between the classical
discrete ordinates equations and the LDO formulation, we refer the reader to Reference
\cite{denovo} for more detail on spatial differencing and solution methods.
\paragraph{Angular Discretization - Discrete Ordinates}\mbox{} \\
\label{sec:do}
The last part of phase space to discretize in the time-independent NTE is angle.
The discrete ordinates method is the most common angular discretization method
incorporated into general-purpose neutron transport codes \cite{lm}. It is a
collocation method that requires the solution of the NTE to be exact at a
discrete set of angles $\bo_n$:
\noindent\begin{minipage}{0.69\textwidth}
\begin{align*}
\bo_n&\cdot\nabla\psi^{g,n}_{i,j,k}+\Sigma_{t,i,j,k}^g\psi^{g,n}_{i,j,k} = \\
&\sum_{g'=0}^{G-1}\sum_{\ell=0}^P \Sigma_{s,\ell,i,j,k}^{g'\rightarrow g}
\bigg[\Ye{\ell 0}{n}\phi_{\ell 0}^{g'} + \sum_{m=1}^{\ell}
\bigg(\Ye{\ell m}{n}\phi_{\ell m}^{g'} \\
&\qquad\qquad\qquad\qquad\qquad\qquad
+ \Yo{\ell m}{n}\vartheta_{\ell m}^{g'}\bigg)\bigg]
+ Q^{g,n}_{i,j,k},
\end{align*}
\end{minipage}
\hspace{-0.55cm}
\begin{minipage}{0.31\textwidth}
\begin{align}
\begin{split}
g &= 0,1,\ldots,G-1,\\
i &= 1,2,\ldots,I,\\
j &= 1,2,\ldots,J,\\
k &= 1,2,\ldots,K,\\
n &= 1,2,\ldots,N.
\label{eq:do}
\end{split}
\end{align}
\end{minipage}
\vspace{0.1cm}
\noindent Here, $\psi^n \equiv \psi(\bo_n)$, and angular integrals are evaluated with
a quadrature rule whose weights $w_n$ sum to $4\pi$. Weights and
ordinates (``quadrature sets'') are chosen in such a way as to provide good
approximations to the angular integrals used to evaluate the scalar flux \cite{lm,exmm}. The
upper limit of summation for the scattering term spherical harmonic expansion, denoted
as $P$ in Equation \ref{eq:do}, is known as the ``\pn order''. The scattering cross section
coefficient values $\Sigma_{s,\ell,i,j,k}^{g'\rightarrow g}$ come from data libraries based on
experimental measurements.
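The weight-normalization property of quadrature sets can be demonstrated with a
simple product quadrature (Gauss-Legendre in the polar cosine, uniform in azimuth).
This construction is simpler than the level-symmetric sets typically used in
production codes and is shown only to illustrate the normalization:

```python
import math
import numpy as np

def product_quadrature(n_polar, n_azimuthal):
    """Build a product quadrature on the unit sphere: Gauss-Legendre nodes in
    the polar cosine mu, equally spaced azimuthal angles phi.
    The weights sum to 4*pi, the full solid angle."""
    mu, w_mu = np.polynomial.legendre.leggauss(n_polar)
    phi = (np.arange(n_azimuthal) + 0.5) * 2.0 * math.pi / n_azimuthal
    w_phi = 2.0 * math.pi / n_azimuthal
    angles, weights = [], []
    for m, wm in zip(mu, w_mu):
        for p in phi:
            angles.append((m, p))
            weights.append(wm * w_phi)
    return angles, np.array(weights)

angles, weights = product_quadrature(4, 8)
total = weights.sum()  # the weights sum to the full solid angle, 4*pi
# For an isotropic angular flux psi = 1/(4*pi), the quadrature estimate of
# the scalar flux integral is 1, as it should be.
phi0 = sum(w / (4.0 * math.pi) for w in weights)
```

The Gauss-Legendre weights sum to 2 and the azimuthal weights to $2\pi$, so the
product weights sum to $4\pi$ and the quadrature integrates an isotropic flux exactly.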
The flux moments appearing in the scattering source expansion of Equation \ref{eq:do}
are computed from the discrete angular fluxes using the quadrature rule:
\begin{equation}
\phi_{\ell m,i,j,k}^{g}=\sum_{n=1}^N \Ye{\ell m}{n}w_n\psi^{g,n}_{i,j,k}\ \text{ and }\
\vartheta_{\ell m,i,j,k}^{g} = \sum_{n=1}^N \Yo{\ell m}{n}w_n\psi^{g,n}_{i,j,k},
\label{sph_harm_exp}
\end{equation}
\noindent where $\phi$ and $\vartheta$ are
referred to as the ``flux moments''.
Here, $\Ye{\ell m}{n}$ and $\Yo{\ell m}{n}$ are the ``even'' and ``odd''
real components of the spherical harmonic functions, defined as \cite{exmm}
\begin{equation}
\Ye{\ell m}{n} = (-1)^m\sqrt{(2-\delta_{m0})\frac{2\ell+1}{4\pi}
\frac{(\ell-m)!}{(\ell+m)!}}
P_{\ell m}(\cos\theta)\cos(m\varphi),
\label{eq:sph_e}
\end{equation}
\begin{equation}
\Yo{\ell m}{n} = (-1)^m\sqrt{(2-\delta_{m0})\frac{2\ell+1}{4\pi}
\frac{(\ell-m)!}{(\ell+m)!}}
P_{\ell m}(\cos\theta)\sin(m\varphi).
\label{eq:sph_o}
\end{equation}
\noindent In Equations \ref{eq:sph_e} and \ref{eq:sph_o}, $P_{\ell m}(\cos\theta)$ is
the associated Legendre polynomial and $(\theta,\varphi)$ are the components of $\bo$
as shown in Figure \ref{fig:ang} and Equations \ref{eq:ang_disc} -- \ref{eq:ang_sum}. It is
assumed that the double differential scattering cross section depends only on the dot product of
the incoming and outgoing angles of the particle undergoing scattering.
To summarize Equations
\ref{eq:do} -- \ref{eq:sph_o}, we note that the double differential scattering cross section is
expanded into the real components of the spherical harmonic functions, which can be represented
with Legendre polynomials; these are then multiplied with the angular flux moments expanded into
spherical harmonic functions.
\begin{figure}[!hbt]
\begin{minipage}{0.5\textwidth}
\begin{tikzpicture}
\coordinate (origin) at (0,0,0);
\coordinate (omega) at (2,4,7);
\coordinate (xaxis) at (0,0,9);
\coordinate (yaxis) at (3,0,0);
\coordinate (diag) at (2,5,0);
% axes
\draw[thick,->] (xyz cs:x=0) -- (xyz cs:x=3) node[right] {$x$};
\draw[thick,->] (xyz cs:y=0) -- (xyz cs:y=5) node[above] {$y$};
\draw[thick,->] (xyz cs:z=0) -- (xyz cs:z=8) node[below] {$z$};
% dashed box lines
\draw[dashed] (xyz cs:z=7,y=4) -- (xyz cs:z=7,y=0.25) node[left] {$\xi$};
\draw[dashed] (xyz cs:z=7,y=0) -- (xyz cs:z=7,x=2);
\draw[dashed] (xyz cs:z=7,x=2) -- (xyz cs:z=7,y=4,x=2);
\draw[dashed] (xyz cs:z=7,y=4,x=0) -- (xyz cs:z=7,y=4,x=2);
\draw[dashed] (xyz cs:x=2,z=0) -- (xyz cs:x=2,z=7);
\draw[dashed] (xyz cs:x=2,z=0,y=4) -- (xyz cs:x=2,z=0);
\draw[] (xyz cs:x=2.3,y=-0.1,z=0) -- (xyz cs:x=2.3,y=-0.1,z=0) node[above] {$\mu$};
\draw[dashed] (xyz cs:x=2,z=0,y=4) -- (xyz cs:x=0,z=0,y=4) node[left] {$\eta$};
\draw[dashed] (xyz cs:x=0,z=0,y=4) -- (xyz cs:z=7,y=4);
\draw[dashed] (xyz cs:x=2,z=0,y=4) -- (xyz cs:z=7,y=4,x=2);
\draw[thick,->] (xyz cs:x=0,y=0,z=0) -- (xyz cs:x=2,z=7,y=4);
\draw[] (xyz cs:x=2,z=7,y=4) -- (xyz cs:x=2,z=7,y=4) node[above] {$\bo$};
\draw[dashed] (xyz cs:x=2,z=0,y=4) -- (xyz cs:x=0,z=0);
\draw[dashed] (xyz cs:z=7,y=0) -- (xyz cs:x=2,z=7,y=4);
\pic [draw,<-,angle radius=0.3cm,angle eccentricity=1.4,"$\theta$"]
{angle = omega--origin--xaxis};
\pic [draw,->,angle radius=0.5cm,angle eccentricity=1.4,"$\varphi$"]
{angle = yaxis--origin--diag};
\end{tikzpicture}
\caption{Angular coordinate system \cite{exmm}.}
\label{fig:ang}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{subequations}
\begin{align}
\label{eq:ang_disc}
&\xi = \cos\theta \\
&\mu = \sqrt{1-\xi^2}\cos\varphi \\
&\eta = \sqrt{1-\xi^2}\sin\varphi
\end{align}
\end{subequations}
\begin{equation}
\label{eq:ang_sum}
\mu^2 + \eta^2 + \xi^2 = 1
\end{equation}
\end{minipage}
\end{figure}
\FloatBarrier Commonly used quadrature sets include level-symmetric,
Gauss-Legendre product, quadruple range (QR) product, and linear-discontinuous
finite element (LDFE). These quadrature set types have different
properties, with each suited to certain classes of problems. For
example, relatively coarse (sixteen angles per octant) QR product quadratures
are generally sufficient for generating variance reduction parameters for
neutron transport problems, but more finely resolved quadrature sets are
recommended for photon transport problems \cite{advantg}. Level-symmetric
quadrature sets are widely applied for general applications \cite{lm} but
tend to exhibit far more ray effects than QR product quadratures
\cite{advantg}. Ray effects are discussed in greater detail in the
following subsection.
\subsubsection{Ray Effects}
\label{sec:ray}
Ray effects are unphysical computational anomalies in the scalar flux solution that
arise from the discrete ordinates formulation. Because the NTE is only evaluated
at a finite number of discrete angles, the number of directions in
which particles may stream is restricted. As a consequence of this, contributions to
the scalar flux from uncollided particles are limited to those from the discrete
angles along which particle sources are ``visible'' \cite{lathrop}. A demonstrative
example of ray effects is shown in Figure \ref{fig:ray}. The plot shows results from
the PARTISN \cite{partisn} code using a triangular $P_n - T_n$ quadrature with
48 points and a
scattering ratio of $c = 0.25$ \cite{ahrens}. Although the point source
emits neutrons isotropically, the scalar flux calculated at a given radius out from
the source sees contributions only from the discrete angles along which particles may
stream. That is, the true flux at a given distance from the point source is equal
in all directions, so the isosurface should appear spherical; instead, the discrete
angles restrict streaming pathways to the rays shown in the image.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{img/ray-effects.png}
\caption{Isosurface plot of scalar flux from a point source \cite{ahrens}.}
\label{fig:ray}
\end{figure}
The severity of ray effects in a given simulation depends on the properties of
the sources. The largest consequences tend to occur in scenarios with localized
sources and relatively minimal scattering. When using the discrete ordinates
approximation in a purely absorbing medium, regardless of the accuracy of the
angular flux calculations and the number of discrete angles used, it is always
possible to get far enough away from a localized source such that a poor value
of the scalar flux is obtained at that point \cite{lathrop}. In a scattering
medium with localized sources, angular flux values are incorrect because they
depend on integrals that are poorly approximated by the quadrature formulation.
Scattered neutrons, by contrast, form a more spatially distributed source and
consequently tend to mitigate ray effects. As we will see later
in the chapter, one key point of interest in using the LDO equations as part of
a hybrid method calculation is that the LDO equations mitigate ray effects when
higher-order quadrature sets are used \cite{ahrens}.
\subsection{Hybrid Methods}
Hybrid methods aim to combine the previously described Monte Carlo methods and
deterministic methods so as to obtain statistically meaningful results within a
tractable amount of computation time.
Generally, these hybrid methods are implemented such that the solution(s) from a
deterministic code are used to inform a Monte Carlo code. When this is done well, the
Monte Carlo code converges more quickly than without the information from the
deterministic solution(s).
Specifically, an adjoint and/or forward flux solution generated by a deterministic
transport solver is used to make a weight window map for a Monte Carlo run. As was
noted earlier, weight windows are most effective when specified well and when used in
conjunction with other variance reduction techniques. Because developing effective
weight window maps can be labor-intensive and require a user to have significant
\textit{a priori} knowledge about the problem being solved, automated hybrid methods
have been developed to couple deterministic solutions to Monte Carlo transport
calculations.
\section{Previous Work}
Substantial effort has been placed into the development and automated execution of
hybrid methods. This section will discuss previous work in this field with a
particular emphasis on the present state of hybrid methods as well as hybrid methods
that incorporate neutron direction of travel.
Here we begin by describing the CADIS (consistent adjoint driven importance sampling)
and \fwc\ (forward-weighted consistent adjoint driven importance sampling) methods,
which are the current state
of the art of Monte Carlo variance reduction parameter generation. These are
introduced first because, as will be described in more detail later, this work
employs solutions of the LDO equations in combination with the CADIS and \fwc\
methods via the ADVANTG software. Following this, we present a discussion of selected
work in angle-informed hybrid methods, focusing on variants of CADIS and \fwc.
\subsection{CADIS and FW-CADIS}
\subsubsection{CADIS}
\label{sec:cadis}
The CADIS method was introduced by Wagner and Haghighat in 1997 to automate Monte
Carlo variance reduction parameter generation \cite{cadis}. CADIS is based on the
source biasing and weight window techniques described above, does not depend heavily
on user experience, and was implemented as described in Reference \cite{cadis} in the
MCNP \cite{mcnp} code. Most importantly, the CADIS method produces source biasing
parameters and weight window target values such that particles are born with the target
weights. Since CADIS is used heavily in this work, it is pertinent to
describe the theory behind the method.
The goal of most Monte Carlo neutron transport problems is to calculate some response
(scalar flux, dose, etc.) at some location in phase-space. This can be posed as
evaluating the integral
\begin{equation}
R = \int_P \psi(P)\sigma_d(P)dP,
\label{eq:cadis_r1}
\end{equation}
\noindent where $R$ is the response of interest, $\psi$ is the neutron angular flux,
and $\sigma_d$ is some objective function in the phase-space $(\vecr, E, \bo) \in P$.
We now introduce the adjoint identity
\begin{equation}
\langle \psi^{\dagger}, H\psi\rangle=\langle\psi , H^{\dagger}\psi^{\dagger}\rangle
\label{eq:adj}
\end{equation}
\noindent where $H$ is the transport operator and the dagger superscript indicates an
adjoint quantity. Using Equation \ref{eq:adj} and some algebraic manipulations, it
can be shown that
\begin{equation}
R = \int_P \psi^{\dagger}(P)q(P)dP,
\label{eq:cadis_r2}
\end{equation}
\noindent where $\psi^{\dagger}$ and $q$ are the adjoint neutron angular flux function
and the particle source density, respectively. For a given problem with a vacuum
boundary condition, Equations \ref{eq:cadis_r1} and \ref{eq:cadis_r2} are equivalent
expressions for $R$. The adjoint neutron angular flux function $\psi^{\dagger}$ has
physical meaning as the expected contribution to the response $R$ from a particle in
phase-space $P$. In other words, the adjoint flux function is significant because it
represents the importance of those source particles to the response of interest.
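The omitted algebra can be sketched as follows, under the standard assumption that the forward and adjoint problems are $H\psi = q$ and $H^{\dagger}\psi^{\dagger} = \sigma_d$ (i.e., the adjoint source is taken to be the objective function):

```latex
% Sketch of the omitted step: with H\psi = q and
% H^{\dagger}\psi^{\dagger} = \sigma_d, the adjoint identity in
% Equation (eq:adj) links the two expressions for R.
\begin{align*}
R = \int_P \psi(P)\sigma_d(P)\,dP
  = \langle \psi, H^{\dagger}\psi^{\dagger} \rangle
  = \langle \psi^{\dagger}, H\psi \rangle
  = \int_P \psi^{\dagger}(P)q(P)\,dP.
\end{align*}
```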
To calculate the response with the Monte Carlo method, the independent variables are
sampled from the probability density function (PDF) $q(P)$. However, this may not be
the best PDF from which to sample, so an alternative PDF $\qhat(P)$ can be introduced
into the integral:
\begin{equation}
R = \int_P \left[\frac{\psi^{\dagger}(P)q(P)}{\qhat(P)}\right]\qhat(P)dP,
\end{equation}
\noindent where $\qhat(P) \geq 0$ and the integral of $\qhat(P)$ over $P$ is
normalized to unity. Then, the alternative PDF $\qhat(P)$ that will minimize the
variance of the response is given by
\begin{equation}
\qhat(P) = \frac{\psi^{\dagger}(P)q(P)}{\int_P\psi^{\dagger}(P)q(P)dP}.
\label{eq:qhat}
\end{equation}
\noindent Looking at Equation \ref{eq:qhat}, we see that the numerator is the
response from phase-space $P$ and the denominator is the total response $R$. Thus,
this definition of $\qhat(P)$ is a measure of the contribution from phase-space $P$
to the response. It is useful to bias the sampling of source particles by this ratio
of their contribution to the detector response.
Because the source variables are sampled from this new biased PDF, the statistical
weight of the source particles must be corrected such that
\begin{equation}
w(P)\qhat(P) = w_0q(P),
\label{eq:cadis_w1}
\end{equation}
\noindent where $w_0$ is the unbiased particle starting weight and is set equal to 1.
Substituting Equation \ref{eq:qhat} into Equation \ref{eq:cadis_w1} and solving for
$w(P)$ gives the following expression for the statistical weight of the particles:
\begin{equation}
w(P) = \frac{\int_P\psi^{\dagger}(P)q(P)dP}{\psi^{\dagger}(P)}
= \frac{R}{\psi^{\dagger}(P)}.
\label{eq:cadis_w2}
\end{equation}
\noindent Equation \ref{eq:cadis_w2} demonstrates an inverse relationship between
this adjoint (importance) function and the statistical particle weight.
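On a discretized phase-space, the source-biasing relations above reduce to elementwise array operations. The following is a hedged sketch: the arrays `adj_flux` ($\psi^{\dagger}$) and `source` ($q$) are hypothetical illustrative values, not output of any particular transport code. It numerically confirms that the biased PDF of Equation \ref{eq:qhat} is normalized and that the weights of Equation \ref{eq:cadis_w2} satisfy the consistency condition $w(P)\qhat(P) = w_0 q(P)$ with $w_0 = 1$.

```python
import numpy as np

# Hypothetical discretized phase-space: importance and unbiased source PDF
adj_flux = np.array([0.5, 2.0, 8.0, 1.0])  # psi-dagger in each source cell
source   = np.array([0.4, 0.3, 0.2, 0.1])  # unbiased source PDF q (sums to 1)

R     = np.sum(adj_flux * source)   # total response (denominator of qhat)
q_hat = adj_flux * source / R       # biased source PDF
w     = R / adj_flux                # consistent birth weights, w = R / psi-dagger

# The biased PDF is normalized, and the weight correction recovers the
# unbiased sampling: w(P) * qhat(P) = q(P)
print(np.isclose(q_hat.sum(), 1.0))
print(np.allclose(w * q_hat, source))
```

Note the inverse relationship in action: the cell with the largest importance (8.0) is sampled most heavily by `q_hat` but its particles are born with the smallest weight.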
Now, let us consider the
transport process in this context. The integral transport equation for particle
density in the phase-space $P$ is given by
\begin{equation}
\psi(P) = \int K(P'\rightarrow P)\psi(P')dP' + q(P)
\label{eq:cadis_nte1}
\end{equation}
\noindent where $K(P'\rightarrow P)$ is the expected number of particles entering $dP$
about $P$ from an event in $P'$. Given the preceding discussion, we would like to
transform Equation \ref{eq:cadis_nte1} to be in terms of the biased source
distribution $\qhat(P)$. Defining
\begin{equation}
\psihat(P) = \frac{\psi(P)\psi^{\dagger}(P)}{\int \psi^{\dagger}(P)q(P)dP},
\end{equation}
\noindent we can write Equation \ref{eq:cadis_nte1} in terms of $\qhat(P)$ as
\begin{equation}
\psihat(P) = \frac{\psi^{\dagger}(P)}{\int \psi^{\dagger}(P)q(P)dP}
\int K(P'\rightarrow P)\psi(P')dP' + \qhat(P).
\label{eq:cadis_nte2}
\end{equation}
\noindent Equation \ref{eq:cadis_nte2} can also be written as
\begin{equation}
\psihat(P) = \int K(P'\rightarrow P)\psihat(P')
\left[\frac{\psi^{\dagger}(P)}{\psi^{\dagger}(P')}\right]dP' + \qhat(P)\:,
\end{equation}
\noindent which allows us to define
\begin{equation}
\khat(P' \rightarrow P) = K(P'\rightarrow P)
\left[\frac{\psi^{\dagger}(P)}{\psi^{\dagger}(P')}\right]
\end{equation}
\noindent and finally write
\begin{equation}
\psihat(P) = \int\khat(P' \rightarrow P)\psihat(P')dP' +\qhat(P).
\end{equation}
Because $K(P'\rightarrow P)$ is unknown, we simulate neutron transport in the usual
unbiased way and change the number of particles emerging in $P$ from an event in
$P'$ by the ratio of importances $\psi^{\dagger}(P)/\psi^{\dagger}(P')$. When this
ratio is greater than one, splitting occurs; when it is less than one, rouletting
occurs. The statistical weights of the particles resulting from splitting and/or
rouletting are then corrected such that
\begin{equation}
w(P)K(P' \rightarrow P)\left[\frac{\psi^{\dagger}(P)}{\psi^{\dagger}(P')}\right] =
w(P')K(P' \rightarrow P)
\end{equation}
\noindent or
\begin{equation}
w(P) = w(P')\left[\frac{\psi^{\dagger}(P')}{\psi^{\dagger}(P)}\right].
\label{eq:cadis_w3}
\end{equation}
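The splitting/rouletting step can be sketched as follows. This is an illustrative implementation, not the one from the CADIS paper: `r` is the importance ratio $\psi^{\dagger}(P)/\psi^{\dagger}(P')$ across a transport event, and in both branches the expected total weight of the emerging particles equals the incoming weight `w`.

```python
import random

def split_or_roulette(w, r, rng=random.random):
    """Split or roulette a particle of weight w given importance ratio r."""
    if r >= 1.0:
        # split: int(r) copies, plus one more with probability frac(r);
        # each copy carries the corrected weight w / r
        n = int(r)
        if rng() < r - n:
            n += 1
        return [w / r] * n
    # roulette: survive with probability r, weight boosted to w / r
    if rng() < r:
        return [w / r]
    return []

print(split_or_roulette(1.0, 3.0, rng=lambda: 0.9))   # split into 3 copies of weight 1/3
print(split_or_roulette(1.0, 0.5, rng=lambda: 0.25))  # roulette survivor: [2.0]
```

In the splitting branch the expected particle count is $r$ and each copy has weight $w/r$, so the expected emerging weight is $w$; in the roulette branch the survival probability $r$ times the boosted weight $w/r$ again gives $w$.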
\noindent Variance is thus reduced when all source and transport sampling is made
proportional to importance. The source particles' energy and position
are sampled from the biased source distribution
\begin{equation}
\qhat(\vecr,E) =
\frac{\phi^{\dagger}(\vecr,E)q(\vecr,E)}
{\int_V\int_E\phi^{\dagger}(\vecr,E)q(\vecr,E) dE\ d\vecr}
= \frac{\phi^{\dagger}(\vecr,E)q(\vecr,E)}{R}.
\label{eq:cadis_sb}
\end{equation}
\noindent Here, the physical meaning of the numerator is the detector response from
space-energy element $(d\vecr,dE)$ and the denominator is the total detector response
$R$. As in the preceding derivation, this ratio is a measure of the particles'
relative contribution to the detector response.
To bias particles undergoing the transport process, the weight window technique is
applied. Weight window lower bounds $w_{\ell}$ must be calculated such that the
statistical weights defined in Equation \ref{eq:cadis_w2} fall at the center of the
weight window intervals. The width of a given interval is characterized by the ratio
$c_u = w_u/w_{\ell}$ of the upper and lower weight window bounds. The
weight window lower bounds are then given by
\begin{equation}
w_{\ell}(\vecr,E) = \frac{w}{\left(\frac{c_u +1}{2}\right)}
= \frac{R}{\phi^{\dagger}(\vecr,E)}\frac{1}{\left(\frac{c_u +1}{2}\right)}.
\label{eq:cadis_tb}
\end{equation}
\noindent Using this definition, the weight window technique then performs particle
splitting and/or rouletting consistent with the statistical weight given in Equation
\ref{eq:cadis_w3}.
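Equation \ref{eq:cadis_tb} is straightforward to evaluate on a space-energy grid. The sketch below uses a hypothetical adjoint scalar flux array and illustrative values of $R$ and $c_u$ (none of which come from an actual calculation) and checks that the target weight $R/\phi^{\dagger}$ sits at the center of each resulting window.

```python
import numpy as np

# Hypothetical adjoint scalar flux phi-dagger(r, E) on a flattened grid
adj_scalar_flux = np.array([1.0e-2, 1.0e-1, 1.0, 1.0e1])
R   = 5.0   # total detector response from the adjoint calculation
c_u = 5.0   # ratio of upper to lower weight-window bounds

w_center = R / adj_scalar_flux             # target weights, w = R / phi-dagger
w_lower  = w_center / ((c_u + 1.0) / 2.0)  # weight-window lower bounds
w_upper  = c_u * w_lower                   # weight-window upper bounds

# Target weight lies at the window center: (w_l + w_u) / 2 = w
print(np.allclose((w_lower + w_upper) / 2.0, w_center))
```

The centering property follows directly: $w_{\ell} + w_u = w_{\ell}(1 + c_u) = 2w$, so the arithmetic midpoint of each window is exactly the target weight.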
The key result of the foregoing discussion is that the statistical weights of the
source particles are within the bounds of the weight windows. In other words, the
source-biasing parameters and the weight window target values are consistent. This
prevents particles from being immediately split or rouletted at birth and thus
avoids the resultant degradation in computational efficiency. We refer the reader
to Reference \cite{cadis} for a complete discussion of results and analysis of the initial
implementation of the CADIS method.
Note that Equations \ref{eq:cadis_sb} and \ref{eq:cadis_tb} use the scalar adjoint
flux rather than the angular flux. This is consistent with the
original implementation of the CADIS method in MCNP as well as the implementation of
the method in ADVANTG; the scalar adjoint neutron flux function is used in both tools
\cite{cadis, advantg}. Historically, this was done to reduce the memory required for
the deterministic calculation as well as the weight window map. Additionally, the
angular adjoint flux resulting from an \sn calculation was not considered to be
sufficiently accurate due to the limited number of discrete angles used for the
calculation. Further, weight windows that are functions of angle have generally
not been developed.
The CADIS method is very effective for automated optimization of localized detectors
but falls short of efficiently optimizing distributed responses. FW-CADIS, discussed
in the next section, was developed to address this issue.
\subsubsection{FW-CADIS}
\label{sec:fwcadis}
FW-CADIS, introduced
by Wagner, Blakeman, and Peplow in 2009, is a variation on the CADIS method to
increase the efficiency of Monte Carlo calculations of distributions and responses at
multiple localized detectors \cite{fwcadis}.
For this global variance reduction method, a response with uniformly low statistical
uncertainty across all of phase-space is desired. One way to target this in a given
Monte Carlo simulation is to distribute the particles uniformly throughout the system.
Though this is not a physical response, it serves as a proxy for the goal of obtaining
uniform uncertainty. It also suggests the possibility of developing an adjoint
importance function that represents the importance of particles to achieving
uniform particle distribution.
With this new problem formulation, the problem of calculating particle density is cast
into the response formulation:
\begin{equation}
R = \int_{4\pi}\int_{V}\int_{E}\psi(\vecr,E,\bo)f(\vecr,E,\bo)dE\ dV\ d\bo,
\end{equation}
\noindent where $f(\vecr,E,\bo)$ is some function that converts angular neutron flux
to Monte Carlo particle density. Recall that the angular neutron flux can be defined
as the product of the physical particle density $n$ and velocity $v$:
\begin{equation}
\psi(\vecr,E,\bo) = n(\vecr,E,\bo)v(\vecr,E,\bo)
\label{eq:psi}
\end{equation}
\noindent and the physical particle density is related to the Monte Carlo particle
density $m$ and the average particle weight $\bar{w}$ by
\begin{equation}
n(\vecr,E,\bo) = \bar{w}(\vecr,E,\bo)m(\vecr,E,\bo).
\label{eq:dens}
\end{equation}
\noindent Using Equations \ref{eq:psi} and \ref{eq:dens}, the Monte Carlo particle
density can be estimated by
\begin{equation}
m(\vecr,E,\bo) = \frac{n(\vecr,E,\bo)}{\bar{w}(\vecr,E,\bo)}
= \frac{\psi(\vecr,E,\bo)}{\bar{w}(\vecr,E,\bo)v(\vecr,E,\bo)}
\end{equation}
\noindent and the total Monte Carlo density can be estimated by
\begin{equation}
R = \int_{4\pi}\int_{V}\int_{E}\psi(\vecr,E,\bo)