Advisory Committee on Nuclear Waste 133rd Meeting, March 20, 2002
Official Transcript of Proceedings
NUCLEAR REGULATORY COMMISSION
Title: Advisory Committee on Nuclear Waste
133rd Meeting
Docket Number: (not applicable)
Location: Rockville, Maryland
Date: Wednesday, March 20, 2002
Work Order No.: NRC-283 Pages 117-184
NEAL R. GROSS AND CO., INC.
Court Reporters and Transcribers
1323 Rhode Island Avenue, N.W.
Washington, D.C. 20005
(202) 234-4433
UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
+ + + + +
ADVISORY COMMITTEE ON NUCLEAR WASTE
133RD MEETING
+ + + + +
WEDNESDAY
MARCH 20, 2002
+ + + + +
The meeting commenced at 1:00 p.m. in Conference
Room 2B3, Two White Flint North, Rockville, Maryland,
George M. Hornberger, ACNW Chairman, presiding.
PRESENT:
GEORGE M. HORNBERGER ACNW Chairman
B. JOHN GARRICK ACNW Member
MILTON N. LEVENSON ACNW Member
RAYMOND G. WYMER ACNW Member
STAFF PRESENT:
JOHN T. LARKINS Exec. Dir.-ACRS/ACNW
SHER BAHADUR Assoc. Dir.-ACRS/ACNW
HOWARD J. LARSON Spec. Asst.-ACRS/ACNW
LYNN DEERING ACNW Staff
LATIF HAMDAN ACNW Staff
MICHAEL LEE ACNW Staff
RICHARD K. MAJOR ACNW Staff
ALSO PRESENT:
SITAKANTA MOHANTY
RICHARD CODELL
I-N-D-E-X
AGENDA PAGE
High-Level Waste Performance Assessment
Sensitivity Studies
Sitakanta Mohanty. . . . . . . . . . . . . 120
Richard Codell . . . . . . . . . . . . . . 144
Discussion . . . . . . . . . . . . . . . . . . . 166
Adjourn. . . . . . . . . . . . . . . . . . . . . 184
P-R-O-C-E-E-D-I-N-G-S
(1:07 p.m.)
CHAIRMAN HORNBERGER: The meeting will
come to order, the afternoon session here of the 133rd
meeting of the ACNW. I have a note for the Committee.
At three o'clock, we are to go over to the neighboring
building to get new badges. And so we have an
appointment at three o'clock. That shouldn't be a
problem because we have a 2:45 to three o'clock break
scheduled, and I don't think that I will steal so much
of the time that we've given over to Dick and to
Sitakanta to do a presentation.
Dick was originally scheduled to talk to
us about the sensitivity studies for the waste
package, and Howard tells me he's going to talk about
anticipatory research instead.
(Laughter.)
Although I may be corrected and it may in
fact revert back to the sensitivity studies.
Sitakanta, are you going to go first or is Dick?
MR. MOHANTY: I'm going first. Good
afternoon, ladies and gentlemen. My name is Sitakanta
Mohanty. Myself and Dr. Richard Codell
will make this presentation. I will go over the first
part of the presentation, and Dr. Codell will make the
presentation on the second half of this talk.
The title of this presentation is
"Sensitivity and Uncertainty in the NRC Total System
Performance Assessment TPA 4.1 Code" -- I should add,
results. Okay. Here is an outline of this
presentation. First, we will briefly address the
purpose of this analysis of uncertainty and
sensitivity. Then we will present an overview of the
Total System Performance Assessment preliminary
results. And then we will talk about the sensitivity
analysis results that have been obtained so far. Then
some effects of treatment of data, especially variance
and uncertainty on the expected dose estimation. Then
finally we will talk about the preliminary risk
insights from the sensitivity and uncertainty
analysis.
Under the sensitivity analysis results,
that is the third bullet, we have three specific
presentations: One is characterized as the parametric
sensitivity analysis, then we will talk about the
distributional sensitivity analysis, and the third
one will be the subsystem, or barrier component,
sensitivity analysis. I will be talking about the
first two bullets and a portion of the sensitivity
analysis results, especially the distributional
sensitivity analysis and the subsystem, or barrier
component, sensitivity analysis.
First, here are the purposes of the
analysis. As you all know, NRC staff, in conjunction
with the staff from the Center for Nuclear Waste
Regulatory Analyses, have been involved over several
years in developing the Total System Performance
Assessment Code. The TPA Code represents an
independent approach to assist NRC's review of DOE's
performance assessment.
NRC's performance assessment tools are
intended to be used for gaining risk insights and to
risk inform the pre-licensing and the potential
licensing activities proactively and reactively. For
example, the development of the Yucca Mountain Review
Plan that you will hear about tomorrow, the
development of analysis tools by the various key
technical issue, or KTI, groups, and the confirmatory testing,
all these have been and will continue to be influenced
by the analysis that is performed using the Total
System Performance Assessment or the TPA groups of
tools.
As far as the reactive work is concerned,
staff is particularly looking at improving capability
to review license applications, such as DOE's
performance assessment results and probe DOE's
assertions regarding the repository performance, and
identify problems, such as risk dilution. Some
of these examples we will cover during the course of
this presentation.
Staff will also look at DOE's sensitivity
and uncertainty analysis approaches and also will try
to identify, by doing its independent analysis, which
model assumptions matter and what the degree of
importance of each is to the overall performance.
And it will also verify DOE's assertion regarding the
barrier importance.
These activities will require, one, staff's
understanding of the system as a whole; therefore,
getting into the various components of the total
system is very important. Staff will use these tools
and knowledge to understand the system as a whole
and to understand the factors that are important to
safety performance.
Here I will give you just a very brief
background before we move on to the results. These
are also some of the caveats, in the sense that you
have heard the results from the TPA 3.2 sensitivity
analysis in the past, and this analysis represents
DOE's latest design. And we have not done any analysis
using DOE's low temperature concept, because their
high temperature concept is considered as the base
case.
The Total System Performance Assessment
Code, or the TPA Code, currently has about 950
parameters, out of which 330 parameters are sampled.
So this is a pretty large problem for any kind of
Monte Carlo analysis for conducting sensitivity and
uncertainty analysis. That means 620 parameters
are not sampled; they're fixed at constant values, at
what we believe to be the best available values. And
if necessary, those values can be varied if we want to
support the current sensitivity analysis.
As for the results that will be presented,
the results on alternative conceptual model analysis
will not be shown a second time. However, in the
context of the Total System Performance Assessment,
alternative conceptual model studies are done on a
case-by-case basis.
And we would like to add the note that these
analyses are performed mainly for developing staff
understanding, and the analyses that will be
presented are not necessarily mandated by the
regulatory requirements. And the results are
preliminary in the sense that this sensitivity
analysis is currently under development. The report
is not ready. So what you are seeing today is a
snapshot of the results that we have come up with so
far. The results will be perhaps finalized in several
months.
Here I will start with the performance
assessment results. The performance measure is the
peak expected dose to the reasonably maximally exposed
individual. And the results will be shown essentially
for two scenarios. The first one is the nominal case
scenario, which is characterized by the slow
degradation over time leading to ground water release.
And of the disruptive event scenarios, the only one we
will present here is the igneous activity. The other
two disruptive event scenarios are the seismic
activity as well as the faulting activity. However,
seismicity is included in the nominal case, whereas we
are not presenting results on faulting because we
don't see much sensitivity in the faulting event
results.
And as far as the nominal case scenarios
are concerned, essentially the risk is computed by
averaging the results from Monte Carlo realizations,
which is in terms of dose as a function of time. And
then the peak is determined from the expected dose
curve. The disruptive event scenario, on the other
hand, requires some specialized calculations because
of the low event probability. Therefore, a special
convolution has been used to take into consideration
all possible events prior to the evaluation time. If
there are events prior to the evaluation time, those
are appropriately factored in so that we get a smooth
risk curve.
First, this is the result of the nominal
case scenario. In the figure, we are presenting dose
versus time, but before we go through the figure, let
me just highlight that by the time the regulation was
out, was finalized, this work was already underway.
Therefore, some of the things that you are seeing here
are still different from what is mandated by the rule.
For example, the well pumping rate is varied in these
calculations. The receptor group is located at 20
kilometers. Other than that, I think these are the
main ones that are different compared to the rule.
And just to highlight, what we have seen
so far is that there are no corrosion failures in
10,000 years, no seismic failures in the nominal case.
The nominal case is the one which is defined by
probability pretty close to one, and we are presenting
here results from 350 realizations. So primarily the
doses are resulting from the initially defective waste
packages, the number of which is varied between one
and 88. To compare, we have a total of 8,877
waste packages, with each waste package having about
7.89 MTU of spent fuel.
CHAIRMAN HORNBERGER: Sitakanta, what does
that probability approximately one, what does that
mean?
MR. MOHANTY: Because this is the -- okay.
If we subtract the probability for the disruptive
events, such as volcanism, then it's a very large
number. This number is pretty close to one.
Also, there are several important things
to observe in this figure. The red curve represents
the expected dose curve, which is an arithmetic
average of the individual realizations which are
represented in this blue color. The dark blue color
is the 95th percentile curve, and the green color
represents the 75th percentile curve. What this
entails is that until about 6,000 years the expected
dose curve exceeds the 95th percentile. And
throughout the 10,000 years, the expected dose curve
exceeds the 75th percentile.
So this gives some sort of indication that
the expected dose curve appears to be quite robust in
determining the peak expected dose. The peak expected
dose is taken from the expected dose curve, and
clearly this indicates that the peak of the expected
dose curve is pretty close to 10,000 years. To be
exact, in our calculation it is showing up at 9,769 years.
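A minimal sketch of how the expected dose curve and its
percentiles are formed from Monte Carlo realizations is
given below; the dose histories are synthetic stand-ins,
not TPA output, and every number is made up:

    import numpy as np

    rng = np.random.default_rng(0)
    n_real, n_times = 350, 200
    t = np.linspace(0.0, 10_000.0, n_times)

    # Synthetic stand-in for TPA output: each realization is a dose
    # history whose peak magnitude is lognormal (heavy-tailed) and
    # whose peak time varies from run to run.
    peaks = rng.lognormal(mean=-4.0, sigma=1.5, size=n_real)
    centers = rng.uniform(2_000.0, 10_000.0, size=n_real)
    doses = peaks[:, None] * np.exp(-((t - centers[:, None]) / 1_500.0) ** 2)

    expected = doses.mean(axis=0)        # arithmetic average per time step
    p95 = np.percentile(doses, 95, axis=0)
    p75 = np.percentile(doses, 75, axis=0)

    # The regulatory measure is the peak of the expected-dose curve;
    # with a heavy-tailed dose distribution that curve can ride above
    # the 95th percentile, as described in the talk.
    print("peak expected dose:", expected.max(),
          "at t =", t[np.argmax(expected)])
    print("fraction of times where mean > 95th percentile:",
          (expected > p95).mean())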
MEMBER GARRICK: Now, this is just from
defective failures.
MR. MOHANTY: These are all from defective
failures.
MEMBER GARRICK: Because this is not the
peak dose for much later times.
MR. MOHANTY: Right.
MEMBER GARRICK: Yes.
MR. MOHANTY: Okay. Corresponding to that
figure, here are some additional results. The figures
on the left represent the cumulative release from the
saturated zone because that is the end point after the
transport through the geosphere, and after that it is
the biosphere. So this is the interface between the
geosphere and the biosphere. The cumulative release
rates are presented on the Y axis of this curve, for
10,000 years here and 100,000 years here. And these
values are presented in log scale so that you can see
these numbers, which are very small, smaller than one.
And because some of these are smaller than one, the
log is a negative number here.
Clearly, it shows that technetium is
dominating, also iodine-129 and chlorine-36. And here
is the corresponding dose curve. It indicates that
most of the dose -- about 52 percent -- is coming from
technetium-99, with about 25 percent of the dose
coming from iodine and 20 percent coming from
neptunium-237; the others are sort of insignificant in
terms of dose contribution.
I have just put in a figure for 100,000
years for comparison purposes. This shows that
if you go beyond 10,000 years, the same nuclides are
dominating, but you can see some finite values --
there are no negative numbers here in the log space.
Next, here is the result from the
disruptive event scenario, as I mentioned earlier.
The faulting event -- we are not showing the results
for faulting event, and the seismicity was included as
part of the base case, and we did not see any failures
in 10,000 years. So, essentially, this is a
comparison between the nominal case scenario and the
igneous activity scenario.
So, clearly, this shows that the peak in
the igneous activity scenario, which has a recurrence
rate of ten to the minus seven per year, the dose
-- the peak expected dose occurs much earlier compared
to the nominal case. As I mentioned earlier, in the
nominal case scenario, the peak expected dose occurs
close to 10,000 years.
And to obtain this smooth curve, we needed
about 4,200 realizations, coupled with the convolution
integral approach, to obtain this curve. And this drop
here is perhaps because we have not taken one step
beyond 10,000 years. Because if we take a step beyond
10,000 years, this line is going to flatten out or
will perhaps slightly go up.
For the early release, this peak from the
igneous activity event, which is 0.35 millirems per
year, occurs at 245 years. The dominant
radionuclide is americium-241, and this dose is
primarily because of high activity nuclides, of which
americium-241 is one.
Okay. Next, I will briefly go over the
stability of the peak expected dose. As such, because
it is an expected dose, we should expect a lot of
stability in that number. We are using 350
realizations, and we considered that to be quite
stable. But I also wanted to show you some variation,
what happens if we go much beyond 350 realizations.
This table shows that we have gone beyond
500. In fact, we have gone all the way to 4,000
realizations, although this one shows only up to 3,000
realizations. And the result varies between 2.48 times
ten to the minus two millirems per year and 3.24 times
ten to the minus two.
So, essentially, we don't see nice and
smooth convergence. And we did some investigation to
find out what might be the reason for that. It turned
out that when we plotted the peak dose as a function
of the number of sampled realizations, there are some
extreme values. That is what is causing this kind of
change in the peak expected dose value. And we have
noticed that this kind of realization shows up about
once in 2,000 realizations. This is something we are
continuing to investigate further.
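This kind of stability check can be sketched in a few
lines; the heavy-tailed peak-dose sample below is a
synthetic stand-in chosen only to show how rare extreme
realizations keep the running estimate from converging
smoothly:

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in: individual-realization peak doses with a
    # heavy tail, so a few extreme values move the running estimate.
    peak_doses = rng.lognormal(mean=-4.5, sigma=2.0, size=4_000)

    for n in (350, 500, 1_000, 2_000, 3_000, 4_000):
        print(f"{n:>5} realizations -> estimate of expected peak dose:",
              round(peak_doses[:n].mean(), 5))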
Next, we will start with sensitivity
analysis. I will be talking about the distributional
sensitivity analysis and the subsystem, or barrier
component, sensitivity analysis. And after me, Dr.
Codell will start with the parametric sensitivity
analysis.
This distributional sensitivity analysis
is done primarily to understand how the peak expected
dose is going to be influenced if the distribution
function assumptions that we have received from
various KTIs are not correct, or at least to identify
if there are some areas where staff needs to focus
more to determine if anything can be improved.
We have followed two approaches. One is
using a fixed range, from here to here, and changing
the mean of the distribution by ten percent. So we
have shifted the mean by ten percent. And in the second
approach, we have completely changed the distribution
function type. That means if in the nominal case we
had a normal distribution, we changed that to a
uniform distribution to see if that has a major
impact. Similarly, if the distribution had a log
uniform distribution, we changed that to log normal,
because in log space we thought that would capture the
difference.
Instead of working with all 330
parameters, we thought changing only the top ten
influential parameters that we have identified by
using other methods would be more appropriate, because
those parameters are already showing a lot of
sensitivity. That's why in this talk we will
primarily focus on the top ten influential parameters.
And we have used two different sensitivity
measures. One is the change to the peak expected
dose before and after changing the distribution
function type; the other is an effective distance
between the CDFs. The CDFs are constructed by using
the peak dose from individual realizations.
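A small sketch of both perturbations and both
sensitivity measures follows, using a hypothetical
one-parameter model and the mean of the individual peak
doses as a simplified surrogate for the peak expected
dose; none of this comes from the TPA code:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n = 350

    def model(x):
        # Hypothetical stand-in mapping one sampled parameter to a
        # realization's peak dose.
        return 1e-2 * np.exp(3.0 * x)

    base    = model(rng.normal(0.50, 0.10, n))   # nominal distribution
    shifted = model(rng.normal(0.55, 0.10, n))   # mean shifted by 10%
    # Type swapped to uniform with a similar mean and spread:
    swapped = model(rng.uniform(0.50 - 0.17, 0.50 + 0.17, n))

    for name, pert in (("mean shift", shifted), ("type swap", swapped)):
        change = 100.0 * (pert.mean() - base.mean()) / base.mean()
        dist = stats.ks_2samp(base, pert).statistic  # CDF distance
        print(f"{name}: change in mean peak dose = {change:+.1f}%, "
              f"CDF distance = {dist:.3f}")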
CHAIRMAN HORNBERGER: Sitakanta, when you
say you looked at the top ten in importance,
how did you determine that, from a different
sensitivity analysis?
MR. MOHANTY: Yes. Those were determined
from parametric sensitivity analysis that Dick is
going to talk about.
For the distributional sensitivity
analysis, the two kinds that I described -- these two
figures are showing their results. For the ten percent
shift to the mean with a fixed range, the results are
shown here. And for the complete change to the
distribution function, the results are shown by these
blue bars.
Let me describe the results from the shift
to the mean by ten percent. Clearly, it shows that
when the mean of the distribution is increased by ten
percent, there is a 150 percent increase in the peak
expected dose for the waste package flow
multiplication factor.
The second one that came out to be very
important is the spent fuel dissolution -- the
pre-exponential term that defines the spent fuel
dissolution rate. A 57 percent change to the peak
expected dose occurred because of a ten percent shift
to that distribution.
Similarly, when we changed the
distribution function type, we did not see that kind
of effect for the two that showed up as important when
the mean was shifted. Rather, the two that turned out
to be important are the drip shield failure time and
the neptunium retardation in alluvium. So this
clearly indicates that staff should revisit the input
parameter distributions; at least the ones that are
showing up as important in the sensitivity analysis
should be looked at further, because these effects can
be cumulative. So when we add these things up over
many sampled parameters, that could influence the peak
expected dose that we compute from the nominal case.
Next we'll talk about the subsystem, or
barrier component, sensitivity analysis. This analysis
is just an extension of the sensitivity analyses we
are doing -- first the parametric sensitivity
analysis, then the distributional sensitivity
analysis. The system can be broken down in many
different ways. One can break the system along the
lines of subprocesses, but here we have broken it down
along the lines of physical components, and we are
primarily interested in seeing how much sensitivity we
are getting from individual components. Breaking
down these components is very subjective, because one
can have more components than what we have shown here,
but it appears to be adequate for our purpose.
But it is very important to highlight here
that this analysis should not be confused with
multiple barrier analysis. This is not a proposal to
do the multiple barrier analysis this way. Therefore,
it should be clearly noted that there is no regulatory
requirement for this kind of analysis.
And I would like to draw your attention to
the representation of the repository in this column.
At the top of this column is the
unsaturated zone above the repository. Below
that we have the drip shield. Below that we have the
waste package, then the waste form, the invert, the
unsaturated zone below the repository, and the
saturated zone. So here you are seeing seven barrier
components, but in the results that I will present in
the next two slides, we show only six, because the
unsaturated zone above the repository and below the
repository will be treated as one entity.
And we will also show results from the
one-on analysis, one-off analysis and cumulative
addition analysis, because they all provide different
insights into the system.
CHAIRMAN HORNBERGER: Sitakanta, I don't
think I fully understood something. You said that this
was not intended to be an analysis of barriers, but
then I've lost track of why you're doing this.
MR. MOHANTY: We are doing this purely to
supplement the sensitivity analysis. We are trying to
group them together. It's one way of looking at a
group of parameters, so we thought maybe grouping the
parameters along the line of a physical entity makes
it easier to understand.
My purpose in showing that column in the
previous slide was that these should be viewed as
individual cases. Each column here represents one
case. The group on the left represents the one-off
analysis; the group on the right represents one-on
analysis. And the first column under the one-off
analysis represents the nominal case. And the row at
the bottom represents the percentage changes.
I would like to draw your attention to
these numbers. The numbers on the
left-hand side are changes with respect to the
nominal case results. The numbers on the right-hand
side are with respect to the case where all
barriers are suppressed, and the suppression of a
barrier is represented by the gray color. That means
if we go to the second column here, the
drip shield as a barrier has been suppressed. Under
the third column, the waste package as a barrier has
been suppressed. Therefore, when the drip shield
barrier is suppressed, the number at the bottom shows
that the peak expected dose changed by only 34
percent. When the waste package barrier was
suppressed, the peak expected dose changed by 62,200
percent. So likewise, these numbers represent changes
with respect to the nominal case result, expressed as
percentages.
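The percentage bookkeeping behind the one-off and
one-on columns can be spelled out in a few lines of
Python. The 0.021 millirem per year nominal value
appears later in this discussion; the other doses are
made-up placeholders, chosen only to show why a one-off
change can run to tens of thousands of percent while a
one-on reduction can never exceed 100 percent:

    # Illustrative arithmetic only; all doses except the nominal value
    # are hypothetical placeholders.
    nominal = 0.021          # peak expected dose, nominal case (mrem/yr)
    all_suppressed = 15.0    # hypothetical dose, every barrier suppressed

    # One-OFF: suppress a single barrier and compare against the
    # nominal case; an increase is unbounded.
    one_off = 13.1           # hypothetical dose, waste package suppressed
    print("one-off change: %+.0f%%"
          % (100.0 * (one_off - nominal) / nominal))

    # One-ON: restore a single barrier and compare against the
    # all-suppressed case; a reduction is capped at 100 percent.
    one_on = 0.015           # hypothetical dose, only waste package on
    print("one-on reduction: %.1f%%"
          % (100.0 * (all_suppressed - one_on) / all_suppressed))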
CHAIRMAN HORNBERGER: And can you
enlighten me just a little bit by what you mean by
suppressed?
MR. MOHANTY: By suppression, we mean that
the function of the -- okay, from a purely technical
point of view here, the drip shield fails at a certain
time; it has a distribution. By suppression here we
imply that drip shield failure has been shifted back
to time zero. That means drip shield may have failed
at time zero, but if there is no infiltration because
of the thermal hydrology until 10,000 years, no water
is going to contact the waste package.
CHAIRMAN HORNBERGER: So the drip shield
-- the suppression of the drip shield is equivalent to
assuming that there is no drip shield.
MR. MOHANTY: Right, right.
CHAIRMAN HORNBERGER: Okay. Now, if we go
to the waste package, that's a little more difficult
for me. Does that mean that there is no waste
package?
MR. MOHANTY: Right. Here, let me
give you the detail, if I can find my cursor
here. When the waste package is gone as a barrier,
what it implies is this: the waste package has two
functions. The first is its failure time, and the
second is that it does not allow water to enter into
the waste package, through the flow multiplication
factor -- only a fraction of the waste package surface
area will contribute to the water getting into the
waste package and coming into contact with the spent
fuel. So when the waste package is gone, when the
waste package is removed as a barrier, that implies
that the waste package is failing at time zero. And,
also --
MEMBER GARRICK: Does that also affect the
composition of the water?
MR. MOHANTY: No. So that is an important
point we want to make, that when we are doing this
analysis we are not changing the physical processes
that are going on.
MEMBER GARRICK: So you're really not
accounting for the interactive effects.
MR. MOHANTY: Right, because our purpose
is primarily the sensitivity --
MEMBER GARRICK: So this is not so much an
attempt to see the physical event and the progression,
as it is to deal with this question of sensitivity and
uncertainty.
MR. MOHANTY: Right. Yes.
MEMBER GARRICK: Okay.
MR. MOHANTY: But I think it is also
important to point out here that this removal, this
one-off analysis, especially when it comes to the
waste package, only affects the waste packages that
are already seeing water. So two things are happening:
a package is seeing water early and, second, more
water is getting into the waste package. But there are
lots of waste packages that do not see that water, as
long as the unsaturated zone above the repository
horizon is there.
So, similarly, the numbers on the right
should maybe be viewed as decreases, so there
should be negative numbers. Here it means that
when all barriers are suppressed and the drip shield
is added as the only barrier -- the spent fuel would
be in the waste package somewhere here, but now we
have no waste package -- in that case the drip shield
has a performance of about 63 percent. So this allows
us to see how much performance is coming from
these individual barrier components.
So here it shows that when the waste
package barrier is added to a case where all barriers
are suppressed, we have a 99.9 percent reduction in
peak expected dose. Whereas, when the unsaturated
zone is added, we have a reduction of 96 percent, and
when the saturated zone is added, with the others
suppressed, it's about 94 percent.
So we have carried this analysis a little
further, as an addition to the one-on analysis.
Here we are adding those cumulatively. The first
column here represents when all barriers are
suppressed. The second column is similar to the
saturated zone column you saw on the previous page.
But when we add the unsaturated zone to the saturated
zone, the peak expected dose is reduced by 99.2
percent. Of course there are several decimal places
that I'm not showing.
When we add the invert to the unsaturated
zone and saturated zone, that reduces it by 99.6
percent. And by the time we reach the waste package,
this is 99.99, though maybe the number changes in
about the seventh or eighth decimal place. So then
when we add all the barriers, all the barrier
components, we regain the nominal case.
Then we grouped all these barrier
components together to reflect the engineered barrier
and the natural barrier. Clearly, as expected and as
we observed from the individual component
sensitivities, when we group them together and compare
with respect to the nominal case, when the engineered
barrier is suppressed we see a substantial increase in
the peak expected dose, which is about 808,233
percent. And when the engineered barrier system is
there but the natural barrier is suppressed, it's
about 58,233 percent.
I think that ends my presentation. Now
Dr. Richard Codell will take over.
DR. CODELL: Please don't adjust your
sets. We're experiencing technical difficulties here.
CHAIRMAN HORNBERGER: While we're waiting,
I'll interject and try to ask Sitakanta a tough
question. Having looked at all of that one-on and
one-off analysis, I'm not sure what message I'm
supposed to take from that.
MR. MOHANTY: We are continuing to conduct
this analysis. These results are quite fresh. We are
also trying to figure out how they are going to
contribute to the risk significance. The main reason
we did this kind of analysis is to see if suppressing
any one barrier component is shadowing the effect of
the other barriers. So to determine how these
individual barrier components are performing, we had
to separate them out individually.
For example, if we go to -- I think it
should be Slide Number 13, the one-on analysis for the
drip shield, we see that when all other barriers are
suppressed, the drip shield reduces dose by 63
percent. We have not devised any
better method at this time to determine whether the
effect of the drip shield is 63 percent or something
else; simply by looking at the one-off analysis we
could not figure that out.
Also, another reason for doing this
analysis is that if something is modeled, then we can
capture that effect in the traditional sensitivity
analysis. But if the model doesn't represent it,
then sensitivity analysis cannot capture it, because
it is not in the model. We do our best to capture
everything possible in the model, but there is also
uncertainty about the models themselves.
So there are two important aspects here.
One is uncertainty in the models themselves, number
one; and number two is the shadowing effect of one
barrier component over others. So, therefore, by
adding them cumulatively, starting from the saturated
zone and coming up all the way to the level of the
drip shield or the unsaturated zone above it, that
gives us some insights. Simply from those numbers we
can determine if there are any shadowing effects.
MEMBER GARRICK: I guess the thing that --
(Pause.)
DR. CODELL: Maybe I should, to save time,
just work from the viewgraphs. Hopefully we'll be --
in a minute or two we'll have the presentation, and
I'll be able to put it up on the screen.
I cover the parametric sensitivity
analysis. We're on Slide 15, covering the nominal
scenario only. The purpose is to determine the
sensitivity of parameters singly and
also in groups. The grouping is something new this
year. There are two methods we're
using for parametric sensitivity. The first is
statistical methods that evaluate sensitivity on a
previously calculated pool of vectors that were
generated by the TPA 4.1 Code. In this case, we're
generally using 4,000 vectors to cover the range of
the parameters. And then there are non-statistical
techniques that get at sensitivity a second way,
which is to redirect the calculations to
extract the maximum sensitivities from the
models.
We generally look at the peak of each
realization and look at the sensitivity of that, even
though the standard is based on something else; that
is, the peak of the mean dose. Starting on the next
slide, the statistical methods: these include
primarily regression on raw and transformed variables;
non-parametric tests, like the Kolmogorov-Smirnov test
and sign tests; the parameter tree approach; and
some new work on a cumulative distribution
function sensitivity method and some other work
recently developed by Sitakanta Mohanty and Justin Wu
at the Center.
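As an illustration of one of these statistical tools,
here is a sketch of regression on rank-transformed
variables against a toy response; the model, sample
size, and parameter count are arbitrary assumptions,
not the study's:

    import numpy as np

    rng = np.random.default_rng(3)
    n, k = 4_000, 5                   # pool of vectors, sampled parameters
    X = rng.uniform(size=(n, k))
    # Toy response: parameter 0 is linear, parameter 1 nonlinear,
    # the rest inert.
    y = 5.0 * X[:, 0] + np.exp(2.0 * X[:, 1]) + 0.1 * rng.normal(size=n)

    # Rank-transform both sides (useful for monotonic nonlinear models),
    # then ordinary least squares; the fitted coefficients rank the
    # parameters by influence.
    Xr = np.argsort(np.argsort(X, axis=0), axis=0) / (n - 1)
    yr = np.argsort(np.argsort(y)) / (n - 1)
    coef, *_ = np.linalg.lstsq(np.c_[np.ones(n), Xr], yr, rcond=None)
    print("rank-regression coefficients:", np.round(coef[1:], 3))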
Another method along these lines is
based on calculating the sensitivity of the mean dose
directly with respect to the means of the independent
parameters and also the variances of the independent
parameters. These are too new to really go into in
any detail; they're developmental.
The non-statistical methods include
differential analysis, the Morris method and the FAST
method. These are things that we covered before.
There's one new method -- factorial design of
experiments -- which has the unfortunate acronym DOE;
it's usually called DOE.
(Laughter.)
This is something that John Telford, of
the Office of Research, and I have been working on for
several months with some pretty impressive results, I
feel. I'll go into that in a little more detail.
Let me just take one second here to open
up the correct file.
(Pause.)
Okay. So we're at the bottom of Slide 16
now, starting at 17. The next slide shows a
tried and true method that we've been using for
several years now. We call it a composite statistical
method. This is to look at, in this case, six
statistical tests of various kinds, looking at
transformed and untransformed variables. And it's
really a seat-of-the-pants kind of method, but it
works quite well.
We used six statistical tests with 4,000
realizations. Then, looking at each test, we
factor in the number of times the variable in question
is statistically significant in each test and its
rank, and then develop a single list of top
parameters from the number of times they appear and
their ranks in the six tests. And when you do that you
come up with a list, arbitrarily cut off at ten
variables, for 10,000 and 100,000 years, showing that
a lot of the parameters that show up have to do with
how much water gets in contact with the waste.
I'll show some comparisons of the methods
later, but to get into the new work we've done, John
Telford and I, the factorial design of experiments.
Basically, factorial design, in the simplest form, is
to look at two values of each of the variables, a high
and a low value. We took the fifth and 95th percentile
of each distribution, and since there were 330
variables, if you looked at all possible combinations,
you'd have two to the 330 runs required, which at the
present rate would take ten to the 94 years. And, of
course, in maybe 1,000 years this will be --
MEMBER GARRICK: Are you looking for
permanent employment?
(Laughter.)
DR. CODELL: In 1,000 years things might
be much better, but right now it's out of the
question. So a fractional factorial design is what
you have to use; it gives you reasonable run
times, but it's somewhat ambiguous.
So the way we did it is to sample
iteratively, that is, running a fractional
factorial, then using other information from the
runs to refine the list, and then repeating on the
refined list several times until we're quite sure that
we've gotten most of the important variables. And
this took a lot of trial and error, but we think we
hit on a good procedure for doing this.
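A toy version of the two-level screening idea follows,
using Hadamard-matrix columns (a Plackett-Burman style
design) rather than the actual generators used in the
study; the seven-parameter model is invented, with only
two parameters that matter:

    import numpy as np
    from scipy.linalg import hadamard

    rng = np.random.default_rng(4)
    k = 7                              # parameters to screen (illustrative)

    def model(x):
        # Toy stand-in for one TPA run; only parameters 0 and 3 matter.
        return 4.0 * x[0] + 2.5 * x[3] + rng.normal(scale=0.1)

    # Two-level design: each parameter is set to its "low" (5th pct) or
    # "high" (95th pct) level in a balanced pattern -- 8 runs instead
    # of 2**7 = 128.
    H = hadamard(8)
    design = (H[:, 1:k + 1] + 1) // 2  # 8 runs x 7 parameters, 0=low, 1=high

    y = np.array([model(row) for row in design])
    # Main effect of parameter j: mean response at high minus at low.
    effects = [y[design[:, j] == 1].mean() - y[design[:, j] == 0].mean()
               for j in range(k)]
    print("estimated main effects:", np.round(effects, 2))
    # With 8 runs and 7 factors, two-factor interactions alias onto the
    # main effects ("confounding"), which is why the iterative
    # refinement described above is needed in practice.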
The advantages of this technique are that it's
systematic and potentially precise. It's easily
interpreted with powerful statistical techniques
like analysis of variance and trees. And it does
reveal interactions among variables instead of looking
only at the sensitivity of single variables. I think
this is an important point. The disadvantages are that
it's still costly and difficult to implement. And
looking at only the high and the low value, you're not
looking at the entire range of each variable.
So this is how we went through the
procedure. For the 10,000 year case, we first
generated, using some statistical software, a design
set of 2,048 runs, and we identified 100 potentially
sensitive variables. We reduced that list to 37 on the
basis of
other information. For example, even though some of
these variables appeared to be sensitive like seismic
parameters, you could see from other results that
there weren't any failures, so you knew those
variables were confounded and could be eliminated from
consideration.
That was the first screening. The second
screening identified ten variables, and then we went
into a full factorial with only those ten variables,
which is a reasonable number to deal with,
and identified six to eight sensitive variables. When
you do this and you go through the analysis of
variance, one of the byproducts is a tree diagram.
And this shows very clearly that, if you follow the
path of the cursor here, a low value of drip shield
failure time, together with a high value of the flow
multiplying factor, the diversion factor, the fuel
dissolution factor and the waste package effective
fraction, leads to the highest dose.
So this kind of information is much more
revealing than looking at sensitivity of single
variables at a time. This same sort of information,
incidentally, comes out of what we call the parameter
tree method, which is a statistical technique, but
this is much more precise, whereas there's a lot of
uncertainty in the parameter tree approach.
The next slide shows the same sort of
result for the 100,000-year run, and it also shows the
high and the low value of each variable contributing
to the highest dose.
Just to show that we think we've captured
most of the uncertainty with a very small number of
variables, the next few slides reconstruct some of
the results of the original run with the 330 variables
using the reduced set, derived either from regression
or from fractional factorials. This is a cumulative
distribution function of the peak doses, showing that,
especially for the high end of the dose curve, we've
captured most of the uncertainty just with ten
variables from the regression analysis or eight
variables from the factorial design method. The lower
curve shows the mean dose for the same calculations,
330 versus ten for regression and eight for the
factorial design. So even though it's not perfect,
we've captured most of the uncertainty with a very few
number of variables. And the same is true for the
100,000-year result.
Now, for the next set of slides, moving away
from the factorial design, there were two options
for looking at sensitivities. The first is that
we can look at the peak of each individual
run and base the sensitivity on that number, or
we can look at the dose at the time of the occurrence
of the peak of the mean. The upper graph
shows the sensitivity result looking at the mean dose;
the bottom is looking at the peak of the individual
doses. Except for the first two columns here, the
results are quite different, and we're tempted to say
that the sensitivity measure, in statistical parlance,
has more power using the peak dose from each
individual run rather than the mean.
But there's one interesting factor here.
If you look at this particular variable, drip shield
failure time came out about number 20 on the measure
using the peak of the mean dose, and yet it came out
quite high looking at the individual peak doses. This
is not an error. There's a real reason for
this that isn't obvious, and the next couple of slides
really get at it -- why does drip shield failure
time differ?
MEMBER GARRICK: Dick, could you say
something about the sensitivity measure, what it
really is?
DR. CODELL: Well, I'll let Sitakanta
address that. He prepared these slides.
MR. MOHANTY: For the sensitivity
analysis, we need a point value. That means we have
dose as a function of time, but when we do the
analysis it has to be a point value that represents
the 10,000 years. So it's a matter of whether we
should choose the peak from each realization or
the dose value corresponding to the time of the
peak expected dose. That will be the point value.
So in other words, the red bars, those are
showing the sensitivity analysis using the dose values
corresponding to the time when the peak expected dose
occurred. Whereas the figure at the bottom uses the
individual peaks, which can occur at any time in the
10,000 years, so it is independent of the time of
occurrence. So, therefore, it reflects sort of the
whole time domain, whereas the one at the top
represents a particular time.
CHAIRMAN HORNBERGER: But what are the
units on that sensitivity measure?
MR. MOHANTY: Oh. We have different
sensitivity measures from different methods. This
particular one represents one method that we have
used for the two graphs. That measure is
essentially -- it's kind of hard to explain. It is
extracted from the Morris method, in which we take
the sensitivity from individual points and average
them: we determine the mean of del Y over del X,
where del X is the change in the variable and del
Y is the corresponding change in the dose. So this is
an ensemble statistic, and the sensitivity measure is
based on the ensemble statistics, both the mean and
the variance.
MEMBER GARRICK: But it is in a change in
dose per unit change in parameter.
MR. MOHANTY: In the parameter, right.
DR. CODELL: Okay. Thanks, Sitakanta.
CHAIRMAN HORNBERGER: But you must
normalize somehow.
MR. MOHANTY: Yes. The parameters are
normalized.
CHAIRMAN HORNBERGER: Okay.
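For reference, the elementary-effects calculation Dr.
Mohanty described (the Morris screening method) can be
sketched as follows; the three-parameter toy model,
step size, and sample counts are assumptions, not the
study's actual settings:

    import numpy as np

    rng = np.random.default_rng(5)
    k, r, delta = 3, 20, 0.1          # parameters, sample points, step

    def model(x):
        # Toy response: parameter 0 strong and linear, parameter 2
        # nonlinear, parameter 1 weak.
        return 3.0 * x[0] + 0.5 * x[1] + 4.0 * x[2] ** 2

    # Elementary effects: at many random base points in the unit cube,
    # perturb one normalized parameter at a time and record dY/dX.
    effects = np.zeros((r, k))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)
        y0 = model(x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta
            effects[i, j] = (model(xp) - y0) / delta

    # The mean of the effects measures overall influence; the spread
    # flags nonlinearity or interaction (the "ensemble statistics,
    # both mean and variance" mentioned above).
    print("mean effect per parameter:", effects.mean(axis=0).round(2))
    print("std of effects:           ", effects.std(axis=0).round(2))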
DR. CODELL: On the next slide, I wanted
to talk about the treatment of data variability in
performance assessment modeling. This particular
piece of work came up during the staff's review of the
TSPA-SR and the SSPA, and it involved what was called
Gaussian variance partitioning. It was basically about
how you treat data in the model.
It isn't exactly how DOE is doing it, but
it leads us to some interesting conclusions on how we
should deal with experimental data uncertainty,
whether due to lack of knowledge or to variability.
And the difference between these two kinds of
uncertainty, epistemic and aleatory, is often blurred
-- for example, in the treatment of corrosion rate
data for the waste packages and its effects on dose.
To get at this phenomenon, we put together
a model based very loosely on NRC's model and DOE's
model, but it's a separate model. In NRC's TPA model
we represent variability in waste package corrosion by
a few representative waste packages -- only one per
subarea, ten in all. Whereas DOE, in its Total
System Performance Assessment, uses a patch failure
model that allows significant spatial variability of
failures.
We could look at the data on corrosion --
this is real data from the corrosion experiments on
the coupons -- and say it's either a fixed but
uncertain rate or a spatially varying rate due to
material and environmental variability.
On the next slide, I show this very
simple model, which deals with only a few parts of the
problem, particularly the waste package corrosion and
the dissolution of waste in a fixed number of years
once the waste package has failed. Now, there are
three possible models. Model 1 is the whole
repository: that's where you take this corrosion rate
data, shown here as a density function, and you apply
it to each and every waste package identically; that
is, they'll all corrode at exactly the same rate,
pretty much -- there is some slight variation -- and
fail at about the same time. Whereas the other extreme
is Model 3, where each patch of each waste package is
sampled from the distribution, so that each and every
waste package and each and every patch has a different
failure time. And Model 2 is in between those two
extremes.
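A synthetic illustration of the contrast between Models
1 and 3 -- one corrosion-driven failure time per
realization versus one sampled per package -- is
sketched below. The pulse shape and all spreads are
invented; the point is only that Model 1 gives tall
peaks at scattered times while Model 3 realizations all
look alike:

    import numpy as np

    rng = np.random.default_rng(6)
    t = np.linspace(0.0, 10_000.0, 500)
    n_pkg = 100                        # waste packages (illustrative)

    def dose_curve(fail_times):
        # Hypothetical release: each package contributes one pulse
        # after it fails; the pulse width is invented.
        return np.exp(-((t[None, :] - fail_times[:, None]) / 400.0) ** 2
                      ).sum(axis=0)

    for name, per_package in (
            ("Model 1 (one failure time per realization)", False),
            ("Model 3 (failure time sampled per package)", True)):
        print(name)
        for i in range(5):             # the five realizations in the figure
            if per_package:
                ft = rng.normal(6_000.0, 1_500.0, n_pkg)  # spread in a run
            else:
                # All packages in the realization fail together.
                ft = np.full(n_pkg, rng.normal(6_000.0, 1_500.0))
            d = dose_curve(ft)
            print("  realization %d: peak = %6.1f at t = %5.0f yr"
                  % (i, d.max(), t[np.argmax(d)]))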
Now, if you take this model and you just
look at, for the present time, five realizations, as
shown in this figure, you see that Model 1, where
every waste package fails at about the same time,
gives you five individual peaks, and they're all
rather high, because when they fail at the same time
you get a big release and therefore a big dose.
Whereas with Model 3, where you have this patch
failure, they're all pretty much the same and smaller.
The interesting thing I wanted to
point out here is that the dose, the way the NRC
has defined it in the rule as the peak of the mean, is
very sensitive to the timing of the peaks. So
even though those individual peaks are high, with
Model 3, which always fails the same way,
each new realization looks pretty much like the last
one, and these all line up. So when you take the peak
of the mean dose, Model 3 actually gives you a higher
dose, which I will show.
And how does that relate to a few slides
before where I showed the drip shield failure time
being an important parameter? Well, drip shield
failure time determines the timing of the dose. If it
fails early, then the release is early. If it fails
late, the release is late. That's the same effect as
changing Model 1 to Model 3. And that's why it showed
up one way when you look at the peak of the mean
dose and another way when you look at the peaks of
the individual doses. But that was an interesting
conclusion.
MEMBER GARRICK: Again, that's dependent
upon the corrosion model that you --
DR. CODELL: Well, yes. And the way we've
treated drip shield failure time in the TPA model is
just as a sampled failure time. It
could have been another example of the
same phenomenon.
MEMBER GARRICK: Right. And the other
thing here is that there's going to be much greater
variability in the setting than in the waste packages.
So whatever you take advantage of with respect to the
similarity of the waste packages could well be offset
by the variability due to spatial considerations.
DR. CODELL: It could be, but we don't have
enough information from the corrosion rate data to
know which is which, and that's a dilemma. A very
important factor in our analysis is how quickly the
waste packages will corrode. And even though it seems
to be a very long time, if it were not a long time,
then we'd want to know whether the variability in the
data was due to real spatial differences or to
experimental or other unquantifiable errors.
MEMBER GARRICK: And, of course, in the
DOE model, they'd decouple the drip shield
contribution from the diffusive transport out of the
waste package.
DR. CODELL: Yes.
MEMBER GARRICK: So it really depends upon
how you structure the thing. I'm curious about how
you screened your parameters.
DR. CODELL: I'm sorry, which slide were
we?
MEMBER GARRICK: Well, when you go from
900 to 330, to 100, to 37.
DR. CODELL: Well, actually, the 620 were
screened by experience. We've, at various times in
the past, looked at all those variables varying and
decided that most of them didn't contribute anything
to the results. So those were held at fixed values.
The screening that took place in the factorial design
was more systematic, because we started with 330 and
worked our way down. And it was either based on the
sensitivities we observed in the analysis of variance
or --
One of the problems with fractional
factorial design is a problem called confounding, and
that's where a variable can be mistaken --
a variable can look like it's sensitive but it's
actually a combination of several other variables.
And it's just a numerical, combinatorial problem, not
a real physical problem. But by looking at the
physical outputs of the code, for example, seeing that
the factors that looked sensitive, that had to do with
seismicity, couldn't have been because there weren't
any failures due to seismicity. So we could eliminate
those. So it took a little bit of a combination of
the silicon and the carbon computers to reach this
conclusion.
MEMBER GARRICK: Did you call this the
confounding phenomenon?
DR. CODELL: Yes.
MEMBER GARRICK: That's a good name.
DR. CODELL: It's not my -- that's what
it's called in the factorial design method.
CHAIRMAN HORNBERGER: How are you sure at
the end of the day that you don't have some aliasing
left in your final ten or whatever --
DR. CODELL: Well, there can't be any when
you do the full factorial.
CHAIRMAN HORNBERGER: Oh, right.
DR. CODELL: That's the --
CHAIRMAN HORNBERGER: So once you choose
the ten it's okay.
DR. CODELL: Right. Yes. And the final
test was seeing how well you did by comparing it to
the original.
Well, getting back to this little
experiment on the two kinds of uncertainty, the
epistemic and aleatory: the first result shows, for a
full set of realizations, that for the peak of the
mean dose, Model 3, where you have the patch failure,
gives you the highest result. This may seem
counterintuitive, but as it turns out, if you're
sampling each and every patch, you end up getting
similar kinds of failures in each new realization. And
that's why they look identical and fall on top of each
other, leading to a high peak of the mean dose. And
the other models give you much lower doses.
However, this is sensitive to other
parameters in the model, and what we determined when
all was said and done was that if you look at a much
slower release, say 100 times slower than what we used
in this example, all three models pretty much fall on
top of each other. And looking at the ranges of
parameters in the Department of Energy model, it is
probably more like the case on the right than the
left. But it's still an interesting phenomenon and
explains some other interesting features like the drip
shield failure time result we got.
Something related to this previous
explanation is risk dilution. This is something we
worry about. It's not good enough just to increase
the range of a distribution if you don't know it.
In some cases, if you do that, you'll actually end up
with a lower dose, which isn't what you wanted at
all. And here's an example. Once again, drip shield
failure time. If you have a narrow range, this green
curve, or a wide range, the blue curve, you'll get
different results. And the narrow range gives you a
higher dose than the wide range. Once again, this is
one of the parameters that has to do with the timing
of the doses, and when you increase the range of that,
you're going to end up with a lower result.
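Risk dilution is easy to demonstrate numerically. In
the sketch below, each realization's dose history is a
single pulse timed by a sampled failure time, and
widening the failure-time distribution lowers the peak
of the mean curve even though nothing physical changed.
All shapes and spreads are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(7)
    t = np.linspace(0.0, 10_000.0, 1_000)
    n_real = 2_000

    def peak_of_mean(spread):
        # Each realization's dose history is one pulse whose timing
        # comes from a sampled failure time with the given spread.
        fails = rng.normal(5_000.0, spread, n_real)
        curves = np.exp(-((t[None, :] - fails[:, None]) / 300.0) ** 2)
        return curves.mean(axis=0).max()

    print("narrow failure-time range (sd  200 yr):",
          round(peak_of_mean(200.0), 3))
    print("wide failure-time range   (sd 2000 yr):",
          round(peak_of_mean(2000.0), 3))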
So summing this all up --
CHAIRMAN HORNBERGER: Dick, let me -- I
don't know, I think I'd like to challenge you on that
one, because you said if you put in a wide range you
may get a lower dose, and that isn't what you want.
If in fact you have a broad uncertainty range, why
isn't that what I would want?
DR. CODELL: Well, I think the --
CHAIRMAN HORNBERGER: I mean if you really
-- if your uncertainty in failure times for the drip
shields really is broad -- I suppose it ties back into
your aleatory versus epistemic, because if you really
believed that every single drip shield was going to
fail on day 372, then it really matters. Is that what
you're saying?
DR. CODELL: That's right. That's right.
Yes. And this is interesting because I think, prior
to NRC's regulations for high-level waste, most
people considered looking at the peak of the
individual doses as the measure. And automatically, if
you put a wider distribution in, one of those is going
to be a higher peak. But it doesn't work that way when
you look at the peak of the mean curve; it's just the
opposite.
CHAIRMAN HORNBERGER: Which I assume is
why even though the regulation calls for the peak of
the mean dose, you will require the potential licensee
to display all sorts of things, including all of the
uncertainty?
DR. CODELL: Well, I wouldn't go too far
there. I think I'd be stepping out of bounds. I
don't think we would require anything like that. If
Tim is in the audience, he could probably rescue me
right now.
CHAIRMAN HORNBERGER: Maybe I could frame
it another way. The ACNW will want to see that.
DR. CODELL: Yes.
MR. McCARTIN: Well, as Dick's slide
indicates, I mean the key there is the inappropriate
use of a wider distribution. We are clearly
interested in the distribution. And if the
uncertainty is there, we're not saying don't include
the uncertainty you have.
If there are some arbitrary decisions that
are made, and sometimes made in the name of
conservatism -- let me make this bigger because I'm
uncertain about it -- we want to look at that to make
sure, well, you may think it's conservative to make it
bigger, but you've actually, in essence, produced a
lower dose. And so you want to have an appropriate
range. As Dick indicated, I think we are going to look
at all the information the Department gives us.
MEMBER GARRICK: Let me understand
something. Is this distribution a random variable?
DR. CODELL: Yes.
MEMBER GARRICK: Because what we're really
interested in is our uncertainty about a fixed
variable.
DR. CODELL: Well, it actually is an
uncertainty about a fixed variable in almost every
case. I think in every case in the TPA Code it's
uncertainty about a fixed variable. I would consider
that the definition.
MEMBER GARRICK: And if that's the case,
then of course you want to be very careful about
manipulating a peaked distribution into a broad
distribution, as to what information you might be
losing in that process.
DR. CODELL: Right. Well, these density
functions that we use are based on either data or
people's idea of what the data should look like. And
they aren't always precise.
MEMBER GARRICK: Well, as George says,
we're going to be very interested in following this.
DR. CODELL: Preliminary insights from the
sensitivity analyses: for 10,000 years, the factors
that control water/fuel contact seem to be the most
important, and most dose comes from low retardation,
long half-life radionuclides, like technetium. For
100,000 years, it's interesting that waste packages
usually fail by 100,000 years, so the corrosion
parameters aren't showing up as sensitive -- you'll
usually have failure anyway, and changing them isn't
going to make any difference.
The fuel/water contact is still important,
and the dominant radionuclide, neptunium-237, seems to
be very important, so parameters associated with it
are important. For barrier sensitivity, the
preliminary results show that both the natural barrier
and the engineered barrier make substantial
contributions.
This is some additional work in progress
that was too callow to talk about today, but we've
acquired some neural network software, which seems to
be very powerful, and this is basically doing
nonlinear regression. We've had some limited success
with it so far.
Dave Esch and I took some training in it,
and I think you'll probably see this the next time we
make a presentation. We're looking at new sensitivity
measures consistent with the peak expected dose, as we
showed in the previous slide, and looking for
efficient distributional sensitivity methods, like the
cumulative distribution function sensitivity, that
allow us to look at the sensitivity at different
parts of the cumulative distribution -- that is, high
dose and medium dose or low dose -- and also
the sensitivity of the mean dose directly.
Some other work, we're trying to get a
handle on barrier performance in a couple ways. We're
trying to define what a degraded state of a barrier
means. This is a very difficult problem trying to
figure out how to define a barrier as failed, like
what does a failed waste package mean.
Just looking at the kinds of barrier
sensitivity analysis that Sitakanta presented earlier,
there are six barriers, so two to the six is 64
possible combinations of failures, from everything
failed to everything working. Twenty-nine of the 64
analyses have been completed, and we've made a
preliminary shot at a tree structure -- it is
possible to draw a tree with this result -- but for
looking at it with more powerful methods, like
analysis of variance, there are not enough runs yet to
do that. We're hoping to do that in a future
presentation.
In conclusion, parametric sensitivities
provide useful risk insights. The method we've been
using, the composite sensitivity method where we
combine ranks from the various statistical methods,
still works very well.
Factorial design shows great promise in
clearly defining the sensitivities and the
interactions of the variables. The distributional
sensitivity technique that Sitakanta presented is an
effective approach for identifying the impact of the
choice of parameter distribution shape and of the
shift in the mean. We've shown that inappropriate
parameter ranges can lead to risk dilution in some
cases, and that the treatment of uncertainty as lack
of knowledge (epistemic) or as variability (aleatory)
can affect the peak risk calculation. That's the end
of our presentation. We'd be happy to take additional
questions.
CHAIRMAN HORNBERGER: Actually, before --
now that John is back. I started before you got back
from lunch, John, but this is really your bailiwick,
so why don't you run it.
MEMBER GARRICK: All right, well, let's
see if there's some questions. Of course, I have a
few.
Milt?
MEMBER LEVENSON: One comment.
MEMBER GARRICK: Microphone.
MEMBER LEVENSON: One comment and then one
question based entirely on ignorance. One of the
things is that I guess I sort of disagree with your
use of terminology because no matter what you do in
the way of assumptions, you are not going to change
the risk. So you can't dilute the risk or increase
the risk. You may change your calculated number, but
it's not really the risk.
But on Slide 13, I'm having trouble
relating this to the physical world in that on the
one-OFF, if you remove the waste package, you have a
62,200 percent change.
MEMBER GARRICK: Can we see that on the
screen? Can you put the projector back on, please?
MEMBER LEVENSON: On the one-OFF analysis,
when you remove the waste package, you have a 62,200
percent change, but with the one-ON analysis, where
none of the other barriers are there and you just add
the waste package, you only have a 100 percent change
-- a factor of two. It doesn't seem consistent with
the physical world, as I visualize it.
MR. MOHANTY: Let me explain the
difference. Under the one-OFF analysis, the first
column represents the nominal case. For the nominal
case, the peak expected dose is .021 millirems per
year, whereas under the one-ON analysis, we are
determining the percent change based on the first
column under the one-ON analysis. And that number, I
don't remember what that value is, but we are using
that number to determine the change. So that means at
most the change can be 100 percent of that. So when we
put the waste package on, the 99.9 percent represents
a reduction from what we observe under column 1.
CHAIRMAN HORNBERGER: You can't take away
any more than 100 percent. But if you have something
to start with, you can change it by 62,000 percent.
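[A worked version of that asymmetry, using the 0.021 mrem/yr nominal value cited above; the other dose values are hypothetical, chosen only to reproduce the percentages under discussion:

    nominal = 0.021        # mrem/yr, the nominal-case value cited above
    one_off = 13.1         # hypothetical dose with the waste package suppressed
    print(f"one-OFF change: +{(one_off - nominal) / nominal * 100:,.0f}%")    # ~+62,000%

    all_failed = 250.0     # hypothetical dose with every barrier suppressed
    one_on = 0.25          # hypothetical dose with only the waste package working
    print(f"one-ON change:  -{(all_failed - one_on) / all_failed * 100:.1f}%")  # -99.9%
]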
MEMBER LEVENSON: I guess without the
numbers, it's very difficult to determine what the
significance of what this chart is.
CHAIRMAN HORNBERGER: Even with the
numbers, I would maintain it's very difficult to
figure out what the significance of this chart is.
(Laughter.)
I don't mean to be too severe, because I
know we are interested, like you are, in barrier
performance. But it strikes me, Dick just casually
said it's a really difficult problem to figure out
what it means to have a barrier suppressed. And I
agree with that. It just doesn't make sense to me to
even consider changes as if all the drip shields
failed at Time Zero. I don't understand what you're
doing.
MEMBER LEVENSON: It's not so much that I
don't understand what you're doing. It isn't clear to
me what the significance is.
CHAIRMAN HORNBERGER: Right.
MR. HAMDAN: One of the main objectives of
this sensitivity analysis is not risk but, in response
to your question, to test these barriers individually
or in combination to see if one is working. That has
not been made clear in all these slides -- you can see
it clearly in slide 3 -- and I think he did answer,
and answered very well. But the question as to what
added value the sensitivity analysis adds to the model,
and whether the model has been improved, has not been
addressed.
MEMBER GARRICK: Yes, but the black box
here is the degree to which the model represents
reality and I think that's part of what Milt is
struggling with.
You know, it's this question of if you
tried to look at this as a system and you apply the
basic equations of continuity and conservation of mass
and momentum, etcetera, etcetera and you flow through
the system, this model isn't doing that because the
800-pound gorilla here is the water and the chemistry
of the water. And the chemistry of the water is
extremely sensitive to each of the stages it passes
through.
So we're not talking about something that
so much represents reality as we are talking
about some very interesting concepts that you can
apply in a Monte Carlo-type analysis, but at least
that's my perspective.
Ray?
VICE CHAIR WYMER: I'll be a little bit
facetious. I certainly admire the sophistication and
complexity of these analytical tools and the variety
that were used in these analyses and that's
impressive, but I couldn't lay a finger on it myself.
But I was pleased to see that in your preliminary
sensitivity analysis that you confirmed what I thought
for the last two or three years that --
CHAIRMAN HORNBERGER: Chemistry is
important.
VICE CHAIR WYMER: Natural barriers as a
substantial contribution, that looks pretty good.
(Laughter.)
Waste package failure: the corrosion
parameter is not sensitive. Fuel-water contact, that's
important, pretty good.
And retardation of neptunium seems to me
like that's important. But I thought I knew that
stuff.
Factors controlling water fuel contact
dominate performance. That's right. And most dose
from low retardation and long half-life radionuclide,
sure, I know that trivializes the degree to which you
understand these things and the sophistications, but
nonetheless the answers are sort of self-evident for
whatever that's worth.
MEMBER GARRICK: George?
CHAIRMAN HORNBERGER: I just wanted to
make a final comment on the barrier component
sensitivity. I really do understand what Latif said
and that is that you do a lot of these things to try
to understand what's going on with your modeling. I
certainly have no problems with this.
The issue that I have, the difficulty I
have with slides like this is that there's too much
chance for mischief making with the numbers by people
who will want to use them for purposes that are not to
understand how your model is working. And I guess I'd
just ask you to give a little thought to that as you
present these things.
MR. CODELL: We've given a great deal of
thought to that. In fact, at every level of review,
we've been asked to be sensitive to this and to put
disclaimers in that this is not required by the
regulations.
I think people who want to make mischief
of this will do so regardless.
But this is the kind of analysis that's
often done for safety. You look at the failure of a
system. You look at what happens when an engine on an
airplane dies. There's nothing wrong with it, in my
opinion. It's just my opinion.
MEMBER LEVENSON: I think there's a
significant difference. In fact, that's one of the
points I think George made earlier: an engine is
either on or off, and this is a legitimate analysis
for that sort of thing. The waste container is not
either in existence or not in existence, and
therefore, I think you have to be very careful about
using what is a legitimate analysis under other
conditions. For this one it might be much more
legitimate to say what happens if 10 percent of the
waste packages fail early, etcetera, rather than that
they're either on or off.
MEMBER GARRICK: Yeah. I think that
there's no question that the modeling test exercise
that you've done here is very interesting and very
powerful. As I was saying earlier, though, I think
that what we're really interested in is information
that would give us confidence that the models that are
being employed are doing a reasonable job of
representing reality in terms of what's happening.
Now maybe this can contribute to that, but
what really concerns us is the interactive effects of
these different barriers. And one thing that would
suppress some of the mischief that we talk about
would be to do this same exercise for different
models.
Take for example, the TPA model and do the
exercise, and then take the diffusive transport model
of DOE and do the exercise and you would certainly see
that things would line up differently. And it would
clearly indicate how model-sensitive it is.
But again, I guess the question I would
ask is what contribution comes from this work towards
creating a model that we have increased confidence in
in terms of representing the performance of the
repository?
MR. MOHANTY: Let me start with the one-ON
analysis. What that figure tells us is that the
saturated zone and the unsaturated zone are making
quite a bit of contribution, and these individual
contributions perhaps could not have been seen if we
did not isolate them from the other components or, in
a broad sense, subsystems.
So that tells us something. And also when
we compare that, say, with the invert, we are seeing
only a 0.2 percent change. Then we go back to the
total system performance assessment code to determine
why we are only getting 0.2 percent, and we did go
back and find that the way the invert is modeled, it
is supposed to reduce flow or delay transport, but it
just so happens that the flow through the invert is
predominantly fracture flow.
So when the flow is predominantly in the
fractures, and we are not assigning any retardation
factors to the fractures, the invert is almost
completely bypassed in the TPA approach. So that is
the kind of insight we gain when we do this kind of
one-OFF, one-ON analysis or cumulative analysis.
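[A minimal sketch of why an unretarded fracture path bypasses the invert; the layer thickness, velocity, and retardation values here are illustrative, not TPA inputs:

    def travel_time(length_m, velocity_m_per_yr, retardation):
        # Retarded advective travel time through a layer: t = L * R / v.
        return length_m * retardation / velocity_m_per_yr

    L, v = 0.8, 10.0   # invert thickness (m) and water velocity (m/yr), illustrative
    print(travel_time(L, v, retardation=1.0))    # fracture path, no sorption: no delay
    print(travel_time(L, v, retardation=50.0))   # sorbing matrix path: 50x longer
]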
VICE CHAIR WYMER: I would have liked to
have seen that sort of thing in your table of
preliminary insights, the two cases you just cited are
much more informative.
MR. MOHANTY: If I can make another point:
there are two ways we can determine how well the
components of the code are functioning. To give you an
example, if the packages were going to fail at 1
million years, then the only way we can find out what
is affecting the packages is to continue the
calculations for 1 million years, or we deliberately
suppress them to find out what the impacts are if they
were to fail early.
So doing this kind of analysis keeps us
from having to go much further into the future, to a
million years, because we can gain similar kinds of
insights by deliberately doing the sensitivity
analysis, by suppressing components.
CHAIRMAN HORNBERGER: I have a question
now that you have mentioned your one-ON analysis. If
we look at that lefthand column, okay, you have a dose
associated with that, is that right? What you said
was that we could read those as 99.9 percent reduction
in dose?
MR. MOHANTY: Right.
CHAIRMAN HORNBERGER: So my question is
what is the dose in that lefthand column and how did
you calculate it?
MR. MOHANTY: Under the one-OFF analysis
or one-ON analysis?
CHAIRMAN HORNBERGER: One-ON analysis.
MR. MOHANTY: On the one-ON analysis, this
is not a real dose where barriers are suppressed. We
do honor the various processes in the TPA code. We do
not veer away from the processes too far, because we
know that we are limited; this is just a technique we
are using.
CHAIRMAN HORNBERGER: All right, so then
if I go to the first column, the drip shield, the 63
percent reduction in what?
MR. MOHANTY: We do have a dose value.
CHAIRMAN HORNBERGER: How did you
calculate it?
MR. MOHANTY: This is by suppressing all
components.
CHAIRMAN HORNBERGER: Okay, which means
what did you do, dissolve all the fuel instantaneously
in the --
MR. MOHANTY: Yes.
CHAIRMAN HORNBERGER: And so it's a high
dose.
MR. MOHANTY: It is a high dose, yes.
CHAIRMAN HORNBERGER: I mean that's a high
calculated dose. I don't want to get in trouble with
--
(Laughter.)
So I just point out that again, even on
this one we're saying oh yes, the natural, the insight
that you gain, it's quite artificial and I'm much more
comfortable with Latif's interpretation that what
we're doing is learning how the components of the
model are working.
MEMBER GARRICK: Also, those kinds of
reductions in the kinds of doses we're talking about
are not very relevant.
I don't know that that really tells us a
great deal about the protection provided by those
components. I'll have to think about that, a good
deal more. And I still worry about the fact that
there is interaction between the barriers, and what
the waste package sees in terms of input material is
different from what the invert sees, which is
different from what the unsaturated zone sees, and so
on.
And that could be a major factor in what
really happens. All the peer reviews of the TSPAs
have given great attention to the importance of water
composition because that's the mechanism by which
everything happens and that's just the process of
applying principles of continuity from the
infiltration model, if you wish, namely the geology
above the waste package through to the waste package
and so on.
So again, there's no question this is an
intriguing process, and it does what Latif says, but
we're going to have to be a lot more diligent students
and studiers of this before we can really see what it
contributes to reality.
MR. CODELL: In terms of projected work,
we are getting together soon to talk about what
degradation of barriers means. I hope in the next few
weeks Tim McCarten is convening a group to better
define, in terms of what is expected in the
regulations, what barrier degradation means.
This one-OFF, one-ON analysis is probably
overkill.
CHAIRMAN HORNBERGER: Yes.
MR. CODELL: But getting at the -- getting
at a finer level is the next step. Maybe you can
consider this a first step in gaining information
about the importance of barriers.
MEMBER GARRICK: Go ahead, Latif.
MR. HAMDAN: But do we need a more refined
-- it seems to me from this beautiful presentation
that the staff already has all the tools it needs to
do what it needs to do on this subject. After all, the
barrier requirement, which was in Part 60, has been
removed in Part 63. This morning Commissioner
McGaffigan complained that things could grow and grow
and grow.
Now, with the tools that you have here, it
seems that you are doing a very nice job with what you
have, so why bother to come up with new tools and new
analyses, specifically tools to do the same thing
again and again?
I would suggest that you rethink this,
because maybe you can do what you want to do with what
you already have; and keep in mind again that the
barrier capability requirement for individual
barriers, which was in Part 60, is now omitted.
MR. McCARTEN: I guess -- we hear what the
Committee is saying, and there's no question we've had
a lot of discussions internally, and that's the reason
it's clearly stated: these types of analyses are not
required to demonstrate compliance with the barrier
requirements in Part 63. But what we're trying to do
-- and for the Committee, I think Dick and Sitakanta
both tried to give all the things we're looking at
with the potential to increase our understanding --
the concept of all these things is that we're just
throwing out where we're going.
There is a huge downside to doing barrier
neutralization because people jump to those numbers at
the bottom and the value is not in those numbers at
the bottom. And what we're -- I like to think that
when this is done, it's a way to probe and test your
own thinking of how the code is working, how you think
the system is working, and this is just another way to
poke at your brain a little harder, to think a little
more.
Ultimately, what you're not seeing and
we're not there yet is what kind of information about
the system can you pull out and that's the key. And
I think this is a way to kind of jiggle the system a
little more.
Maybe it's not the right way to go, but I
think it's a way to push your understanding and I
think that's the key we need to get to as you guys are
indicating. And we're not there yet. We owe you
something more. Where's the understanding in this?
And right now, it clearly is not at those bottom
numbers. It's deeper than that.
MEMBER LEVENSON: Let me ask a question
sort of in that relationship. In the one-ON analysis,
the unsaturated zone and the saturated zone have for
all practical purposes the same significance, whatever
that is. But in the one-OFF analysis the significance
is different by more than an order of magnitude. What
do we learn from that?
MR. McCARTEN: Well, these numbers and the
order of magnitude, I don't think, are necessarily
that significant, but the next step is what -- the key is
understanding the capability of the barriers and
what's going on and why those numbers came out. I
think that's what -- you may use this analysis to push
you a little harder about the understanding of the
capabilities of the barriers, so you clear -- oh yeah,
that's why those numbers came out that way, but that's
sort of the next step with this. And whether this is
the right way to go, or there are other approaches
that are better, or there are, as Dick indicated, some
intermediate steps, or this is the first step -- that's
where we're at now. As a group, we're always trying to
do additional analyses to see if they're helpful.
CHAIRMAN HORNBERGER: Tim, you remind me
of one of my favorite quotes, the purpose of computing
is insight, not numbers. The purpose of computing
numbers is not yet in sight.
(Laughter.)
I do have a question for Dick, actually.
I was really intrigued because I hadn't thought of
that before, but DOE's patch model really does lead
them to get essentially the same failure rates on
every realization.
Do you see this as a problem in DOE's
code?
MR. CODELL: No. I think it's probably
somewhat more realistic than what we chose, so that's
a point for DOE's conservatism.
CHAIRMAN HORNBERGER: But it strikes me
that -- I mean, that's equivalent, I think, to saying
that all of the uncertainty is aleatory. I hate those
terms, but environmental variation -- and it strikes
me that there probably -- potentially could be another
component, if for no other reason than that you would
have differences in fabrication of casks.
MR. CODELL: Right, but the problem is
that the data don't tell you which is which. And I
think, though, that the answer is that it seems, for
the ranges of parameters that we're dealing with, the
results aren't too different no matter what you
assume, and that's somewhat reassuring.
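[A minimal sketch of that epistemic-versus-variability point, under an invented pulse-dose model with hypothetical numbers: the peak of the expected-dose curve comes out nearly the same either way, but the expected value of each realization's peak dose changes sharply when the failure-time spread is treated as shared lack of knowledge, so that all packages in a realization fail together.

    import numpy as np

    rng = np.random.default_rng(1)
    n_real, n_pkg = 200, 100
    t = np.linspace(0.0, 20_000.0, 401)                # years

    def dose_curve(fail_times):
        # Unit dose pulse for 1,000 years after each package failure,
        # averaged over packages.
        return sum(((t >= ft) & (t < ft + 1_000.0)).astype(float)
                   for ft in fail_times) / len(fail_times)

    # Variability: failure times differ package to package within a realization.
    aleatory = [dose_curve(rng.normal(10_000.0, 2_000.0, n_pkg))
                for _ in range(n_real)]
    # Lack of knowledge: one shared failure time per realization.
    epistemic = [dose_curve(np.full(n_pkg, rng.normal(10_000.0, 2_000.0)))
                 for _ in range(n_real)]

    for name, curves in (("variability", aleatory), ("lack of knowledge", epistemic)):
        mean_curve = np.mean(curves, axis=0)
        print(f"{name}: peak of expected dose {mean_curve.max():.2f}, "
              f"expected peak dose {np.mean([c.max() for c in curves]):.2f}")
]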
CHAIRMAN HORNBERGER: Actually, I had --
I've often somewhat facetiously suggested that DOE
could build a better safety case by purposefully
damaging canisters in a certain pre-determined rate so
that they all wouldn't fail at the same time.
(Laughter.)
MEMBER GARRICK: This reminds me of the old
days of reliability analysis: when they had no
failures, someone would assume a failure -- a
horrible, horrible thing to do.
Well, this is very interesting and we'd
like to continue. I think my perspective on this is
that what you're doing needs to contribute to a couple
of things, or we would have to challenge its
wherewithal. One, a better
understanding of the contribution of the individual
barriers. And two, a greater confidence that we can
build a model that represents reality a little more
effectively after doing this work than we could
before. And if it doesn't do -- contribute to those
things, then I would have to wonder.
MR. CODELL: Well, I'd just like to point
out this is not part of this presentation, but we're
starting development on TPA 5.0 and we're putting back
in that code, the diffusion model. It was taken out
earlier. I think I was probably responsible for
putting it in and taking it out because it didn't seem
to make any difference, but since DOE is depending on
that release pathway, we're putting that back in too,
so we'll have a handle on it. So there are changes to
the code that will improve it.
MEMBER GARRICK: I sure wish you'd put
something in there that would account for the chemical
effects inside the waste package.
MR. CODELL: There may be something like
that going in. Are you aware of something, Sam?
Nothing comes to mind. But there are people,
chemists, here and at the Center, who worry about such
things.
MEMBER GARRICK: That's where the action
is relative to the mobilization of the waste and the
creation of the source term and I think a lot of
attention on that would pay high dividends.
All right, well, we're running a little
behind. We thank you very much.
MR. CODELL: Thank you.
MEMBER GARRICK: And I think we'll adjourn
for a recess.
(Off the record.)