Advisory Committee on Nuclear Waste 134th Meeting, April 16, 2002

Official Transcript of Proceedings

Title: Advisory Committee on Nuclear Waste
134th Meeting

Docket Number: (not applicable)

Location: Rockville, Maryland

Date: Tuesday, April 16, 2002

Work Order No.: NRC-327 Pages 1-138

Court Reporters and Transcribers
1323 Rhode Island Avenue, N.W.
Washington, D.C. 20005
+ + + + +
APRIL 16, 2002
+ + + + +
The ACNW met at the Nuclear Regulatory
Commission, Two White Flint North, Room T2B3, 11545
Rockville Pike, at 12:30 p.m., George M. Hornberger,
Chairman, presiding.


HOWARD J. LARSON Special Assistant, ACRS, ACNW
JOHN T. LARKINS Executive Director, ACRS, ACNW
SHER BAHADUR Associate Director, ACRS, ACNW
MICHELE S. KELTON Technical Secretary

ROLAND BENKE Center for Nuclear Waste Regulatory Analyses
RICHARD CODEL Center for Nuclear Waste Regulatory Analyses

Opening Statement - Chairman Hornberger. . . . . . 4
High-Level Waste Risk Insights Initiative
B. John Garrick. . . . . . . . . . . . . . . 7
Bret Leslie. . . . . . . . . . . . . . . 7, 54
Chris Grossman . . . . . . . . . . . . . . .10
Amendment to 10 CFR Part 63
Tim McCartin. . . . . . . . . . . . . . . 115

12:32 p.m.

come to order -- I'll read it with feeling. This is
the first day of the 134th meeting of the Advisory
Committee on Nuclear Waste. My name is still George
Hornberger, Chairman of the ACNW. The other members
of the Committee present are Raymond Wymer, Vice
Chairman, and John Garrick. Milt Levenson is absent
today, as he has been appointed as a member of the
National Academy's Board on Radioactive Waste
Management, which is meeting today. He will be in
attendance tomorrow.

During today's meeting, the Committee
will, one, hear a presentation on the preliminary
results of the NRC staff's High-Level Waste Risk
Insights Initiative; two, receive a briefing on the
final rulemaking amendment to Part 63, having to do with
the probability of unlikely events; and, three,
discuss preparation of ACNW reports.
John Larkins is the designated Federal
Official for today's initial session.

The meeting is being conducted in
accordance with the provisions of the Federal Advisory
Committee Act. We have received no requests for time
to make oral statements from members of the public
regarding today's session. Should anyone wish to
address the Committee, please make your wishes known
to one of the Committee staff. It is requested that
the speakers use one of the microphones, identify
themselves and speak with sufficient clarity and
volume so that they can be readily heard.
Before proceeding, I would like to cover
some brief items of current interest. One, Mr. Mike
Markly, ACRS Senior Staff Engineer, who has worked
with the ACNW/ACRS Joint Committee, departed on April
7 for a six-month rotation in the Division of Waste
Management, NMSS. It sounds funny, he departed. He
didn't depart too far.

MR. LARSON: You might want to say "left."

report on the clearance issue, titled, "The
Disposition Dilemma: Controlling the Release of Solid
Materials from Nuclear Regulatory Commission-Licensed
Facilities," was issued March 21, 2002. One of the
comments in the report noted the lack of NRC standards
for disposal of volumetrically contaminated materials,
reiterating an earlier comment made by the ACNW.
Three, the NRC staff held three meetings
in Nevada, April 8 to 10, explaining Agency
regulations and licensing and oversight
responsibilities associated with the proposed high-
level waste repository at Yucca Mountain.
Four, the Atomic Safety and Licensing
Board commenced formal evidentiary hearings on the
Private Fuel Storage, LLC, facility on April
8. Hearings are scheduled through May 17. The
following four issues are to be covered: One, a
contention that hazards from military aircraft and
other operations near the area have not been
adequately considered; two, questions about the
ability of the facility to withstand possible
earthquakes; three, potential contamination of
groundwater from non-radiological waste; and, four,
questions about whether the environmental impact
statement adequately addresses alternatives to the
placement of the proposed connection railway to the

John, did you have another announcement
that you wanted to make at this juncture or did you
want to wait? You had some --

MR. LARKINS: I think I'll wait till
tomorrow morning if that's okay.
CHAIRMAN HORNBERGER: Tomorrow morning.

All right. We are ready to start, and the
Risk Insights Initiative is -- our lead Committee
member is B.J. Garrick.

MEMBER GARRICK: Thanks, George. It is
well known that the Committee has had a long interest
in this issue of how the staff is taking into account
risk in the decisionmaking process that they engage
in, and we're going to take an important step forward
in that today by hearing from the staff as to why it
is developing risk insights, how it's developing them
and how it plans to use and document these results.
And we're going to, hopefully, begin to see, with
increasing clarity, some of the linkages that the
Committee has been looking for between the analyses
and technical activities that go on and the judgments
that are made about the risks involved. And to help
us do that today will be Bret Leslie, and it looks
like they're going to occupy a little bit of our time
here, so let's get on with it, Bret.

MR. LESLIE: Good afternoon. My name is
Bret Leslie, and I'm the Technical Assistant in
the High-Level Waste Branch here at NRC. And this
actually will be a tag team presentation. And before
I turn it over to my cohort, I want to provide a few
introductory remarks.

This effort and this project was conceived
over a year ago, and we've primarily focused on
completing all the issue resolution, technical
exchanges supporting the staff sufficiency comments.
It was really after that and September 11 that we
gained the manpower. And primarily one of the reasons
why we have the manpower is Chris Grossman who's a new
staff member to the NRC, he's only been here about
eight months, he joined the Performance Assessment
Team. He's helped me out quite a bit in this project,
in carrying it out. It was a fairly substantial

And, in addition, we're also joined today
by Stefan Mayer and Roland Benke who are two
Performance Assessment staff members down at the
Center, and they also -- as we go through this
presentation, you'll understand their role. We
conducted a whole bunch of meetings, teleconferences,
and went through all of our agreements, and we
included the Center and the NRC staff in this to make
sure we were catching all the nods and winks that were
going on with the staff. We had the Center PA staff,
Stefan and Roland, kind of capture some of the
subtleties of what was being said.

As I said earlier, this is going to be
kind of a tag team presentation. I'm going to give
Chris an opportunity to kind of do half the
presentation, kind of the process and some of the
steps we went through. And at the end, or midway
through, I'll take over and start to talk about some
of the implications of what the work was and what it
means and how we're going to be implementing some of
the results into our program, and I think this is
really what John is after.

And just to make it a little more
difficult for the Committee, I've kind of done a roll-
up. The last two slides before the conclusion will be
where I take what we learned in the risk insights,
apply it to issue resolution space in agreements we
did, show the differences between them and why these
differences occur because of differences in
performance estimates from the DOE's code, the TSPA
code and our code, the TPA code. I'll show how we
reviewed it, how we will document it, how that relates
to integrated issue resolution status report, how that
review is risk informed and how it's related to the
Yucca Mountain Review Plan. And so I think the
example I provide should walk you through the entire
process. And so what you're going to have to do is --
you know, I'll maybe do a song and dance near the end
to wake you guys up, because that's really where we
try to tie everything together.

The one thing that I want to be very clear
about is that this wasn't a risk ranking exercise, and
people -- we've titled it very specifically. It was
a communications and integration exercise. Does that
mean that we didn't get any information that can be
used in terms of ranking agreements? No, it doesn't
mean that, but that's not the purpose of this
exercise. This exercise, you'll see, was kind of a
fact-finding integration exercise to figure out where
some holes are, to figure out what we can do to fill
those holes and to correct some slight deficiencies or
identify the gaps.

So that's kind of an overview of the whole
process, and you're getting a snapshot today. We're
not done with the Risk Insights Initiative, and you'll
see that when I get to the slides halfway through.
The other reason why I've got Chris to
take over is if he does a good job and if you like the
presentation, then he takes the credit. If you don't
like the presentation, you can blame me, and he still
gets to do it next time. So with that, Chris, could
you go ahead and lead us through the process?
MR. GROSSMAN: Thank you, Bret. Can
everybody hear me on the portable mike? My name is
Chris Grossman, and I'm a member of the Performance
Assessment staff in the Division of Waste Management
here at the NRC. And as Bret said, I'm going to walk
through some of what we've done with the High-Level
Waste Risk Insights Initiative to this point and some
of the findings that we've made, and then I'll turn
it back over to Bret who will, as he said, walk you
through the implications and how we plan to implement
that into our program.

So to start off with, I'll give a brief
outline of the talk, and I'm going to cover the first
two bullets, on Slide 2 for those at the Center for
Nuclear Waste Regulatory Analyses, starting off with
an overview of the project and why we're doing this
project and lay some of the groundwork for the
Initiative. And then talk about some of the results
of our activities that we've been engaged in over the
last several months. And then turn it back over to
Bret for the final two bullets, which deal with the
implications of these results and finally the
implementation of these results into our program.

So to start off with, for an overview of
the Initiative, I hope to paint a clear picture and
communicate some answers to these three questions:
Namely, why are we doing this, give you some insights
into the motivation for this project and some of our
goals that we hope to achieve, and then also some of
the activities that we've been engaged in to achieve
those goals, and then, finally, conclude my portion
with some of the findings that have come out of these
activities we've been engaged in to this point.

Moving on to Slide 4, so I'll start off
with the motivation, and several stakeholders,
including this Committee, have expressed concern that
they're not convinced our work is risk informed. And
that says to me that we are either not clearly
communicating the risk insights of our work or we are
clearly communicating those risk insights, but the
ACNW, for instance, is convinced that our work is not
commensurate with the risk.

So another objective of this project is to
prepare for the Department of Energy's rebaselining
efforts in which they are rebaselining their program
using risk insights to prioritize their work,
including the level and scope of work they believe is
required to fulfill the KTI agreements as part of the
issue resolution process.

And, finally, we hope to, as Bret said,
forge a common understanding and communicate these
risk insights to our stakeholders, both internal and
external, and primarily among the key technical issues
we want to forge that common understanding.



MEMBER GARRICK: I would have expected
maybe -- and maybe it's buried in the second one -- I
would have expected to see something up there along
the lines to enhance the ability to appropriately
allocate resources.

MR. GROSSMAN: I think that, as you said,
it is buried in the improved communication and
integration portion.


MR. GROSSMAN: The integration dealing
with that and how we implement the findings into our

MEMBER GARRICK: Okay. Thank you.

MR. LESLIE: I'll touch on that at the


MR. GROSSMAN: So on Slide 5 then, what
did we do to accomplish some of the goals that we set
out? Mainly, the activities included all the
technical staff here at the headquarters and at the
Center for Nuclear Waste Regulatory Analyses in San
Antonio that are involved in the issue resolution
activities. And that included High-Level Waste Branch
and Total System Performance Assessment Integration
KTI staff here at the NRC and at the Center. The
activities were facilitated by a core group which was
led by Bret Leslie and consists of myself and two
TSPAI staff members at the Center, namely Roland Benke
and Stefan Mayer.

And the most important things to take from
this slide are the two activities that we engaged
in: having the staff from Key Technical Issues
rate the agreements and then holding 16 meetings in
which we sat down and discussed those ratings with the
KTI staff.

MEMBER WYMER: In what sense did you rate

MR. GROSSMAN: We'll address that as we
move along, so hold that question. It's a very
important question.

But to start off with, before we could
have the Key Technical Issues rate the agreements, we
provided a little guidance on why we were doing this and
what we wanted to achieve, which was very similar to
what we are presenting here today in terms of content.
And I'll address that more specifically in a future

Subsequent to the guidance, the KTIs did
rate the agreements according to importance, and then
we had 16 meetings total for each of the KTIs and
preclosure in which we sat down and went through each
agreement, and we discussed three things. We
discussed the individual ratings for the agreements in
terms of importance and the insights that were used to
arrive at those ratings. And we also discussed the
scope of work that was expected of DOE to complete the
agreement. These meetings were facilitated
discussions, and we had note takers both here, at
headquarters, and at the Center to capture the flavor
of the conversation as we progressed through the

Subsequently, after these meetings, we
have interacted with the Office Risk Task Group to
gain an understanding of how some of the lessons
learned from these meetings could be applied to the
program. And then we've also kept Division Management
briefed on our activities and the progress we've made to
date. And we're currently developing action plans
with the KTIs to plot a path forward of how to address
some of the issues that have come out of the meetings
and the discussions.

So as I said, on Slide 6, I hope to talk
a little bit about the guidance that we provided to
the staff at the outset of this Initiative. And
namely the guidance was answers to these questions,
and a lot of the answers we hope to provide and
communicate to you today in this presentation. They
deal with such things as what is the purpose of this
Initiative? What are some of the goals that we hope
to accomplish? What activities are we going to be
engaged in with the staff? Have we outlined a
schedule for some of the project? And then we also
outlined what we felt were the staff's
responsibilities to help us complete this project and
meet our goals. And, finally, we laid out what are
the risk insights. And I'll give you a little more
insight into what we defined as a risk insight in the

On Slide 7, the information on this slide
is what we provided to the staff as a definition for
risk insight. And we also provided copies of the
Commission's White Paper on risk-informed and
performance-based regulation. As an aside, the White
Paper on the risk-informed and performance-based
regulation refers to risk insights as, "The results
and findings that come from risk assessments." And
because Part 63, which is the regulation that governs
the repository, requires a risk assessment to address
performance in the post-closure period, the
description presented here is consistent with that
White Paper.

Namely, the four things that we -- the
four pieces of guidance that we provided on the
definition of risk insights are ties to the regulatory
requirements, and this implies the performance
objective as set out in Part 63, as well as the
multiple barriers requirement. A basis in the DOE
safety case. The regulations require that the license
be granted or denied based upon the Department of Energy's
safety case, and insights can come from that. The
insights can also be derived from performance
assessment analyses, whether they be performance
assessment codes, such as the one we have here at the
NRC or DOE's total system performance assessment or
other performance assessment codes related to the
Yucca Mountain project. They could also be from
offline calculations. For instance, DOE provided
examples of sensitivity studies where they justified
their approach in terms of risk. And, finally, risk
insights can come from information that is needed to
be understood by the Key Technical Issue teams.

And the results and findings from the
analyses of one part of the system that are necessary
for staff to evaluate another part of the system need
to be shared. For instance, I'm going to give you an
example, the hydrologic factors that are important for
waste package performance. A lot of people, when we
came into these meetings, felt that dripping was
important for waste package corrosion, when in fact
through the discussions we were able to bring
everybody to a common understanding that in fact waste
package degradation relies more on relative humidity.

So the KTI staff rated the agreements, and
they rated the agreements for importance, based on a
scale of one to five, one being low importance and
five being high importance. And we used these ratings
to facilitate the discussions during the meetings and
to enable the staff who rated the agreements to
explain to each other why the agreement was important or
not. We asked the staff for the basis of the ratings
and to discuss those bases during the meetings.
For instance, we wanted to collect the
specific references to analyses that have been
performed that point to their basis for the ratings.
Now, due to the time constraints at some of the
meetings, we were not able to collect those specific
references, and we hope to do that in a Center
deliverable later this year.

In our guidance, we did not strictly
specify the factors to use in determining the rating,
and we did this to gain some of the breadth of staff
perspectives, to get a flavor for where people were
coming from when they made their decisions based -- or
made their ratings on importance.
And some of the factors that the KTI staff
considered in rating the agreements were things that
could be considered in determining a risk-informed and
performance-based program. For instance, the
importance of information required by an agreement was
one factor that they used to make the ratings.
Another example would be --

The importance of information --

MR. GROSSMAN: Required by an agreement.
That was one factor that people used.

does that mean? I mean if it's not risk -- you're
contrasting it and saying it's not risk but it's
importance --

MR. GROSSMAN: Not everybody rated based
on risk.

trying to understand what importance is then.

MR. LESLIE: Let me try to take a stab at
this. Remember that there's a multiple barrier
requirement that requires a description of the
capability of the barrier, and so every barrier may
not be that important to risk, okay? But for DOE to
demonstrate the description of the capability of the
barrier, we've asked information. Why is it that the
unsaturated zone is an effective barrier? They have
to make that demonstration, they have to provide that
description. And so a particular agreement might not
be very important in terms of risk but is very
important for DOE's ability to be able to describe the
barrier. Without the information, they can't describe
the barrier. So hopefully that clarifies how some
people viewed some of the agreements.

MR. GROSSMAN: I'll provide a few other
examples here. Some of the agreements associated with
seismicity were rated by some of the staff as important
because of the high visibility of the issue to
external stakeholders and because in the past these
topics have gained attention in licensing hearings for
major nuclear facilities. The staff rated the
agreements as important even though the staff
knew that from a true risk perspective these
agreements were probably not important.

MEMBER GARRICK: Yes. I think one of the
things we're wrestling with is that we can certainly
sympathize with the observation that risk is not the
only factor to be considered here, but what we're
really trying to understand is after you've considered
all the other factors, what kind of role did risk
really play? And to the extent that we can understand
that, then I think it would answer a lot of our

MR. LESLIE: Chris is going to go through
three examples of the igneous activity, the container
life and source term and repository design and thermal
mechanical effects of KTIs, and hopefully after he
goes through that, that will go a good ways to address
your question.


MR. LESLIE: And if it's still there, ask
it again.

MEMBER GARRICK: Right. See, part of the
confusion is that this is referred to as the risk
initiative. And then you immediately proceed to tell
us that it's a lot more than that and that it covers
a lot of other things and so on. And so to the extent
that you can help us get resolution on the degree to
which it is a risk initiative, it will be very

MR. LESLIE: Yes. And we did not call it
a risk ranking exercise.


MR. LESLIE: It's a Risk Insights
Initiative. And one of the things that you'll see is
that to do any sort of ranking, you need to make sure
that you have all the risk information available. So
if you assume you have all the information available,
you can rank things. I approached this and said do we
have all the information necessary to rank them in
risk space? So what is the risk insight behind this
agreement? I wanted the staff to be able to say,
"Well, here's the document, here's the calculation
that says this agreement makes a factor of ten difference in
dose," okay? So when you describe this, it really is
a process of learning. It's not we have the risk
insights, and this is what the results are.

MEMBER GARRICK: Yes. The problem here
that we're trying to wrestle with is that we're
putting the spotlight here on risk and its role, and
the issue that we want to understand is that given the
way the Initiative is being interpreted, is there a
chance that when you put the real risk influence in
context that there's not much context there and that
it's business as usual? That's the thing I'm trying
to get an understanding of, that it is not business as
usual, that there is in fact tangible, perceptible
movement towards the decisions being truly risk


MR. GROSSMAN: I think this Initiative
will address that. We're still moving through it, and
one of the things that we hope to do, which we'll
address later on in the Center deliverable that comes
out later this year, is we want to make sure that
those risk insights that we're using for the basis of
our agreements and issue resolution process are in
fact documented.


MR. GROSSMAN: So that will be one of the
outcomes of this. We're still working toward that.
So to deal with the last bullets on this
slide, I just want to make the point that in general
we found that the Key Technical Issues staff rated our
agreements from a local perspective and that they had
a sense of what was important within their agreements.
And also, by the same token, the Performance
Assessment staff tended to rate the agreements
relative to an overall system perspective. And we'll
see some of that later as I move through the three

So during these meetings, we had the
discussions, and we discussed each individual's ratings
of the agreements, and what we tried to do was come to
a common understanding of the importance of that
agreement. And if that common understanding was
reached, then the agreement was given a composite
rating reflective of the staff ratings and discussion.
Now, these composite ratings
can be used within a KTI to determine which agreements
and subissues are most important.

MEMBER WYMER: Were there any places where
you could not reach a common agreement?
MR. GROSSMAN: There were in fact places
where we could not, yes, and we'll have a summary
number of how many there were.

Because the KTI staff tended to rate more
from a local perspective, the composite ratings could
not necessarily be used outside of the KTI to
determine the overall importance of the agreements to
system performance. And I would recommend caution
in trying to directly apply these ratings that way.
So let's jump into some of the findings
from our discussions in ratings of the agreements. We
learned that in general there is an understanding of
what issues are important within each KTI.

CHAIRMAN HORNBERGER: Doesn't that have to
be true by definition because of the way you have
defined importance? I can't picture it otherwise, because you asked people
within the KTIs to judge which was important and then
your conclusion is that they tend to understand which
issues are important.

MR. LESLIE: The perspective you'll get by
comparing CLST, igneous activity and RDTME will
capture that concern. You'll see that CLST is an
example of where they have a very good understanding,
they're consistent with performance assessment. So
we'll get to that specific example.

MR. GROSSMAN: We also learned that those
-- that the KTIs have an understanding of which issues
are important in their area, but that not all the KTIs
have a good understanding of where they fit into the
bigger picture of overall performance. We also
learned that not all the KTIs are at the same place in
terms of being risk informed, and we have a few
examples, as Bret mentioned, that we hope to go
through to illustrate this, namely the igneous
activity KTI, the container life source term KTI and
the repository design and thermal mechanical effects

And because we had staff from both the
technical areas and performance assessment
participating in these discussions, it allowed for
staff to improve their understanding of the overall
performance of the repository, and in fact we had
several staff tell us afterwards that this was
an invaluable exercise because it allowed them to gain a
broader perspective on the performance of the
repository and to improve their understanding of just
what the actual important processes are.

Moving on to Slide 11, we have an example
here of the initial igneous activity agreement
ratings, and these are the ratings from before the
meeting with the KTI to discuss the ratings. And what
we have here, let me explain this graph. Along the
bottom we have the agreements, and we have the
importance scale on the Y-axis, rated from one to
five, with one being low importance and five being
high importance. The dark blue bars represent the
initial ratings by the igneous activity KTI staff, and
the hatched red bars represent the initial ratings by
other staff. And the other staff was comprised of
staff from outside the igneous activity KTI who had
familiarity with the agreement. Most notably, we saw
a lot of Performance Assessment staff involved in this
rating since they deal with the entire system.

And one of the big things to take away
from this is that there was widespread divergence of
opinion on the importance between the two groups.
CHAIRMAN HORNBERGER: Do you have on the
top of your head what Agreement 2.07 is?
MR. LESLIE: Not on the top of his head
but on the table. Document the basis for airborne
particle concentrations used in the TSPA in Revision
1 and inhalation.

there, how about 2.17?

MR. LESLIE: DOE will evaluate whether conclusions
that the risk effects of eolian and fluvial
remobilization are bounded by conservative modeling
assumptions in the TSPA, and then DOE will examine
these things.


MR. GROSSMAN: Now, one of the reasons for
some of the widespread differences with the igneous
activity, which we pointed out a little earlier, is
that people used different factors to rate. For the
Agreement 2.07 in which you see a large difference
between the other staff and the Igneous Activity
staff, the Igneous Activity staff tended to rate that
of lower importance because it's been completed. So
that was one of the reasons that they used for their
rating there.

Now, during the discussions with Igneous
Activity, we were able to reach a common understanding
for --

MEMBER WYMER: I don't really understand
that too well. How can completing a task diminish its

MR. GROSSMAN: That was some of the
perspective that staff is using.

MR. LESLIE: Yes. You would have fit
right into our conversation, Ray.
You sound a lot like Tim McCartin in that

MEMBER WYMER: I'm honored.

MR. GROSSMAN: Slide 12 summarizes some of
the findings from the Igneous Activity KTI. And
repeating what I said, there was a lot of initial
divergence of opinion on the importance of the
agreements, but during the meetings we were able to
reach a common understanding for 13 of the 19
agreements. And one of the things to note with
Igneous Activity is that we had very clear focused
discussions, and that in general the Igneous Activity
staff were able to identify the exact factor of the
impact that the agreement will have on the

For the six agreements for which we could
not reach consensus, we agreed to hold future
discussions and also determine where
additional analyses might be needed.

MEMBER GARRICK: Just as a matter of
process here, and you may have answered this, did both
groups have access to the same information, the same
evidence, number one, and was there any activity,
in advance of getting their judgments, along the lines
of briefings or presentations or the type of
thing, for example, that you would do if you were running
an expert elicitation in accordance with NRC rules on
running expert elicitations? If the two groups are
not looking at the same knowledge base or the same
supporting evidence, that's one thing. But if they
are looking at the same information, then it's a
matter of how much the individuals have dug into the

MR. GROSSMAN: We did not specify the
exact information they should use, and one of the
reasons was that we wanted that to be brought to the
table during the discussions and to lay that out so
everybody could see the different bases that people
were using for the agreements and to help everybody
arrive at that information together. So we didn't
limit the information that they could use or specify
which sources.

MEMBER GARRICK: So this wasn't really a
very formal process. This was pretty darned informal.
It wasn't a very structured process.
MR. LESLIE: Yes. Let me try to compare
it to the briefing you had at last month's meeting.
It was not structured questions, and the reason is
that we didn't design it that way. One, we really
wanted to reinvigorate the issue resolution process.
We hadn't, since the technical exchanges, gotten
the PA staff and the technical staff together to talk
about things, communicate. And that's really what
this is about, to make sure that they're on the same
page. You said something about did they have the same
information available? Well, the same information
happens to fill this table.



MEMBER GARRICK: Well, that's why also,
Bret, I asked the question was there any preliminary
activity that would have attempted to digest that for
the benefit of the two groups that were looking at --
MR. LESLIE: No. It was an ugly sausage
making experience.


MR. LESLIE: But I think the thing was that
that both the PA staff learned about some of the
details that the KTI staff knew, okay? And the KTI
staff learned from the PA perspective of some of the
different subtleties. And I think unless that
information is shared in a frank discussion -- and
that's why we didn't structure it and say, "Use this
factor to rate it and create it." I really wanted to
have that discussion so that people could come away,
again, with this common understanding.

In many cases, everyone was already on the
same page. And so when I looked at the initial
ratings, and I was the Facilitator, I was picking at
the extremes, on the staff members who rated it a one
and a five, to understand where they were coming
from. And the whole idea was to make sure that staff
member Y was aware of the information that staff
member X had. And often they were aware of
it. Sometimes they still maintained that divergence,
other times, "Well, now that you say that, oh, yes, I
can go from a one to a four or a five," because the
staff member was forced to say why this is important
in terms of risk.
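The facilitation step just described, probing the raters at the extremes of the 1-to-5 scale, can be sketched as follows. This is a hypothetical illustration; the function, the rating data, and the anonymized-index scheme are assumptions, not artifacts of the actual Initiative.

```python
# Hypothetical sketch: given one agreement's anonymized initial ratings on
# the 1-to-5 scale used in the meetings, flag the raters at the extremes
# (the ones and the fives) so a facilitator can probe their reasoning.
# The data and function names are illustrative, not from the Initiative.

def flag_extremes(ratings, low=1, high=5):
    """Return the anonymous rater indices whose score sits at an extreme."""
    return [i for i, r in enumerate(ratings) if r == low or r == high]

initial = [3, 1, 4, 5, 3, 2]       # six anonymous raters, one agreement
print(flag_extremes(initial))      # [1, 3]
```

The facilitator would then ask raters 1 and 3 to explain their divergent scores before any re-rating.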

consistent with across the board for what you've done
as a fraction of the numbers that you achieved common

MR. HAMDAN: Igneous versus all KTIs.


MR. LESLIE: About, about.

MR. GROSSMAN: Two-thirds.

MR. LESLIE: So we ended up with ten
meetings and 51 where there was divergence at the end.
So five versus six.

CHAIRMAN HORNBERGER: That's 51 versus --
out of --

MR. LESLIE: Out of all ten. Nine KTIs in
preclosure had 51 agreements, so that would be five
per KTI. So is six out of the norm? No. That's an
average. And Chris will get to one where there's more
than that.


MR. LARKINS: Just a quick question. You
say Bullet 2 says KTI staff were able to identify the
impact of the agreements on dose. Does that mean
relative to the PA staff? If you take something like
aerosol size and inhalation effects, does the KTI
staff have a better understanding of the phenomena and
the impact than the PA?

MR. GROSSMAN: Well, when we talk about
the impact on the dose, what we're talking about is
that the staff at the meeting were able to identify
the actual factor, whether it raised the dose by, say,
an order of magnitude or by a factor of two, and
that's what we mean by they were able to identify the
impact. In other words, they had done the analyses to
say how this agreement will impact their performance.

MR. LARKINS: No, I understand that, but
more so than --

MR. LESLIE: This was like Brit Hill and
John Trapp saying, "This agreement means a factor of
ten," and if you look at the parameter range we're
looking at, the uncertainty in that could affect the
dose by a factor of ten. Now, sometimes the PA would
say -- Tim or Dick would say, "Well, you know, if I
look at the sensitivity, it might only affect it by a
factor of two." And so some of the divergence was

But this was the KTI that was able to come
up and say, "Okay, we've done our sensitivity
analyses. This is the reason why we think this
agreement is so important. This is the one that
really could drive changes in a dose, either higher or
lower." And so that's the point that we're trying to
get at, that the KTI staff were really able to
identify the impact of their agreements in terms of
the impact to dose.
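The back-of-the-envelope comparison just described, a full factor-of-ten swing from the KTI view versus only a factor of two from the PA sensitivity analysis, can be sketched with a simple log-linear scaling. The model form, the exponent values, and the dose numbers are assumptions for illustration, not results from the staff's performance assessment codes.

```python
# Hypothetical sketch: scale a dose by param_factor**sensitivity, a
# log-linear assumption. sensitivity = 1.0 passes the parameter's
# factor of ten straight through; a smaller exponent reproduces the
# PA finding that dose moves by only about a factor of two.

def scaled_dose(base_dose, param_factor, sensitivity):
    """Scale dose as param_factor**sensitivity (log-linear assumption)."""
    return base_dose * param_factor ** sensitivity

base = 0.1                                # illustrative dose, arbitrary units
kti_view = scaled_dose(base, 10, 1.0)     # full factor of 10
pa_view = scaled_dose(base, 10, 0.3)      # roughly a factor of 2
print(kti_view, round(pa_view, 2))
```

The divergence in the meetings is then just a disagreement over the effective exponent, which further sensitivity runs were expected to narrow.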

MEMBER GARRICK: Let me ask something. If
you were to play the Weakest Link game here, and start
ferreting people out on the basis of what they know
about the details, would you expect convergence? In
other words, I don't know how many people you talked
to, it sounds like a relatively large number. It also
sounds like the people covered a wide spectrum of
points of view and expertise, and that's good, that's
good because it addresses some of these other concerns
that you already said that you're interested in. But
I'm curious from a technical standpoint how these same
results would look on ferreting out systematically the
people who know less about it than others in terms of
the real technical issues.

MR. LESLIE: Let me answer it this way.
When we did these initial ratings, I protected the
names. Chris and I knew whose ratings they were.
Everyone had a rating sheet of the initial ratings
when they walked into the meeting, which had no names
on it, but you could tell, by when I was asking the
question, who I was picking on, and figure out whose
ratings were whose. Everyone learned to different degrees.
Some people learned quite a bit in these meetings, and
other people learned less. I don't think that you
could say that we'll pick on this guy because he
doesn't know. I mean he might not have been involved
as much in the integration of things. And so this,
again, was kind of a learning exercise. And if you
were to ask him, he would say, "Yes, I learned
something from this exercise and I thought I knew
everything." Well, he might not say that but close.

So, again, if we can go through the two
other KTI examples, you may get a better flavor.

MR. LESLIE: Right. Okay. I guess the
important thing here is what do the regulators learn
from this in terms of contributing -- that would contribute
to the decisionmaking process? No doubt the
individuals involved in this exercise are going to
learn to varying degrees, but I think that one of the
reasons that we're doing all of these things is to try
to get a better handle on what's really important here
and what should be the path forward with respect to
resolving issues.

MR. LESLIE: Correct.

CHAIRMAN HORNBERGER: I know you want to
get on. Okay.


CHAIRMAN HORNBERGER: But one quick one,
just a follow-up on John Larkins' question. So your
Bullet 2, "The KTI staff were able to identify the
impact of the agreements on dose," is that true for
the six out of the 19 where you didn't reach a common
understanding or only for the 13?

MR. GROSSMAN: No. It was mainly for the
13 agreements of the 19.

MR. LESLIE: Or in some cases, there are
particular agreements where there was -- again, the PA
staff might say, "We've done a back-of-the-envelope.
We haven't completed our run and done the sensitivity
analysis, but we think it's going to end up down
here," and the Igneous Activity folks say, "Well, we
think once we complete this we'll end up here."
Again, what we were after is for those agreements
where there was some disagreement or divergence, were
there additional analyses that were going to converge
them? Tim, did you want to add something?

MR. McCARTIN: Yes, if I could -- Tim
McCartin, NRC staff -- if I could just add one thing
about what does the regulator get, and you may be
getting to this, and I apologize if I'm stealing your
thunder, but a lot of very useful discussion went on
on the technical aspects of why this agreement is
there. Having gotten to that through the technical
discussion, the second thing that Bret and Chris asked
was, "Okay, now looking at the agreement, what's the
level of effort to do this, and is it consistent with
the importance we've talked through?" And I think
that's where part of that buy-off is, do we see a
consistency between, well, how important this was,
what it meant to the calculation and are we spending
20 FTE on something that looks like it's a minor issue
or, gee, we've got a half FTE on something that looks
to be extremely important. And when you talk to that
level of effort, I think that, in part, is getting to
part of the usefulness to the regulator, I would say.

Go ahead, Chris.

MR. GROSSMAN: Thank you. Moving on to
our second example, here I have the container life and
source term initial agreement ratings, and we have six
subissues in Container Life and Source Term KTI. The
top graph represents Subissue 1 and then Subissue 2
and 3 on the lower left and Subissues 4, 5 and 6 on
the lower right. And the graphs are structured the
same way as they were for the Igneous Activity KTI.
And some of the takeaway messages from
this slide are that most of the KTIs have an
understanding of the importance of their agreements
within their KTI, which we mentioned earlier. And
this can be seen here: Subissue 1, which deals with
the corrosion of the engineered barrier system, was
generally rated higher by both the KTI staff and staff
from other KTIs, higher than Subissues 2, 3, 4, 5 or
6. And this tends to be commensurate with its impact
on the performance.

MEMBER GARRICK: Doesn't this also show a
lot more agreement than the --

MR. GROSSMAN: Yes. That's the second
point to take away from this. Thank you very much.
In that here we do see a lot more
agreement between the Container Life and Source Term
KTI and other staff. And, again, a reminder here that
a lot of the other staff was comprised of Performance
Assessment staff, who tended to take more of a
system-level perspective --

MR. LESLIE: Can you infer from this that
chemists are just more agreeable than geoscientists?

MEMBER GARRICK: We haven't found that.

MEMBER WYMER: Or are we seeing an example
of brainwashing?

MR. GROSSMAN: And Slide 14 summarizes the
findings from the previous slide. I'd like to add
here, though, that for the Container Life and Source
Term, a lot of the risk insights were discussed on a
qualitative level, and unlike the Igneous Activity,
they were not able to put a factor of influence on the
performance. That's one of the things we hope to
address with the continuation of this project: to
collect those specific risk insights. And we were
able to determine during the meetings where additional
risk insights would be needed to help address some of
those shortcomings.

On Slide 15, I have my third and final
example, and here we have the Repository Design and
Thermal-Mechanical Effects KTI. These are their
agreement ratings, and the chart is structured the same as the
previous two. And here we see that not all the KTIs
are in the same place. We have a lot of divergence
between other staff and the RDTME KTI staff.
And what this says to me is that staff
from the Repository Design and Thermal-Mechanical
Effects KTI needed a better understanding of the
importance of their agreements in terms of the overall
system perspective because of this large disagreement
between the two.

MR. HAMDAN: Could it be the other way
around? Can it be the other way around that the SDS
doesn't understand the details of the KTI and the
issues involved?

MR. GROSSMAN: That is a possibility that
maybe these issues were not abstracted well into
analyses, for instance, and they just haven't been
able to perform the adequate analyses to determine

CHAIRMAN HORNBERGER: That would mean that
Tim McCartin doesn't know everything.

MEMBER WYMER: Don't experts, in general,
seem to think their field is the most important,

going to say, you know, I thought that all the charts
had looked exactly like this.

MR. LESLIE: But they don't.

CHAIRMAN HORNBERGER: I know. This is the
one that isn't surprising to me, because the experts
in the field, you would think that they would rate
what they're doing as very important.

MR. GROSSMAN: We also summarized a little
more broadly and looked at the divergence in the
ratings for subissues. The ratings were still based
on the agreements, but now on how the agreements
within a subissue compare between the KTI staff and
staff from another KTI. And what we found here is
that these subissues were the ones in which we had
large initial divergence in the ratings. And one
thing to note with these is the difference was between
Performance Assessment staff and the KTI staff here.
And the thing to note about that is in all the cases
here, the KTI staff rated them higher or more
important than the Performance Assessment staff,
alluding to what Dr. Hornberger had suggested earlier.

MEMBER WYMER: To what extent would the
influence of other KTIs come in here? For example, I
can envision that somebody might not think these KTIs
were particularly important, because container life
and source term was the 800-pound gorilla, and
therefore these are down-rated relatively. How does
that play into this?

MR. GROSSMAN: There was some of that in
the ratings, and some staff rated everything but the
containers as very low importance or much lower
importance, I should say, because they felt that the
container was so dominating in the system.

MEMBER GARRICK: We saw that in that first
exhibit, did we not, some of that?

MR. GROSSMAN: But there are also staff
who looked at other things, such as how do they impact
the multiple barrier requirements, and there still
needs to be a description of multiple barriers. And
how do these agreements get to addressing that issue
and their importance for determining that?

MR. LESLIE: Let me expand upon that a
little bit. I mean let's go to the second tick mark
which was faulting. The staff, the KTI staff, Phil
Justice and John Stamatakes at the Center, understand
from a risk perspective, again, that those agreements
aren't very important, but they're saying, "Look at
the press, look at what issues have been litigated.
We've done it in other licensing, and this is why it's
important, and that's why these agreements are more
important than they are from a purely risk
perspective."

You've got to remember, the staff used
multiple lines to determine whether something is
important, and that's why we have to use caution when
you're talking about these different divergences
between what the PA did and what the KTI did. In some
cases, you're right, from a risk perspective, they
thought -- the KTI staff thought that their work was
extremely important and rated it high. But in other
cases they used other factors to say it was important.
So there's a subtle mix in here when you talk about
these divergences.

MR. LARKINS: Were all of these
importances ranked the same way or was there any
weighting of importance measures?
MR. LESLIE: No. Again, it was
unstructured, primarily to learn where we need to
bring closure. I mean if you do it structured, you
might not get to this kind of result, and what I was
looking for was an honest assessment of all the staff
of what they thought these agreements were.

MEMBER GARRICK: Yes. One of the things
that's a little hard to appreciate here with respect
to this issue of divergence is why design would be in
that group, unless the interviewees are considering
the large uncertainty about faulting and seismicity.
Because given a certain seismic risk curve and a
certain faulting history, there shouldn't be that much
uncertainty about the design. In other words, I think
we know how to design against a seismic risk curve
pretty darn well.

And so I'm a little surprised to see that
in there with something like faulting and something
like fracturing and geologic setting unless we're
double counting here in terms of our interpretation.
In other words, if the divergence is driven in the
design question by seismicity and faulting, then that
doesn't reveal a great deal about the issue of
confidence in the design.

MR. LESLIE: Let me try to answer for Raj
Nataraja, and see if I can recall some of the
reasoning behind this. If we had a design from the
Department of Energy that we could compare to the
criteria, we might have a different answer. Okay.
One, we don't have the level of design detail, and how
often has Department of Energy changed its design? So
part of the uncomfortableness on --


MR. LESLIE: -- that -- you know, again,
we asked a very broad question to try to get at all
the subtleties of why people thought agreements were
important. That team would say, "Okay. If we had a
design and we could do the analysis right now, we would
be able to say whether this is important or not, but
we're still waiting."


MR. GROSSMAN: No problem. And, finally,
one last slide here on divergence. These subissues
represent subissues where the Performance Assessment
staff rated the agreements higher than the Key
Technical Issues staff. And we see here some familiar
faces, the engineered barrier system corrosion, the
environmental conditions for the corrosion and the two
igneous activity subissues.

So to summarize all of our composite
ratings, I want to throw in a little caveat that the
Initiative was not rigorously designed to determine
which agreements are more important from a pure risk
perspective to repository safety or overall
performance; rather, our primary purpose was to foster
better integration and communication among the KTI
disciplines and the Performance Assessment staff. We
wanted to bring the people together in their
understanding. We also wanted to provide staff with
a better appreciation of the total system. And we
feel that these goals were met during the discussions.

Another purpose of the effort was to help
the staff prepare to review and respond to the
Department of Energy's recent rebaselining initiative
and to identify any impacts to the NRC's current
program. And we feel that these goals were met during
the discussions and the subsequent activities.
So the results were based on approximately
16 roundtable discussions where we openly discussed
and challenged each other on whether and why an
agreement is important to safety and, if so, how
important. It should be recognized that some of the
staff considered factors other than just risk when
determining whether an agreement is important. These
results represent a composite of the ratings by each
KTI group and not necessarily that a consensus was
reached on these agreements.

The results should also be considered a
snapshot in time that will and should be revisited as
additional analyses are done to fill certain
information gaps discovered through this Initiative.
Therefore, the data in this slide and backup Slides 32
through 38 should always be placed in the proper
context.

MEMBER GARRICK: One of the things that
we've talked a lot about in terms of achieving greater
transparency is the linkages between issues and the
risk values, for example. If you were to decompose
this, not by the agreements, but rather by what the
PA, for example, identifies as the 300 leading
contributors to the risk of the repository, and then
looked at that against this, would you expect to see
anything that would be a basis for any type of
correlation? What I'm saying is that if you looked at
the TSPA and in the TSPA you were able to somehow turn
up the microscope sufficient to identify the top 300
contributors to the risk calculation of the TSPA,
would there be anything that you could do that would
correlate that information with this?

CHAIRMAN HORNBERGER: You're talking about
that 292 somehow -- is that why you use 300?

MR. LESLIE: Well --


MEMBER GARRICK: Yes. I know. I know
they're agreements, and that's what I'm getting at

MR. LESLIE: There is not a one-to-one

MEMBER GARRICK: Well, I know there's no
one-to-one correlation, and I know these are
agreements, but what I'm really trying to get at is on
the one hand we seem to be using the agreements as a
result to partially address the question of how risk
informed we are. And what I'm --

MR. LESLIE: No. The agreements are the
result of using the risk insights. And the example
I'm going to give later will really tie things
together, and I'm going to preview it right now. The
DOE puts more reliance on the unsaturated zone than
the saturated zone. We look and use our sensitivity
analysis to say the saturated zone is more important.
So when the KTI guys went out there and talked with
DOE, there are more agreements in the unsaturated zone
than in the sat zone, because DOE is asserting
performance there.


MR. LESLIE: Which parameters? We asked,
okay, if you look at your sensitivity, it's the flow
spacing interval or the active fracture model. There
are a couple different ways you could look at the
agreements. You could go back and say does this
agreement look at which part of Part 63 that's
addressing parameter uncertainty or model uncertainty,
but the results of the performance assessment and our
knowledge that went into it were a major contributor
in determining what the agreements are. It's not
that, oh, we made the agreements
and let's see if they fit the risk perspective. I
think what --

MEMBER GARRICK: Yes. Part of my problem
is that a lot more went into this assessment, this
rating that you've done than the matter of risk. So
I'm still left kind of not knowing to what extent
you're really risk informed.

MR. LESLIE: Let me try to answer it a
different way. You didn't end up at a high rating, a
composite rating unless PA was aboard. Okay? So I
would say probably highs and medium highs, these are
the things that the PA and the KTI staff said, "Yes,
this is a critical parameter in the performance
assessment." As you go further and further down,
these are reflecting the input -- you can't get a
composite score of low unless the PA folks are also
saying it's low, that this isn't a very important
piece of information in the performance assessment.
So this is reflective. You've got to remember, it's
a composite score, but it's a composite score of the
PA staff who were ranking things with risk primarily,
overall system performance primarily.
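The composite-score behavior Mr. Leslie describes, that a rating cannot come out high unless PA is on board and cannot come out low unless PA also rates it low, can be sketched with a simple average. The averaging rule itself is an assumption for illustration; the transcript does not specify how the composite was actually aggregated.

```python
# Hypothetical sketch: if the composite is the mean of all raters' 1-to-5
# scores, PA reluctance caps the high end and PA enthusiasm props up the
# low end, matching the behavior described in the discussion. The
# aggregation rule and the sample ratings are illustrative assumptions.

def composite_score(pa_ratings, kti_ratings):
    """Average every rater's 1-to-5 score into one composite score."""
    scores = pa_ratings + kti_ratings
    return sum(scores) / len(scores)

print(composite_score([5, 4], [5, 5]))  # 4.75: high because PA agrees
print(composite_score([1, 2], [5, 5]))  # 3.25: PA reluctance caps it
print(composite_score([1, 2], [1, 1]))  # 1.25: low only when PA is low too
```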

MEMBER GARRICK: All right. Well, let's
let you go ahead. Yes, Latif?
MR. HAMDAN: Also, the question -- but the
thing is you are still talking about 16 and 25
agreements. And the answer to the direct question is
it seems that this exercise was done to evaluate the
agreements, and maybe we can't answer his question
about 300 contributors to risk at this time. We may
have to do another exercise.

MEMBER GARRICK: Yes. That's a good
comment. But at the same time, this comes under the
byline of a Risk Insight Initiative, and so we keep
pounding on that.


MR. McCARTIN: Yes. Tim McCartin. I
think your point's well taken. There was a lot of
hand-wringing as to how this relates to something in
the performance assessment calculation, and I think
one of the things that -- and once again, I don't know
if this is in one of Bret's later slides -- but there
were promises made that we need to do a few more
calculations, we need to make some things a little
clearer in terms of what is the risk information here.
And I think that is a key part of this. I think it's
there. I think there is a lot of information there,
and --

MEMBER GARRICK: Okay. Well, we keep
jumping the gun on Bret, Chris, and we'll try to hold
ourselves back a little. We'll try to --

CHAIRMAN HORNBERGER: We'll try to be more

MR. McCARTIN: Right. But let me just add
one thing. One of the issues, and it requires really
looking at what's in the code, and it's not easy, is
if your container lasts more than 10,000 years, you
have a zero dose. And so people were saying, "Well,
nothing else matters," and that's not true. As Bret
indicated, we have a multiple barrier requirement, and
it's looking at what have you done in your PA code in
terms of the capability of these other barriers? And
you can look at things like retardation in the
saturated zone. If that has the potential, it's very
important, so that gets factored in.
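The code behavior Mr. McCartin describes, a zero computed dose whenever the container outlasts the period analyzed, which can mask the capability of the other barriers, can be sketched as follows. The function, the lifetimes, and the dose values are purely illustrative assumptions, not the actual PA code.

```python
# Hypothetical sketch: when the container survives the whole analysis
# period, the computed dose is zero, hiding what the other barriers
# (e.g., saturated-zone retardation) would contribute. All numbers and
# names here are illustrative, not from the staff's performance code.

def computed_dose(container_life_yr, downstream_dose, period_yr=10_000):
    """Zero dose while the container is intact for the whole period;
    otherwise the remaining barriers control the release."""
    if container_life_yr > period_yr:
        return 0.0
    return downstream_dose

print(computed_dose(12_000, 0.04))  # 0.0: looks like nothing else matters
print(computed_dose(8_000, 0.04))   # 0.04: the other barriers do matter
```

This is why the multiple barrier requirement forces a separate look at barrier capability rather than relying on the bottom-line dose alone.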

MEMBER GARRICK: Right. It's like the
comment about the role of the saturated zone.


MEMBER GARRICK: Was that conclusion
reached taking into account the compliance time, for
example? So --

MR. LESLIE: Again, the regulatory
requirements aren't just dose; there's a multiple
barrier requirement --


MR. LESLIE: -- and what we'll evaluate is
their performance assessment, their models, how they
describe the barriers that they're saying contribute
to the performance. If DOE incorporates lots of
models which have very little contribution to risk,
that's their prerogative. Will we review it to the
same extent as an abstraction that's really important?
No. And I'll get to that at the end in the one
example, and I'll try to tie pretty much everything
together.

CHAIRMAN HORNBERGER: At least you hope
you will.

MR. LESLIE: Well, I said try. I didn't
say I would, I said I'd try.

CHAIRMAN HORNBERGER: If you get to the

MR. LESLIE: Yes. Well, that's why we
signed up for a three-hour slot. Go ahead.

MR. GROSSMAN: Okay. One thing to note
from this slide is that for many of the agreements we
did reach a composite rating or a common understanding
of the importance of those agreements. However, for
some of them, we could not, and we've agreed to hold
future discussions to resolve these divergences and to
plan some additional analyses in areas where they're
needed to help elucidate the importance of those
agreements.

And, finally, to wrap up my portion of the
talk, I'll talk a little bit about -- I mentioned
earlier that we also discussed the expected level of
effort required of DOE to complete the agreements, and
what these numbers represent are the composites for
those levels of efforts that we expect. I'm sorry?

CHAIRMAN HORNBERGER: What is a level of
effort?

MR. GROSSMAN: These are the number of
agreements that fall into these categories, and the
categories are broken down roughly so that small is
below about 1 FTE, medium is in the range of 1 to 5
FTE, and high is anything above 5 FTE.
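The level-of-effort bins just quoted can be sketched directly. Since the transcript says the categories are only rough, the exact boundary handling here is an assumption for illustration.

```python
# Hypothetical sketch of the level-of-effort categories: small is below
# about 1 FTE, medium roughly 1 to 5 FTE, and high above 5 FTE. The
# boundary conditions are assumed, since the bins are stated as rough.

def effort_category(fte):
    """Bin an estimated level of effort (in FTE) into the three categories."""
    if fte < 1:
        return "small"
    if fte <= 5:
        return "medium"
    return "high"

print([effort_category(f) for f in (0.5, 3, 20)])  # ['small', 'medium', 'high']
```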


MR. GROSSMAN: It should be noted that
there are 16 of them where the level of effort still
needs to be resolved, and there's some uncertainty,
and the staff has engaged their DOE counterparts to
resolve this.

And with that, I'll turn it over to Bret,
who will take you into some of the implications and
how to implement this into the program.

MR. LESLIE: Okay. And while we're doing
a switchover on the mike, there are several slides
pretty much from Slide 32 through, I think, 38, which
are kind of backup, where we pulled out what are the
major -- most important agreements and kind of
separated them out in terms of are there some
important agreements with low effort, moderate effort
or high effort, and where there are some importance
and level-of-effort mismatches. And so these were ones where we
could talk to DOE and I'll allow you to look at that
while we switch places.

Well, thanks, Chris, for warming up the
audience, although they didn't need any help. Let me
move on. I guess that's what happens when you're the
first briefing in a three-day meeting.
By this point, what I'm going to be
talking about are some of the general conclusions.
One of the things, and you got this flavor from Tim,
but individual staff members said this was probably
the most important exercise and task that they've done
in several years. And it was really communication and
integration. And this is something the Committee has
talked about for quite some time, and this was, again,
focused on that important issue.

And I think we really -- the staff came
away with a much better understanding of how the
repository performs by getting some of the insights
from the PA staff. And, again, I'll go back to the
example where the hydrologist came in, because DOE has
identified seepage and infiltration as a principal
factor in repository performance, okay? Our staff says, "Oh,
yes. That's important." Well, the PA staff may
ignore the document and look at what the results say,

And so as we go through and look at how
GoldSim actually abstracts things and what is driving
performance, you come away with, okay, for the first
10,000 years, what controls waste package degradation?
It turns out it's the critical relative humidity. So
seepage and dripping and infiltration is relatively
unimportant.

Well, the hydrologists are thinking, well,
we need to be focused on these things. Now they can
come back and say, "Okay, well, it's not that
important, and we can step back a little bit from
where we were, because here DOE is saying it's a
principal factor, but in terms of performance,
it's not that important."
So this was really an important point,
because --

MEMBER GARRICK: But, again, Bret, that
really depends on some very straightforward things,
like the corrosion model that you choose --

MR. LESLIE: That's correct.

MEMBER GARRICK: -- as to whether seepage
is important, for example.

MR. LESLIE: And, again, it's based upon
what the Department of Energy is going forward in the
various -- I mean it takes into account all that
information. I mean you get what is put into the

One of the things that I forced the people
to do in the meetings was really to talk at the
layman's level. I mean a lot of this common
understanding is the specialists tend to talk in their
jargon, and what we were trying to get at is to
explain things in simple terms so that everyone can
understand and be on the same page. And we're going
to try to document this in the Center deliverable,
and I'll talk about that in a little bit.
I think as Chris pointed out in some of
the slides, that the KTIs tend to understand which
issues are most important within their area. It's not
true of all the KTIs; you saw one example of that. I
would say this exercise really reinvigorated the issue
resolution process. It was the first time we've gone
through all 293 agreements with everyone involved. You
know, those PA folks, like Tim and Dave Esh, were
there at every meeting.

Fifty hours of telecoms and arguments
isn't necessarily a fun thing to do, but what it did
is it told the KTI staff, okay, this agreement's
really important, and DOE is proposing to say that
this agreement is not important in their rebaselining
effort. So what happened is people like Tae Ahn and
Neal Coleman would take that information and say,
"Okay, I really understand that this agreement's
important." They got on the horn with their DOE
counterpart and said, "Okay. This is a point of
discussion. We need this information, and maybe you
don't understand why we need this information." So we
saw immediate results, that the people were using
these meetings to help them in their issue resolution

I think a very important thing is we
identified areas for improvement. And now what I'm
going to do is walk through some of these. We've
touched on them a little bit. Tim has aptly stolen
parts of my thunder, but that's okay, and let's move
on. What I'm going to try to do is summarize, and
I'll go into a little detail, but, basically, as the
Facilitator, I had a parking lot, and so when I had
issues that were kind of beyond the scope of the
discussion of the individual agreement, I took down
those ideas and put them in the parking lot. And as
we went through these meetings, we saw some common
themes, and some of these themes were, well, the staff
needs training on interpreting what Part 63 really
means or the staff needs training on how the Yucca
Mountain review process is actually done.

Also, specifically, when we went through
these agreements, the analyses, okay. Some of the
broad divergence, for instance, in the repository
design and thermal mechanical effects area, you could
group maybe ten of those agreements around one thing,
which was rock fall. Now, the PA staff were saying,
"Well, we think -- we haven't actually gone down and
-- I can do this thought process in my head and do an
event sequence and say rock fall is not very
important. But have I actually gone down and written
it down and explained that to the KTI staff?" And the
answer was, "No, we haven't. We haven't incorporated
it in our performance assessment code, so it would
never show up as important in the performance code."
So what we did is when we got to these
agreements, it is -- Tim is saying, yes, it is. It is
partially in that, that's probably the wrong portion
of the code to address, but there were other areas
that aren't in the performance assessment because the
PA staff have thought that these processes weren't
very important.

So what was great about this is people
like Dave Esh and Doug Gute down at the Center stepped
up to the mike and said, "Okay. Well, we need to --
from a PA and a KTI space, we need to have a better
handle on drift collapse and rock fall." So those two
guys got tagged in the meeting minutes to do an
analysis, from the KTI perspective and from the
performance assessment perspective, so that they could
both agree that all the relevant factors were being
considered and could get those risk insights.
And, finally, one of the things that we
did was that there was kind of a general understanding
that we need to find some ways to better integrate and
communicate. So moving on to Slide 22, some of the
training topics -- I've kind of grouped them into
common concepts that were consistently brought up.

And I'd like to point out that the
specific areas for the Part 63 requirements are on
Slide 38, where staff, for instance, wanted to know,
"Well, what is really required by the multiple barrier
requirement? What does description of capability
really mean? How do I actually do that analysis?"
And so what we've done is listed some of those back on
Slide 38. Again, how do I really perform a risk-
informed review? What is the thought process? Are
there some examples? How do I use the Yucca
Mountain Review Plan?

Now, none of this is really surprising,
okay? What has to happen, and the staff is in the
process of doing this, is that technical reviewers for
potential license applications have to be qualified.
Part of that qualification process is actually to do
these things. And so we're designing courses right
now to address these things.
We're also planning some, I wouldn't say
stop-gap, but some immediate training, and right now
we're trying to identify the scope, content and form.
For instance, can we have Tim come in and show how,
for retardation in the saturated zone, the staff
should review that description of the barrier
capability? Can we do that in a Yucca team meeting?
Can we do that via video so that the Center staff and
our staff are being trained at the same time? Some
of these are what we're working on right now. These
are some of the outcomes of that process. We
identified areas in which we need to have a common
understanding. Everyone needs to be on the same page,
needs to understand these various aspects.

Also, it was clear that staff needed a
little better understanding of certain aspects
associated with the licensing or hearing process. For
instance, they needed to get a better understanding of
how the hearing process actually works; can we provide
some examples of other licensing activities; and how
do contentions or potential contentions arise, and so
on and so forth?
So that's all I'm going to talk about for
training right now.

MR. LARKINS: Will the information on how
to perform a risk-informed review be available?

MR. LESLIE: Yes. In Slides 28 and 29, I
think I will try to walk through an example of that.
Yes, that information is contained in the review
plan. It's literally -- okay, we've written this
review plan. We haven't gone through and said, "Okay,
staff, in this section, this is really what we're
talking about." We haven't done that formally.
Everyone's seen the review plan, but you and I can
read the same words and interpret them differently.
And that's really what we were getting at with this,
because we would have these long, protracted
discussions: "Oh, but multiple barriers means I
analyze it this way with a one-off analysis," and Tim
would be coming from the other extreme and say, "No,
no. That's not what we're after. We're after the
descriptions of capability, and I can do this really
easily." So there's that broad -- we need that common
understanding of what those terms actually mean.
Other areas for improvement. I talked
already a little about the analyses, and I just gave
an example. Can Tim or someone else come in and teach
the staff, "Okay, let's look at this barrier, and this
is how we will try to review it in terms of the
description of the capability"? And so Tim has a
spreadsheet, I think, that's partially completed on
the alluvium in the saturated zone, and he would then
finish that calculation and bring it to the team and
say, "Okay, when I'm talking about description of
capability, this is really what we're after." We're
trying to come to grips with what is the proper way to
do this.

Another one is, and maybe Ray will
appreciate this, I did my own thought exercise in
terms of chemistry -- in terms of what processes, if
I were a geochemist and not a technical assistant
anymore, I would really be focusing on. So, you
know, you look at the drip shield versus the waste
package. We ask different questions in terms of
chemistry. For the drip shield, it's a reaction.

You're worried about fluoride. Well, you can do a
mass balance and say, "Well, how much fluoride could
I have?" I can use the hydrologist to tell me how
much water. I can do a bounding calculation to say
just how much fluoride could get there. Whereas from
the waste package side I'm more concerned because the
constituents could be catalysts. So now I'm not
worried about how much; it's whether things arrive
or not. So I'm planning to try to bring that type of
thought process, put it down in words, and talk to the
staff about how I envision things and how I focus on
things in terms of processes as well.
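The bounding mass-balance idea for the drip shield can be sketched numerically. This is an editorial illustration; the seepage rate, fluoride concentration, and time window below are all invented, not site data or NRC figures.

```python
# Illustrative bounding mass balance for fluoride reaching a drip
# shield. Every parameter value here is hypothetical.

water_flux_l_per_yr = 50.0    # assumed seepage onto one drip shield
fluoride_mg_per_l = 1.0       # assumed upper-bound fluoride concentration
duration_yr = 10_000.0        # regulatory-scale time window

# Bounding total: assume every litre carries the maximum concentration.
# (mg -> kg conversion: divide by 1e6)
fluoride_bound_kg = water_flux_l_per_yr * fluoride_mg_per_l * duration_yr / 1e6

print(f"bounding fluoride delivered: {fluoride_bound_kg:.2f} kg")
```

The point of such a bound is that if even the maximum deliverable mass is too small to matter, the process screens out; for the waste package side, where constituents may act as catalysts, no such mass bound suffices.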

We pointed out analyses to collect new
risk insights. In some areas, we haven't taken what
we've done in our brains and put it on paper, and we
haven't explained in some cases what we've done to the
KTI staff. We've really done the work, but we haven't
explained what we've done in our performance
assessment and our sensitivity analyses or explained
those results. Tim brought that point out in these
meetings. After five or six of these meetings, he
goes, "You know, we really involved the KTI staff in
developing the PA code and coming up with the
abstractions, and they spent a lot of time
doing that integration upfront. But they've done
these evaluations, and you heard about it last time,
the sensitivity evaluation -- has the PA staff spent
enough time going back and explaining what the results
of those sensitivity studies really mean?"

So, for instance, when I was here last
week, I heard Sitakanta say, "Well, for this
parameter, we looked at the most sensitive
parameters." I'm more concerned about the range on
this parameter, and on the other parameter I'm more
concerned about the mean value. So that information
needs to get fed back to the KTIs. So some of this is
we want to document the risk insights for each of the
agreements that already exist, and we think there are
quite a few of them, but we didn't capture them in
this process, in the meeting minutes. We know the
information exists. And then an analysis to refine
the risk insights, and this would be going back to the
container life and source term in addressing the
uncertainty issue. What are the processes that are
more important in terms of uncertainty and in terms of

To guide these analyses, as Chris
indicated, we developed draft action plans. We
wrote meeting summaries, and we had people down at the
Center and up here taking real-time minutes, just like
we do at the technical exchanges. So we don't have
a verbatim transcribed record, but we have two
records of everything that was said, and we've used
those meeting notes and the meeting summaries to
develop action plans. In other words, for the
repository design and thermal mechanical effects, we
need analyses on rock fall, drift collapse and focused
flow. And so what we're trying to do is to get a task
list to address each of the areas, and each KTI action
plan will be slightly different because there are
different needs, there are different places in the
MR. LARKINS: Quick question: Where the
uncertainty drives a particular risk insight, how will
you use that information?

MR. LESLIE: In many different ways. What
in particular are you focused on?

MR. LARKINS: Well, I mean if you have a
particular scenario or phenomenon that has a very
large uncertainty in a particular area, how are you
going to use that in terms of assessing whether DOE
has appropriately addressed the issue or the
uncertainty?

MR. LESLIE: Well, I think in some ways it
already is. I mean, that's why the -- and I'll go back
to the Igneous Activity staff folks, where the
uncertainty in these parameters -- you know, we have
agreements on specific parameters -- could drive it
lower or higher. And they're using those values to
focus on, you know, well, DOE may say this particular
agreement they don't think is important. Well, our
staff is going to say, "Well, look, by golly, here's
why it's important. What's the basis for you saying
it's not important?"

MR. LARKINS: Okay. I understand.

MR. LESLIE: Okay. Let me move on. And
I added integration and communication, and I'm now on
Slide 24. One of the things, again, and I'll talk
about this, is that really, for everyone to do an
integrated assessment, everyone needs to understand
how the system operates. And one of the things --
again, I go back to Tim because Tim gets tasked to do
a lot of things like this -- is, I guess, recently he
was asked to put together a simplification in layman's
language of how our system operates and how DOE's
system is operating.

He put it in a table and he shared that
table, and it explains things -- why humidity is
important rather than seepage -- in clear terms, okay?
And so it outlines the differences between the DOE
approach and the NRC approach. And we think we can
expand that, because, again, the staff members are the
ones who want this understanding, who need that
understanding to understand the importance of their
agreement, because maybe information that they're
gaining is important to someone else in performance

So the additional integration efforts,
okay. This was, I was hoping, a one-time deal, but by
popular demand I think it's going to have to occur
again. And, again, that's why I'm prepping Chris to
do it all next time, because it was so much fun.
That's the third bullet down here, repeat the
exercise. I think, in general, the staff felt that it
was a very valuable exercise; it got them thinking
about what things were important, what actions needed
to -- what analyses needed to be done. I think after
we've done some of these analyses, we might find
different insights, and we might design the exercise
differently. We might focus on ranking in terms
of risk only. But for the first round we were trying
to get at communication and integration. So this was
a helpful exercise to go through all of these

When I say more effective use of meetings
-- you know, every week we have the Yucca team
meeting, and it's on with the Center. Can we do some
of these as video when we're doing some of the
training? Can the PA staff come in and more
carefully, in layman's language, explain some of these
things in terms of the results of the sensitivity
analyses? So we're looking at how we can use the
various types of meetings we have to be a little more
effective at integrating all that information.
I list the integrated issue resolution
status report. This is where the review has been
integrated in terms of the model abstractions. I'm
not going to say too much about it right now.
Finally, the Center deliverable.

CHAIRMAN HORNBERGER: Before you leave
that, though, can you tell us the status of the
integrated IRSR?

MR. LESLIE: I can tell you the status this
way: The project manager that's involved in it is in
DOE space in Las Vegas right now conducting a
technical exchange, so I don't know what the status
is. They're reviewing -- I mean, there's a Center
deliverable that's the final product coming in fairly
soon, but there is some sensitivity about the release
of it. It's a snapshot from a while back, and the
longer we wait -- do we actually release it, because
it's old information? So there's still some discussion
in terms of that, and that's why I didn't want to
spend too much time talking about the integrated IRSR.

curious about the -- I wanted the answer you gave me.

MR. LESLIE: Okay. Latif has a question?

MEMBER GARRICK: Go ahead, Latif.

MR. HAMDAN: Yes, Bret, is this Center
deliverable going to be done based on what has been
completed so far, i.e., the agreements, or is it
going to be done afterward, to cover the understanding
of risk insights for the entire project?

MR. LESLIE: I listed it last, and I
should have addressed it last, because the next slide
will get at that issue exactly.

MR. HAMDAN: But can you tell me if it's
going to be done based on what --

MR. LESLIE: I will tell you what's going
to get done.
The final thing is we also decided we
needed to develop a communications plan, because,
obviously, with so many questions today, people don't
really understand what we've done. And we need to
have a very clear message on what we did and why we
did it, and we need to be able to explain that, both
internally and externally. What do I mean? I mean
DOE has characterized this as a risk ranking of the
agreements exercise. That's not what it was, okay?
And so there are a lot of misperceptions, and actually
we're planning on using this meeting. Obviously, I
think our message isn't very clear yet, but we're
hoping to use the questions and feedback from you
today to go in and help us develop that
communications plan. What did we stumble on today?

change the title to the "Everything the Staff Might
Think Important Insights Initiative."


MR. LARKINS: I was going to ask that
question: when you repeat the exercise this year, are
you thinking far enough along that you've got some
ideas, maybe, to do this with accepted risk importance
measures or metrics?

MR. LESLIE: Again, we could use the 16
vectors that DOE used. I mean, part of this is,
one, you've got to know whether you have all the risk
information before you actually start ranking. And I
would agree that we don't, because we have additional
analyses that we need to complete. You can't do that
until you know what the answer is; otherwise you're
fooling yourself.

You may think you know what the answer is,
but until you actually sit down and document the
thought process and say, "Okay, yeah, if I do this
event sequence, this really doesn't matter," you run
the danger of not capturing the complete system. So I
think we could design this differently, and we
probably will design it differently, but we haven't
gotten to that

MR. LARKINS: Well, my point was you
mentioned some metrics which were not safety-related
or risk-driven and that's what I was concerned about.

MR. LESLIE: Oh yeah. Tim?

MR. McCARTIN: I guess from the PA
perspective, I'll say the gauntlet has been thrown
down, in that you saw some of the -- I'm one of the
ones that thought it an extremely valuable exercise;
the discussions were great, the debate was great.
There are some areas where we need to be able to
quantitatively show the importance of these things, be
it in a barrier capability analysis or an effect on
the dose, etcetera. But for a number of the analyses,
I think in the next exercise, the next time this is
done, I would propose what Dr. Garrick suggested:
there should be some risk information provided up
front, so that we all start from a common ground.
And as Bret and Chris indicated, there
were areas where we need a little better understanding
and we need to work at that, but okay, here's the risk
information. Now you can do these other things and
you can bring in some other things, but here is the
risk information. I think that would be a helpful
aspect the next time we do it, because we saw areas
where there clearly were disagreements in the
discussions.
Of course, in PA we felt we had the right
answer, the right perspective on its importance, but
clearly the numbers and the analyses weren't there to
provide that.

MR. LARKINS: Yes, the other part of this
-- and I raised the question before about the
uncertainties driving risk -- is that depending on the
way you handle the uncertainties, they may be
masking things which are of risk importance or risk
significance that you need to also consider.

MR. McCARTIN: Absolutely. That's a
critical part of it. I think we're trying to think in
a very broad sense what kind of information can we
bring from a quantitative standpoint. Certainly the
treatment of uncertainty is a part of that.

MR. LESLIE: Let me add on to that. I
mean, we struggled with that a little bit, because
some of the agreements in the total system performance
assessment and integration KTI deal with things that
are not really quantitative, but they go to risk
dilution. And effectively integrating uncertainty
throughout the analysis -- that's a little tough to
say just exactly how important it is to dose. But
there were things like that where you can't always go
just directly to dose to address

MR. LEE: Just to follow up a point that
John was making.

MR. LARKINS: Microphone?

MR. LEE: Oh, sorry, just a follow-up to
John Larkins' point. To what extent did the long-lived
waste package, in your view, influence decisions about
what was important and what wasn't important? Because
it seems to me that with an analytical capability that
in many respects is not just site-specific but
design-specific, if I can refer to the waste package
design, it seems that would have a bearing on the
perception of what issues are important as opposed to
what may not be important or risk significant.

MR. McCARTIN: Well, generally, from the
PA standpoint, we also considered the capability of
those barriers, and if something like matrix diffusion
had the potential to cause a significant impact on any
eventual dose, then that would become important. The
capabilities -- I mean, that's the one thing you have
to be careful about: the capability of the barriers,
which is a fundamental part of Part 63, does not
require the waste package to fail early.

And so if you just have the PA calculation
at 10,000 years, it could be zero if there are no
failed containers. But you have to look deeper. You
have to look at what is the capability being
described. So in that sense, a long-lived waste
package is important, but if matrix diffusion can
delay things thousands of years, that also is

MR. LESLIE: Let me move on to Latif and
we'll get to the matrix diffusion thing in four more

One of the things we did, as Chris said,
is we met with the Office Risk Task Group, and they
said the number one thing that you need to do is to
document your insights. And as I think Chris said
early on, we asked the staff to do three things in
these meetings, and to prepare for the meeting
beforehand: What was your rating? What was your
reason? And what are your risk insights, and at what
level?
There were people who showed up chomping
at the bit to show their diagrams. We didn't have time
to do that. So the Risk Task Group -- we believe that
there are many examples we could pull out of DOE,
EPRI and NRC sensitivity studies where the
information, the risk insight, how important it is,
exists. We need to document that.

So the Risk Task Group said this is very
important. They were highly encouraging us to
document this. We have a deliverable from the Center
on the performance assessment that's due later this
year. It may be due in June -- I don't know, we'll
have to look at how much goes into it -- but the idea,
and we're still figuring out exactly what this
document will be, would be to go through each of those
agreements and say, here's the risk insight, here's
how important it is.
Or, we need the risk insight, and this is
the analysis that's going to be done; or, we needed an
analysis, and here are the results and here's how
important it is. So we want to be able to talk to
each of our agreements and be able to communicate
about them in terms of why they're important to risk,
if we can do that.
For some of those things that can't be
directly tied to risk -- such as trying to say that
the process of abstraction is consistent throughout
all the abstractions -- how do you actually say that's
important in terms of the dose number? We need to be
able to capture the essence of those descriptions of
why the agreements are important. So that deliverable
will document the specific risk insights and document
the reasoning for why people thought the staff more

Now this then becomes the document in
which, the next time around, we have the risk
insights. There's a common basis for comparison.
Those risk insights will be here.

VICE CHAIRMAN WYMER: Bret, you touched on
something -- I was going to leave the question until
the end, but since you mentioned it, I'll ask it now.
To what extent did you use performance assessments,
like the EPRI assessments, other than the TPA and
TSPA, which have interplayed so much?

MR. LESLIE: I can answer that kind of
facetiously, because I refused to rate any of the
agreements, so I didn't use any information, because I
didn't want to be biased. I don't actually know,
because we didn't get to that portion of the
discussion. I know that the staff members are aware
of the various agreements or various analyses, but I
don't know to what extent some of the analyses in the
most recent EPRI document could be brought to the
table, because the way they've abstracted things is
quite a bit different. And again, we've got to
remember that we've got to go back and say what is the
basis. What we're looking at is the DOE safety case.
So we're primarily focused on what DOE has done and
what we've done in terms of sensitivity, but if there
is other information --

VICE CHAIRMAN WYMER: It probably should
be used.

MR. LESLIE: Yes, the intent was and we
made that fairly explicit in our guidance.

MR. HAMDAN: Bret, back to the question.
If the synthesis is going to document the insights of
this early stage, it will be two things: number one,
it will be agreement-based, as opposed to the risk
insights the Committee is after for the entire
review process; and number two, you define this as
preliminary, and I'm just wondering if it's too soon
for the Center to do any of this at this time. If it
were up to me, you would want to wait until you repeat
the exercise and you look beyond the agreements. You
have a uniform set of criteria, and then they'll
document what they find.

MR. LESLIE: Well, that outlines my worst
nightmare, which is that we didn't learn anything.
Because unless we document what we did, we will repeat
this exercise using the same sort of thing. What we
need to do is to have a launching pad to launch the
next round of this exercise. We need to have those
risk insights so that everyone can understand them, so
everyone is on the same page. That's what did not
occur this time.

MR. HAMDAN: But with the understanding
that this is for the agreements, and that when you
finish the larger exercise you have to document again?

MR. LESLIE: Yeah, if we learn something
new. What about the 51 agreements where maybe we need
to come to closure? If we can document that we've
come to convergence on those in the next go around
that's real progress. We're all on the same page. We
have all of those risk insights. We've completed
those analyses. The answers might be quite a bit

MR. LARKINS: Let him finish his

MR. LESLIE: Thanks. The other thing that
we want to have in this deliverable is this common
understanding, and I pointed this out a little bit
earlier: in essence, Tim has come up with this
outline. It's a couple-page table that outlines the
major abstractions and explains them -- both what we
do in our performance assessment and what the
Department of Energy does. We want to expand that. A
lot of this information exists in other documents, but
what we want is a common document that all the staff
can use to get the same understanding. Again, in a
different way, in terms of describing how the system
works -- not agreement space, but how the system
works.

Again, we would document any areas where
analyses are still needed. We would probably try to
provide an example or two on how to use risk insights
in a review. We noted in our response to your
September 28th letter on issue resolution that this
document would also indicate how the NRC staff could
use these risk insights during future reviews, such as
when using the Yucca Mountain Review Plan. And again,
I'm going to get to that.
So if we can give an example of a multiple
barrier analysis calculation in this document, so that
the staff can see the thought process and what tools
can be used, I think that's going to be beneficial to
everyone.

So this is one of the things, and I would
lastly point out that this is going to be a joint
deliverable with the NRC staff. Chris and I will
work with Stefan and Roland to complete this, and
hopefully we won't have too much to do after this
meeting, so we can focus on completing it. I think
this is really important to do. Otherwise, we will
not have captured what we've done or be able to
communicate it very effectively.
These next two slides are kind of, how are
you going to use the information? And I think we've
shown already and talked about how we've used the
results of these risk insights in the issue resolution
process. We're focused on how DOE is doing its rebase

Are there areas in which they are not
going to be providing enough information -- you know,
they want to reduce the scope of work, and our
analysis suggests that that agreement, or the
information required by that agreement, is very
important? So we're looking for ways to better
integrate this information into the issue resolution
process. I said I would get back to that -- to the
evaluation of resource allocation. Again, I think
once we have all the risk insights associated with all
of these agreements in the areas of performance, then
you can start to answer these questions.

There's been some reservation about
ranking things, and I think this exercise goes to
pointing out one of the reasons why perhaps we waited.
We need some of those risk insights to be documented
in some of the areas where we haven't done it
formally. We need to have a very good justification
when we say this area -- this KTI -- is relatively
unimportant. We shouldn't be making decisions based
upon what's in someone's head. We should have on the
table the basis for that decision.

And how can we use this information -- the
results from the risk insights initiative, and again,
I'm talking about not just the results from the
meetings but the results from the Center deliverable,
where we've documented the sensitivity analyses
supporting a particular area? How can we use those
insights in the different areas -- emerging areas,
pre-closure?

You've heard from pre-closure a couple of
times, but I think there are ten subject areas and
we've dealt with three so far. Can we use the PCSA
tool at the Center to focus which areas need to be
addressed sooner, rather than later, in terms of risk

I think we spent the latter half of the
pre-closure meeting -- and Raj was very good; Raj and
Banad did a good job of saying, okay, from our limited
analyses using the PCSA tool, the pre-closure safety
assessment tool, we think that these three areas
should be at the top of the list and should be the
focus of the next technical exchange. So this is an
area where the analyses aren't as robust -- I
shouldn't say robust -- aren't as mature as they are
in the post-closure space.

How can we use this risk information in
the different program areas? And, one, this is a
nonquantitative slide, but in defining a risk informed

VICE CHAIRMAN WYMER: What are the units
on the bottom?

MR. LESLIE: Amount of risk information

And these are just different areas. We
need to define what is a risk-informed program. Say,
if you did everything at this end, at the right end,
you would be a risk-based program, relying only on
risk information, okay?
So this is only to indicate that we're in
the process of thinking about how we use the results
of this round of the risk insights initiative, and the
next round, in which we have all the insights, to
define how much risk information is used in various
activities. Should it all be equal? I'll toss this
one out to you guys to think about. What is your
vision of a risk-informed program? Is it the same
amount of risk information in each program

So anyway, the previous slide and this
slide are really kind of -- this is -- we need to
still address these things.
Let me take a water break here.

The next two slides try to roll up a whole
bunch of things. They try to roll up the results of
the risk insights discussions, the meetings we had,
the differences in approaches and thoughts between the
performance assessment and the KTI staff, differences
in terms of the DOE's TSPA and the NRC's TPA code
results, how we're reviewing and using our information
in the issue resolution process, how we would review
something in the Yucca Mountain review plan based upon
risk insights. And this was kind of -- I could see
the light bulbs going off when we got to this and this
happens to be in the unsaturated zone and saturated
zone environment KTI.

Tim came in and rated all the saturated
zone agreements more important. From a PA
perspective, our code suggests the saturated zone is
a more important barrier, okay? If you were to look
at the KTI, on the other hand, they rated all their
unsaturated zone agreements more important. Why is
that? DOE is taking more credit in the unsaturated
zone. So that's their focus, their agreements, and
it's like a factor of two or three to one. In other
words, there are two to three times the number of
agreements on the physical hydrology of the
unsaturated zone relative to the saturated zone.
So the KTI staff understand these
differences already.

MEMBER GARRICK: But again, isn't this
just simply a matter that you both used a different

MR. LESLIE: No. Here's the reason.

MEMBER GARRICK: I mean, if you used a
piece of your transport as the primary basis for
moving stuff after it's mobilized in some mysterious
way, you're going to get a different result than the
TPA.

MR. LESLIE: Right, and all you need to
look at in terms of -- and this is how Tim explained
it, and I'm sorry to rely on Tim, but it's a good
explanation. If you have the same physics and the
same equations to describe radionuclide transport in
the unsaturated zone and in the saturated zone in
DOE's performance assessment, one would just logically
think that the saturated zone is much more important.
You have matrix diffusion in both. What's the
difference?

The gradient is almost vertical in the
unsaturated zone and almost horizontal in the
saturated zone. Why does that matter? One would say
the residence time of water is much shorter in the
unsaturated zone, and therefore the ability of matrix
diffusion to be important in the unsaturated zone
should be less, not more. So from a performance
assessment perspective, they're asking: is DOE
abstracting it the same way?
That's how the differences between our
codes lead us to evaluate what DOE is doing. We're
still looking at what DOE is doing.
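The residence-time argument can be sketched numerically. This is an editorial illustration: the path lengths, velocities, and diffusivity below are invented for illustration, not values from the TPA or TSPA codes.

```python
import math

# Sketch of the residence-time argument: matrix diffusion has longer
# to act where water resides longer. All numbers are hypothetical.

def residence_time_yr(path_length_m, velocity_m_per_yr):
    """Travel time of water along a flow path."""
    return path_length_m / velocity_m_per_yr

def diffusion_penetration_m(diffusivity_m2_per_yr, time_yr):
    """Characteristic diffusion distance into the rock matrix, ~sqrt(D*t)."""
    return math.sqrt(diffusivity_m2_per_yr * time_yr)

D = 1e-3  # assumed effective matrix diffusivity, m^2/yr

# Unsaturated zone: near-vertical gradient, faster percolation (assumed).
t_uz = residence_time_yr(path_length_m=300.0, velocity_m_per_yr=10.0)
# Saturated zone: near-horizontal gradient, longer, slower path (assumed).
t_sz = residence_time_yr(path_length_m=5000.0, velocity_m_per_yr=5.0)

print(f"UZ: residence {t_uz:.0f} yr, "
      f"diffusion depth {diffusion_penetration_m(D, t_uz):.2f} m")
print(f"SZ: residence {t_sz:.0f} yr, "
      f"diffusion depth {diffusion_penetration_m(D, t_sz):.2f} m")
```

Under these assumed numbers the saturated-zone residence time is far longer, so matrix diffusion has more opportunity to retard transport there, which is the inconsistency the staff flagged in DOE's abstraction.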
Let me --

MEMBER GARRICK: I think there's much more
to be said about that. We won't say it here.

CHAIRMAN HORNBERGER: He just wanted you
to know that he's being restrained.

MR. LESLIE: You ought to see how much I
want to say on this slide.

I'm trying to restrain myself. These
differences, really, that came out in this discussion
were very important, because they told the PA folks,
like Tim and others, that they really need to ensure
that the consistency in abstractions between the
unsaturated and saturated zones is a focus.
Well, that's great. We have an agreement on that one
in PA space.

The KTI staff, on the other hand -- the
saturated/unsaturated flow under isothermal conditions
KTI -- need to continue to focus issue resolution on
the unsaturated zone agreements. This is again
because this is where DOE is taking the credit, and
you might say that the agreements are based upon the
credibility of how they're abstracting.

So, in effect, one of the agreements --
and I think, Dr. Garrick, you'll appreciate this -- is
the USFIC KTI said, DOE, provide us the sensitivity
analysis that shows that the unsaturated zone is more
important than the saturated zone, because we don't
understand this very well; why is it? And indeed,
they've supplied that information. We have closed
that agreement, and indeed, DOE is sticking to
their guns. The unsaturated zone is more

Now, is that the final word on why it's
important? In effect, in our letter -- basically, let
me see if I've got it. We reviewed the various things.
We were looking at the sensitivity analysis on matrix
diffusion. We're still concerned about the
abstraction, and this issue is going to be a point of
discussion with the Department of Energy. We sent a
letter out on February 8th that went into this in
terms of agreement 6.01 in the unsaturated zone,
saying this issue isn't closed, because we can't
believe that the same physics gives you the wrong
answer, in essence.

If you're using a consistent abstraction,
you know, either your path length is extremely long in
the unsaturated zone even though it should be
vertical. So there are some aspects of that we need
clarification and the DOE and the NRC staff are
working on that clarification, particularly in
agreement 3.29 in the TSPAI KTI.
So our review of DOE's work on those
differences is documented in two places. When we get
something in from the Department of Energy in terms of
issue resolution they submit us a document that says
this information is the information you requested in
agreement such and such. We review that information
and reply to the Department of Energy.

That reply, our review is in the letter we
send. It's a public letter and we sent this back on
February 8th, back to the Department of Energy saying
okay, yeah, we see your sensitivity analysis. We need
additional discussion of this. We'll take care of
that additional discussion when we review what you're
going to provide for agreement 3.29, in which you will
show consistent abstractions and demonstrate that the
physics and the equations are the same.

So that's kind of -- and the other place
it's documented is in the issue, integrated issue
resolution status report. Again, it's applying what
the Yucca Mountain review plan acceptance criteria
are. The focus is did they do it properly? They've
integrated the information. The other thing is that
the insights that support these different
characterizations of the unsaturated and saturated
zone, the difference between the TSPA code and the TPA
code will also be captured in the Center deliverable.
So we'll have another basis for why are
the USFIC agreements and USFIC KTI focused on the
unsaturated zone and there is a reason. It's because
DOE has taken a lot of credit for it. Or more credit
than in the saturated zone.

Okay, that takes care of the issue
resolution, differences between performance assessment
and KTI staff and how we're interacting with DOE.
ACNW has asked this question several
times. How would the different abstractions be
reviewed using the draft Yucca Mountain review plan?
The focus of our review would be on how DOE describes
the capability of the barrier. At the beginning of
every model abstractions section in the Yucca Mountain
review plan, which is NUREG-1804, Revision 2, there's
a paragraph that says do it based upon how important it
is.

For instance, to review this model
abstraction and this one happens to be the flow paths
in the unsaturated zone. To review the model
abstraction, evaluate the adequacy relative -- review
the adequacy of the Department of Energy's potential
license application, relative to the degree to which
the Department of Energy relies on flow paths in the
unsaturated zone to demonstrate its safety case.
Review this abstraction considering the risk
information evaluated in the multiple barrier section.
So before they even review that
abstraction, they have to go to the multiple barriers
and the description of the capability. So this is
where we send our staff. Before you look at this in
terms of the overall performance assessment. How
important is this barrier?

And again, we're going to rely on what
DOE's safety case is. What is it that they're saying.
That's how we're going to be focusing. That's their

So for example, and I go on, for example,
if the U.S. Department of Energy relies on flow paths
in the unsaturated zone to provide significant delay
and/or dilution in the transport of radionuclides,
then perform a detailed review. If, on the other
hand, DOE demonstrates this abstraction has a minor
impact, conduct a simplified review and there's more
language to that effect.
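The graded-review logic in that YMRP paragraph can be sketched as a tiny decision rule. This is a hypothetical stand-in: the review plan states the criterion qualitatively, and "significant reliance" is reduced here to a boolean purely for illustration.

```python
# Sketch of the graded-review logic quoted above from the draft Yucca
# Mountain review plan. "Significant reliance" is a boolean stand-in;
# the YMRP itself states the criterion qualitatively.

def review_depth(doe_relies_significantly: bool) -> str:
    """Choose review depth from how much DOE's safety case relies
    on the barrier (here, unsaturated zone flow paths)."""
    if doe_relies_significantly:
        return "detailed review"
    return "simplified review"

print(review_depth(True))   # significant delay/dilution claimed
print(review_depth(False))  # minor impact demonstrated
```

The design point is simply that the depth of review is keyed to the applicant's own safety case, not fixed in advance.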

So in this instance, if the Department of
Energy came in with a license application and if they
kept the same type of performance, we would be
focusing our hydrologists on the unsaturated zone
rather than the saturated zone because this is what
DOE has claimed it relies on.

Now in terms of the issue resolution
space, this is why we have so many agreements in the
unsaturated zone. We want to make sure that the
information supports their abstraction. So I hope the
last two slides tied a lot of loose ends together, and
I'm going to move on to the conclusion.
After today's briefing, I don't know if
our briefing was a successful communication exercise,
but I think the meetings we held and the Center
deliverable that we're going to provide will result in
a successful communication and integration exercise.
The results from these meetings that we
held are being used real time. The rating results
again cannot be used directly to determine the most-
important-to-risk agreements across all the KTIs
because the KTIs use different factors. We didn't
constrain them. We would have gotten a different
answer if we said just use risk to reach your
agreements. We did not do that. We did not want to
do that. I didn't want to do that.

We've clearly defined areas where more
work needs to be done and I think this needs to be
repeated. And with that, I'm going to stop and get a
breath of air and some water, and Chris and I will be
happy to entertain questions, additional questions.

MEMBER GARRICK: Well, there's no doubt
that this is an interesting exercise you've gone
through. I'm curious about a couple of things.
Before there was probabilistic performance assessment,
there was performance assessment. If you look at some
of the early models, they weren't very probabilistic
in some cases and the early models of the WIPP PA, for
example, were completely nonprobabilistic.
What I'm curious about is if these
assessments were nonprobabilistic, do you think your
exercise would have resulted in anything particularly
different?

MR. LESLIE: Yeah, actually. I mean --
I'll give one example. There are several agreements
that deal with the probability issue. I mean how do
you evaluate a probability agreement
nonprobabilistically? How important are things like
igneous activity unless you do it in terms of -- we'd
get a very different answer, I think.

MEMBER GARRICK: You know, many years ago
when the NRC was challenged about their lack of
aggressiveness with respect to the use of PRA in
reactors, there was a very good talk given by a PRA
senior person that said the truth is we've considered
risk from the very beginning. It's just that we
called it safety.

And I guess what I'm trying to get at here
is has the effort to make the performance assessments
probabilistic, have they really contributed anything?

MR. LESLIE: I'll let Tim answer that.

MR. McCARTIN: Your question, if we didn't
include probability, would it be -- I mean there are
a couple of things. Rock fall and faulting.
Faulting -- part of the reason is not that movement
from some undetected faults wouldn't damage
containers; rather, the probability of that occurring
is low enough, say a 10^-4 probability, that you're
not going to damage that many containers. If you
damage a few containers, you weigh it by that
probability, and from the PA ranking of things it
became a very small issue.
Rock fall is some of the larger seismic
events that have a possibility of damaging containers,
but the probability is very small. So once again,
yeah, you might damage a few containers with large
seismic events. Probability is low and that effect is
very small. So some of the -- at least the PA ratings
on those were small because of the probability aspect.
I don't know if that gets --

CHAIRMAN HORNBERGER: Well, I don't know.
I'm probably totally wrong, but to me your response
didn't seem in sync with John's question.

MR. McCARTIN: Oh, okay.

CHAIRMAN HORNBERGER: You're talking about
-- I'll ask John a question to
see, because I'm probably wrong. Can't you have, for
example, a seismic probability curve and still do a
deterministic analysis on that basis? In other words,
is defining the probability of an igneous event the
issue in terms of having a probabilistic model?

MEMBER GARRICK: Yeah, that's -- I'm sure
that if you were doing it nonprobabilistically and you
came to your question about rocks, you would probably
screen it out on the basis that it was not a high-
likelihood event and proceed, but the analysis would
never reveal the fact that likelihood considerations
were necessarily a factor in that and I guess what I'm
really trying to get at, how much of what we're doing
here is really different. We're using the word a
great deal, but sometimes we're using it synonymous
with work that we've done for and it's like the
discussion that John Larkins brought up regarding

It is not always obvious how
uncertainty enters into the evaluation process. We
know that a decision maker, given two options each
having approximately the same value from a central
tendency standpoint, if that's all the decision maker
had to go on, would flip a coin and proceed. But if
the decision maker had two probability distributions
that showed the uncertainty of one was very much
greater than the uncertainty of the other, then the
decision maker would be a fool to flip a coin and
proceed. So there's a lot of underlying things here
that we're wrestling with to try to understand just --
what the role of the probabilistic component is and
that, of course, has a major impact on the real
relevance of it from a risk standpoint and so these
are just questions. I'm looking at your figure.
MR. McCARTIN: If I could -- just one part
along those lines and I don't know if this is getting
at it, but I do know Dick Coddell presented some
information, the sensitivity analyses last month and
one of the things we are trying to get a better
sense of is a very interesting aspect of the
probabilistic calculation, where we varied
a number of parameters.

A very small percentage of the
realizations actually contribute the vast majority to
that mean dose and we're trying to look at ways to
better understand well, what's driving that 10 percent
of the realizations that are resulting in 90 percent,
let's say of the dose. And I think you're right.
There's a lot of information there. Have we mined it
at all? Absolutely not, and we are looking to
understand that, but that is one aspect that we're
aware of. Exactly what it's telling us, I don't think
we know yet.
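The pattern described here, where a small share of realizations carries most of the mean dose, can be illustrated with a toy calculation. The dose values below are invented purely for illustration; they are not TPA output.

```python
# Sketch: how a few high-dose realizations can dominate the mean dose.
# The dose values are invented for illustration only.

def top_share(doses, frac):
    """Fraction of the total (and hence mean) dose contributed by the
    top `frac` of realizations, ranked by dose."""
    ranked = sorted(doses, reverse=True)
    k = max(1, int(round(frac * len(ranked))))
    return sum(ranked[:k]) / sum(ranked)

# 10 hypothetical realizations: one dominates, as in a skewed PA output.
doses = [0.01] * 9 + [0.91]     # arbitrary units
share = top_share(doses, 0.10)  # share carried by the top 10%
print(f"top 10% of realizations -> {share:.0%} of the mean dose")
```

Sorting the realizations and looking at the cumulative share is one simple way to quantify the "10 percent of realizations, 90 percent of the dose" observation.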

MEMBER GARRICK: Yeah, okay, well, let me
turn to my colleagues here.

VICE CHAIRMAN WYMER: I just have a few
sort of gratuitous comments.

MR. LESLIE: Where's the chemistry?

VICE CHAIRMAN WYMER: I appreciated your
not -- a while back.
I don't know whose idea it was to come up with this
communication integration exercise, but it seems from
the sound of it it was a very good idea. It was --

MR. LESLIE: Thank you.

VICE CHAIRMAN WYMER: I'm surprised it
hadn't been done earlier, before you got into the act
there, Bret, but it certainly sounds like it was
worthwhile.

The other is I'm looking forward to the
meeting where you might present to us the results of
your second meeting, where we actually do get some
specific risk insights rather than how you're going
about getting them.
That's it.

MR. LESLIE: I think in some ways you have
gotten this before in different presentations and I'll
give the example. I think the last time Bret briefed
you, he kind of had agreements and said this is the
importance in terms of uncertainty. I think we can do
that better.

VICE CHAIRMAN WYMER: Yes, that's what I
was thinking. This is much more structured to get at



CHAIRMAN HORNBERGER: Yeah, I'd also like
to say, as you know, I think for years the ACNW has
been concerned about integration across the KTIs and
whether the performance assessment really was tied in
with the KTIs, and given that communication and
integration were your primary aims, you have to be
commended for doing this. I can believe that
everybody thought it was a great idea, because I think
it does need to get done.
As you can probably tell, I'm a little
leery about even showing some of the bars that you
showed because you don't have any common metric that
people were using and so if I give something a 5
because I like the beat and John gives it a 5 because
he can dance to it, it doesn't mean --

MEMBER GARRICK: Have you ever seen me
dance?
CHAIRMAN HORNBERGER: -- mean, you know?
And so I'd even be a little careful. Whatever use
they were to you internally, I think that they might
be of limited use to communicate with people like us
anyway.

I think what we've been trying to
grapple with, again, as you can tell, is that when we
see this as a risk insights initiative, we would have
been quite interested, I think, to see if you could
have deconvolved those bar charts, and I don't want to
suggest that some of the other nonrisk issues aren't
important. They clearly are important to have
documentation of the multiple barriers. It's one of
the requirements and that may or may not have a risk
component and there may be other things that simply
have to get done.

Nevertheless, what we've been looking for
for a long time out of the KTIs is to have somebody
come in and tell us here are the things that we really
think contribute to the risk. Here are the dominant
contributors, and it would have been nice if you could
have deconvolved those bars to tell us where the
staff is at this point in time. I'm not trying to have
you set your feet in concrete and say okay, this is
where you are for evermore, but just to have the
snapshot at a point in time as to what your risk
insights were, having gone through this exercise.

MR. LESLIE: That's fair. I think if you
go to slides 32 through 38, I think we've captured
them there. They might say it differently. They
might expand upon it. They might show sensitivity
results, but I think again, if you go back to getting
up at the high, both the KTI and the PA folks from a
performance assessment perspective will be at a
parameter or so on and so forth. You could probably
go back to those agreements and start to pull out
which specific parameter or which model uncertainty is
most important.

MEMBER GARRICK: Yeah, and I would --


MR. McCARTIN: I'll say one benefit of
having the ranking and letting people rank it any way
they want is that, at least when you get in the room,
if I ranked it the lowest and you've ranked it the
highest, it's a good vehicle for getting people to
start talking. However, I think what you're
suggesting, to deconvolve it, is mandatory, and
possibly for the next round we could ask people to
rate it based on risk, dose, exclusively. Then you
could have a second rating based on other factors, be
it licensing risk or contention in licensing -- what
you think the importance of this is. It might be
interesting, but I think you're right: force everyone
to base it just on this number, and then you could
have the second one that gives importance in a more
global sense and --
CHAIRMAN HORNBERGER: You can clearly have
several different axes along which you ask people to
make measurements and that saves you from having to
deconvolve it afterwards.
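The idea of rating along several explicit axes, rather than producing one composite score that later has to be deconvolved, could be sketched like this. The agreement names and scores are hypothetical, not actual KTI ratings.

```python
# Sketch: collect separate ratings per axis rather than one composite
# score, so no after-the-fact deconvolution is needed.
# Agreement names and scores are hypothetical.

ratings = {
    "agreement A": {"risk": 5, "licensing": 2},
    "agreement B": {"risk": 1, "licensing": 5},
    "agreement C": {"risk": 4, "licensing": 4},
}

def rank_by(axis):
    """Rank agreements on a single, explicitly named axis."""
    return sorted(ratings, key=lambda a: ratings[a][axis], reverse=True)

print("by risk:     ", rank_by("risk"))
print("by licensing:", rank_by("licensing"))
```

Keeping the axes separate means a high composite score can never be mistaken for a high risk score, which is the ambiguity being discussed.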

MR. LESLIE: Right.

CHAIRMAN HORNBERGER: So there are many
ways to do a survey. By the way, I don't dispute the
fact that for this go-around, you got great value out
of doing it just the way you did.
that having done it and having gotten a lot out of it,
it doesn't follow that you have to present that to us
in the form of summary bar charts.

MEMBER GARRICK: Yeah, sometimes summary
bar charts show a level of understanding that is way
beyond what is intended for you to show.

CHAIRMAN HORNBERGER: You should take
that as a compliment, write it down and think about
it.

MR. LESLIE: I can read the transcript.

MEMBER GARRICK: I think we're all in
agreement that this has been a very interesting
exercise. I think you're also getting the message
that in the final analysis we'd like to put the
template of the real risk contributors on top of that
and see just exactly what the differences are and be
able to develop additional insight as to whether or
not we're spending an appropriate amount of money and
time on the most important issues.

CHAIRMAN HORNBERGER: I have one, sort of
short question and then just one other comment that I
wanted to make.
The short question is you had mentioned in
your example, and you probably did this inadvertently,
so I'm being unfair to you, but that's never held me
back before.
You said that oh, maybe you could have the
Center use the PCSA to rank the things that are most
important. What comes out of the PCSA that allows you
to rank things?

MR. LESLIE: I'm probably the wrong person
to answer that question. I'm going to look for
somebody from preclosure, but basically you've had the
briefing. I don't see anyone from preclosure here.
Banad, could you help to answer that
question? I could attempt to answer it and I probably
would --

MR. JUGANATH: Could you please repeat the
question?
CHAIRMAN HORNBERGER: The comment was made
that you might use the PCSA to rank the things that
are most important to look at in the preclosure area.
And I'm just wondering what measure you
would be doing the ranking on?

MR. JUGANATH: The measure, I think, would
be safety, as in compliance with the performance
requirements for dose. To comply with the dose, what
are the components that need to be functional? The
criteria, the so-called safety items and the Q-list
items in the other program. We find, using the PCSA,
that they are the ones that are important. They need
to be functional, and they contribute to risk if
they're not performing. Basically, to comply with the
dose.

MR. LESLIE: And so you can look at the
dominant event sequences and you can see what are the
-- what areas are contributing.

MR. BENKE: This is Roland Benke at the
Center. I could maybe add something.

MR. LESLIE: Go ahead, Roland.

MR. BENKE: Although I had a hard time
hearing Banad, so I might be just duplicating some of
his response, I apologize for that, but in general, we
can do our own independent analyses using the PCSA
tool, and so we can come up with event sequence
probabilities and calculate consequences from those,
and our resulting dose numbers can be compared to the
requirements of the rule. Because, as you know, the
requirements of the rule put things in Category 1 and
Category 2, we can look at the percentage of the limit
that's being approached -- let's say, as one idea that
was thrown out in the discussions, where we could
perhaps use some sort of a metric between Category 1
and Category 2.
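The fraction-of-the-limit metric mentioned here could look something like this in outline. The event sequence names, doses, and limits are placeholders for illustration, not values from the PCSA tool or the rule.

```python
# Sketch: rank preclosure event sequences by the fraction of the
# applicable dose limit their calculated consequence approaches.
# All names and numbers are placeholders, not PCSA results.

LIMITS_REM = {1: 0.005, 2: 5.0}  # hypothetical Category 1 / 2 limits

sequences = [
    # (name, category, calculated dose in rem)
    ("drop during transfer", 2, 0.50),
    ("ventilation upset",    1, 0.001),
    ("handling spill",       2, 2.5),
]

def fraction_of_limit(seq):
    """How close this sequence's dose comes to its category's limit."""
    name, category, dose = seq
    return dose / LIMITS_REM[category]

for seq in sorted(sequences, key=fraction_of_limit, reverse=True):
    print(f"{seq[0]:22s} {fraction_of_limit(seq):.0%} of limit")
```

Normalizing by the category limit puts Category 1 and Category 2 sequences on one common scale, which is the point of the metric being floated.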

MR. LESLIE: Thanks, Roland.

MR. BENKE: Sure.

CHAIRMAN HORNBERGER: My other comment is
for you, and maybe you can turn it around into a
question if you're so inclined, Bret.
You mentioned, you gave this nice example
of the unsaturated zone versus the saturated zone
between -- the differences that you have with the
Department's analysis. At some level and again,
looking at the YMRP where you illustrated the
paragraph that's repeated basically many times in the
document that says well if the Department of Energy
doesn't have this as a critical part of their
analysis, then we can do a simplified analysis.
At first blush that sounds good, but then
I could turn it around and say suppose the staff were
absolutely convinced that the most important thing for
Yucca Mountain were the saturated zone and the
Department of Energy didn't want to take that into
consideration. I think that as a Member of the ACNW
I could make an argument that says the NRC should be
in the business of evaluating safety and that just
because the Department of Energy doesn't take that
into account that the NRC should not necessarily just
sweep that under the rug. As I say, that's a comment.
If you want to make it a question, there is an implied
question there.

MR. LESLIE: I'll take it as a comment.
If you want me to take it as a question,
that's fine as well. We don't need to. We'll just
take it as a comment for now.

MEMBER GARRICK: Any other questions from

MS. DEERING: Bret, there's one thing that
still lingers on in my mind as a little hazy, and maybe
this becomes part of your question about a
communications plan, if there's something you need to
do to clarify, because it's probably that I'm just not
getting it. But the focus of the whole exercise sounds
like it's been on agreements space and issue
resolution space.

A lot of benefits reaped are being
recycled back into that loop. When you go to talk
about the Yucca Mountain review plan and license
application review space, it gets less clear to me how
you take the benefits you're reaping here and apply
them in that context. So I'm looking at your first
slide, slide 4, why we are doing it, and nowhere is it
mentioned that it could benefit a licensing review
process.
These risk insights could become valuable
in a licensing review process. And then back on slide
29, same thing. You gave the example of the YMRP and
how you bring these into context. I just don't see it
real crystal clear, so I guess I would just leave that
as a comment more than something you have to answer
now for communicating how that works.

MR. LESLIE: Yeah, actually, I'll take
that one as a question.

The performance assessment that we're
reviewing right now in issue resolution space will be
substantially the same that we might see in a
potential license application. The model abstractions
aren't going to change between what we're seeing
substantially. So the information we're gaining in
terms of where the Department of Energy is focusing
their effort is likely to be the same information that
we'll see in a potential license application if the
site recommendation and everything else goes forward.


MR. HAMDAN: Yeah, I just want to finish
my third point on the Center deliverable, and I just
want to make the point, as has been mentioned by all
the Members and Lynn: I think you've got your money's
worth out of this exercise, without any question, and
I think it's insightful and useful. But as has been
mentioned, the metrics used in these charts were not
consistent. You called it a preliminary effort. You
called it an exercise. You and Tim said we'll need to
repeat this, and I think you will. And most important
of all, it's agreement-based as opposed to real risk,
the words that Ray used, which is what's needed for the
review plan.

So unless you want to do two deliverables,
one for the agreements and one for the real risks, it
may be premature to jump the gun and prepare a
deliverable now that you may live to regret later,
because you will change your mind as your answers
change.

MR. LESLIE: Okay. Useful comment.

MEMBER GARRICK: I think that adds to the
points that we've been trying to make.
Yes, Mike?

MR. LEE: I'll jump on the bandwagon and
say kudos for a very interesting analysis and I'm sure
over the next months the staff is going to spend a lot
of time interpreting the results.
The one question I had though is that you
mentioned that the staff, what I'll call the subject
matter experts or the KTI leads and support group or
the support staff, have been involved in the
development of the TPA code and part of your analysis
reveals some differences in interpretation of the
significance of TPA results between, I guess, what I'd
call the PA practitioners and the subject matter
experts.

Is one of the lessons from this exercise
the -- I guess the need to have the subject matter
experts or the KTI leads more involved on a regular
basis in actually doing the PA
work? I'm not saying that you have to start
conducting classes and doing numerical integration and
learning Simpson's Rule and things like that, but it
seems to me that there's an opportunity here to -- I'm
not going to say brainwash the staff, but it seems
there's an opportunity here to --

I can call it "train", but somehow it
seems that the staff has proceeded on an
implementation process and then some of these insights
have led to I guess -- I'm not going to say revelation
but some kind of reaction that they're not agreeing or
understanding or complying or whatever the right word
is. I see Tim's hand.

MR. McCARTIN: Yeah, the issue is one and
I'll say from a PA standpoint there has always been a
lot of involvement in the development of the code.
Once we have the code, I guess you could say that's
possibly where the integration process broke down in
that we get the code and as PA staff we've got our new
toy and we start using it and developing -- we take
months and months of running new code, looking at the
results, trying to understand what it means and that
analysis time is very extensive.
That's where a huge amount of the
learning occurs. We have not gone back and sat down
in a detailed fashion and gone through the -- gee, we
looked at these results and this is what it's saying
to us -- and gotten the KTIs to say: do you agree that
we see this, and it seems to imply that, whatever,
faulting doesn't seem to have a big impact.

That dialogue in terms of interpreting the
results which is the really hard part of PA and
explaining it and understanding it, we haven't done as
much integration and I think that's what Bret and
Chris were trying to get at. This exercise brought
that up and I think is fostering it. We need to do
more of that. What are the results really saying?
And I think that's --

MR. LEE: I'm not just -- I know there's
a lot of -- the staff has a very full plate and
perhaps this is one area that there needs to be some
discretion introduced into that plate. Maybe a
dessert plate or something like that, a new entrée, to
give the KTI leads more of an opportunity to get
involved in that type of work.

MR. LESLIE: Yeah, I maybe didn't address
this very well and I'm seeing that now in some of
these questions. When we found areas where we need
additional analyses, my goal wasn't in this meeting to
say oh yeah, this is -- we need this analysis. I
picked on the guys who are arguing. I said okay,
you've got the disagreement, you come to the table.
You guys sit down, do the calculation either on-line
or develop the abstraction, from the PI and the KTI.
That's how they're going to come to agreement.
Now, we captured all of those different
areas of analyses. We think they're very important to
do. One of the things that we need to do is to
finalize those action plans, determine how many
analyses we need to do that we don't already have in
the Center's operations plan, and if we need to change
things, we'll change the things. But the idea is to
use that discussion process to figure out who the
players are.

MR. LEE: As a follow-on then, and this is
just a question or a comment and you can take it as
just however you'd like to take it, but in future code
development and I know that the time is running short,
perhaps there's a need for more kind of unified
documentation of the assumptions leading into the code
instead of writing just user's manuals.

MR. LESLIE: Right.

MR. LEE: That would -- certainly that
type of exercise, I think from previous experience,
helped to get a more unified view of the --

MR. LESLIE: I believe in the user's
manual those assumptions are explicitly stated.
That's what Jim and Tim are going to say.
The other thing that I want to pull away
from this is that not all these analyses are a new
abstraction in the performance assessment code. Raj
jumped up and said, well, look, we can look at rock
fall from an event sequence. This event happens and it
has this probability and this is its impact and you
know, and go through it.

The analyses don't necessarily have to be
a new TPA analysis. They need to document the thought
process and can be an off-line calculation to look at
what the impact is. The mass balance that I did for
fluoride is an example of one where do we need a new
abstraction for fluoride? I would argue probably not.
But I can do a simplified calculation off-line to
justify why we don't need to do that. And we need to
do that with the subject matter experts and the PA
experts together.

MEMBER GARRICK: Okay, any other?

MR. FIRTH: James Firth, NRC staff. I
guess I wanted to clarify a couple of things. During
our TPA 3.2 analyses, we did involve all of the KTIs
in terms of doing analyses. The TPA 4.0 analyses that
you were briefed on during your last meeting was a
little bit more of a reduced effort in terms of who
was actively involved in that activity.
That's something we're intending to move
away from between now and the license application in
terms of preparing for a review, where we're doing
some of the planning now in terms of getting more
people involved, similar to what we were able to do
with the TPA 3.2 analyses.
One reason why it was a little bit more
reduced in this past year is because we were having to
spend so much effort in terms of the discussions with
DOE in terms of issue resolution. So we had to see
what we could accomplish.

MEMBER GARRICK: Yes, but I think the
point is a very important one and it's been made by
several that there's nothing like getting people
together to address these issues and we ought to be
very focused on how to sustain the energy and
enthusiasm that you've created with this exercise for
the future effort.
Are there any more questions? John?

MR. LARKINS: Just one, real quick. As
you move forward with this, either on the agreements or
risk informing the KTIs, are you planning on engaging
DOE on these or is DOE doing something comparable that
they're going to --

MR. LESLIE: Well, yes and yes. I mean
DOE did its rebase-lining based upon 16 factors, and
risk might have been only one of those, too.
What we tried to do and especially for
those that ended up being important and important in
terms of risk, again, remember, the PA folks were
using risk. They're a subset of those values. So
those high importance composite values do reflect
risk.

So yes, the KTI staff are looking at in
terms of issue resolution those that DOE said wow, we
can -- our program has changed and we're doing a
different approach and maybe this information isn't as
important any more. Well, our staff used this
exercise to say, well, wait a second, let's talk to DOE
so that they really clearly understand why this
information is important and why we need that
information.

I don't know what the outcome of that is
because that technical exchange is on a telecon right
now. It's been happening since yesterday evening and
today. So perhaps we'll get some insights from the
result of that meeting.
MEMBER GARRICK: All right. I think we've
had a very good session. I think I'll -- John, did
you have another comment you wanted to make? I think
I'll turn it back over to the Chairman who will
hopefully declare a recess.


CHAIRMAN HORNBERGER: We are going to move
on to a new topic on unlikely events and John is going
to tell us how unlikely it is that Tim will finish on
time.

MR. McCARTIN: Today, I'd like to -- as
the Committee knows, the public comment period for the
proposed amendment for unlikely events closed April
10th. We did get three commenters right at the end.
And what I'll do today, I'd like to at least go
through a quick summary of the proposed rule just to
sort of refresh our memory. I will go relatively fast
through that.

Please stop me if you want me to explain
anything a little further. I'll spend a little more
time on the summary of the comments, and be aware I got
these comments Monday morning, so it's not like we've
pored over them in any great detail. They weren't
lengthy, but I think I can give a good depiction of
them and give you a schedule for finalizing the
amendment, which would bear upon the schedule the
Committee might have for providing guidance.

Briefly, going to the regulatory
background, as you know when the EPA published their
standard they had the three requirements for
individual protection, groundwater protection and
human intrusion. Very unlikely events are excluded
from all of those.

However, for the two analyses for human
intrusion and groundwater protection -- those very
specific analyses -- unlikely events were also to be
excluded. Those analyses: human intrusion really is
looking at the robustness of the repository to an
intrusion event; groundwater protection is looking at
the degradation of the groundwater resource. They're
two very specialized calculations. They did not
specify what unlikely was in probability space. They
left that to NRC.

Continuing in the standard, for those two
analyses the exclusion of unlikely events, the way we
have read the preamble, suggests you're trying to look
at what the expected or likely behavior of the
repository is. With that information we put forward
our amendment. In considering what unlikely means, we
look at three broad categories: very unlikely,
unlikely and likely. In the rule, very unlikely is
already defined quantitatively as a 10-8 probability
per year.

We felt that, in looking at where unlikely
would be, it would be easier to set what likely is:
if you have very unlikely and you have likely, you
have thereby defined the space for unlikely. For the
lower bound of where likely would occur, we felt it
was somewhere between 10-6 and 10-4 per year,
remembering that these are annual probabilities
accumulating over a 10,000-year period; the numbers
would read very differently for a 10-year period.

Quantitatively, 10-6 per year is a 1
percent chance of occurring within 10,000 years. We
think a 1 percent chance of occurring is neither
expected nor likely. The 10-5 per year is a 10
percent chance of occurring. 10-4 per year is very
likely; there's a very high probability of occurrence
for something like that. Not surprisingly, we ended
up picking 10-5 per year as the lower bound for
likely events. And so, in the proposed amendment, we
have unlikely characterized by a range, that range
being between 10-8 and 10-5 per year, in terms of the
span a .01 percent chance up to a 10 percent chance
of occurring within 10,000 years.
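The percent figures quoted above follow from compounding a constant annual probability over the 10,000-year period. A minimal sketch, assuming independent years (the compounding assumption is ours, not stated in the transcript):

```python
# Hypothetical sketch of the arithmetic above: the chance of at least
# one occurrence over the 10,000-year period, assuming a constant
# annual probability p and independent years: P = 1 - (1 - p)**N.
N = 10_000  # years in the compliance period

def cumulative_chance(annual_p, years=N):
    """Probability of at least one occurrence in `years` years."""
    return 1.0 - (1.0 - annual_p) ** years

for p in (1e-8, 1e-6, 1e-5, 1e-4):
    print(f"{p:.0e}/yr -> {100 * cumulative_chance(p):.2f}% in {N:,} years")
```

Run as-is, this reproduces the figures quoted: roughly .01 percent for 10-8, 1 percent for 10-6, about 10 percent for 10-5, and a very high (roughly 63 percent) chance for 10-4 per year.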

That's what we have gone out for comment
on. As I indicated, the public comment period closed
April 10th. At the very end, we received three
comments. The order you see here is the order in
which they were docketed at the Commission by the
Secretary. The Environmental Protection Agency
offered comments, the State of Nevada and the Nuclear
Energy Institute. And I'll briefly go through the
three sets of comments.

CHAIRMAN HORNBERGER: Tim, just to help
educate me, when a federal agency like the US EPA
makes a public comment, how does that occur? Staff
prepares it and it filters its way up and the
Administrator signs it? Is it an official conveyance
that way?

MR. McCARTIN: An agency can elect to send
it from whatever level they want.


MR. McCARTIN: The Office Director, I
believe, I'd have to look at the comment and I can
tell you that -- was it -- okay it was Morsonowsky,
the Division Director for the Office of Radiation
Protection Division.

MEMBER GARRICK: So that has the blessing
of the whole Agency?

MR. McCARTIN: No. Well, I would have to
get back to EPA on that. It is coming under the
signature of someone at John Greeves' level. He's a
Division Director.

anything like that? Would John Greeves ever send out
a comment that would be labeled as from the U.S.
Nuclear Regulatory Commission?

MR. McCARTIN: He certainly could.


MR. McCARTIN: There's nothing to preclude

CHAIRMAN HORNBERGER: And that would be
interpreted as having the blessing of the NRC?
MR. McCARTIN: Well, no. I think you --
from NRC perspective, the higher up you go, the higher
the buy in.

MEMBER GARRICK: The more the blessing,

MR. McCARTIN: Yes, the more the blessing.
I'd have to look. I know when we commented on the EPA
standard in groundwater protection, in particular, I
believe the comments went from the EDO. But I think
each agency, they can determine -- it's sort of an
agency prerogative and I'm trying to think if on part
63 and I can get back to the Committee on this, as I
recall, the Office Director sent in the comments. It
was Steve Paige at the time, I believe, and I could
ask -- I'll get back to you. I'll ask Ray Clark from
the EPA in terms of how much review is afforded.

MR. LEE: I think when the NRC commented
on DOE's Part 963, my recollection is Marty Virgilio
may have signed out the comments, so I think the short
answer is it just depends on what organization, agency
wants to do.
But certainly things were wired before the
comments went out.

MR. HAMDAN: If the EPA, in the standard,
40 CFR 197, left the value for the unlikely event to
NRC, why in the world would the EPA come back and
comment?

MR. McCARTIN: Well, as a federal agency
they are allowed to comment on proposals by the NRC.
They had an opinion they wanted to express to us and
provide comment. It's a free country.

With one exception, I guess I'll say for
the Committee's benefit: when the Committee comments,
those comments don't get docketed as public comments,
because the Committee is an advisor to the
Commission. There are some instances where comments
received do get docketed, but not as part of the
public comment. For the most part, though, everyone
else's comments are docketed as public comments.
In terms of the EPA comments, they
recommend a 10-6 per year probability for the cutoff.
They gave four reasons for that. They said that 10-6
is midway between 10-8 and 10-4.

CHAIRMAN HORNBERGER: It's true, but --

MR. McCARTIN: Well --

MEMBER GARRICK: It lets you do a log --

MR. McCARTIN: Yes, you have to think on
a log scale in terms of -- if you're of a linear mind
it's decidedly different, but it is halfway between --

MEMBER GARRICK: But it's also true that
10-5 is halfway between 10-8 and 10-3, right?

They felt 10-6 would be more widely
accepted and easier to defend. However, they gave no
basis for that; it was just a statement made.
They also said they didn't see a need to
be restrictive on screening FEPs, in that there was
always the provision in the rule that if something
didn't have a significant effect on performance, you
didn't have to include it. So part of what I'm
reading is: rather than make it 10-5 per year, you
can be less restrictive and make it 10-6, because you
always have this fallback position that if it doesn't
matter much, you don't have to include it.
And finally, their fourth point was that
when they looked at some DOE analyses, the variation
in consequence between the 5th and 95th percentile
was a factor of 100 for those calculations. And 10-6
is two orders of magnitude, a factor of 100,
different from 10-4. They think that, I guess, gives
more basis for the --

MEMBER GARRICK: But they don't put
forward any specific evidence?

MR. McCARTIN: No. They are just
statements made.


times farther away from the sun than --

MR. McCARTIN: No comment.
This is one page and so it's very simple.
We had those four arguments.
Going to the State: the State of Nevada
recommends that we use 10-8 per year for the cutoff,
which would be the same as very unlikely. They
believe that the value should be the same for
groundwater protection and human intrusion as for
individual protection. They said that although EPA
told us to specify a probability for unlikely events,
EPA did not say, and does not require, that it be a
different number. And so there's that part.
They maintain that groundwater protection
deserves the same rigor as individual protection.
Third, they argue that screening out igneous activity
seems inappropriate given that igneous activity is
such a dominant dose contributor. There are some
problems with that, of course: for groundwater
protection, the dose contribution from igneous
activity is an inhalation air pathway calculation,
not a groundwater dose. So although igneous activity
is dominant for dose overall, that is a different
aspect than groundwater protection.
And finally, on screening for human
intrusion: if you screen something out of the human
intrusion calculation and then want to compare the
human intrusion calculation to the individual
protection calculation, it would blur that
comparison, because one has some things screened out
and the other doesn't. Those were the issues raised
by the State in supporting a 10-8 value.

In terms of NEI, they support the 10-5 per
year. They felt that this focused compliance on the
most important risk contributors, that it effectively
bounds the speculation for unlikely events, and that
it provides for a reasonably and prudently
conservative analysis; they think 10-5 is
sufficiently low. And so those are the three sets of
comments.

Briefly, my own personal opinion is that
in the comments we've seen, none of the comments
directly imply that 10-5, as a value for likely, is
necessarily inappropriate, if we're trying to look at
the likely conditions.
Now I say that knowing we are going to
shortly go into the process of responding to the
comments. We will prepare a Commission paper,
etcetera. But right now it seems there weren't many
comments that went to the arguments that NRC put
forward.
MR. LARSON: Who signed the Nevada letter?

MR. McCARTIN: It was Eagan and
Associates, the law firm that provided the comments.


MR. McCARTIN: NEI was signed by Steve

MEMBER GARRICK: The law firm representing

MR. McCARTIN: The State. And it was
someone from that law firm, I'm not familiar with the

question, the 10-5 that -- in your role, doesn't that
define likely?

MR. McCARTIN: The lower bound of likely.


MR. McCARTIN: And so anything less than
that would be unlikely.

CHAIRMAN HORNBERGER: So .99 times 10-5 is
unlikely?

MR. McCARTIN: Correct.

it to be .99 times 10-6 that it would be unlikely?



MR. McCARTIN: Yes. In terms of our
current schedule: when Part 63 was finalized, the
Commission said we would be conducting an expedited
rulemaking, and we are still working under that
direction. We intend to proceed quickly to finalizing
the amendment. Right now the plan -- I was told I
probably shouldn't do this, but that's why the word
tentative is up front there -- is that we'd like to
have a Commission paper transmitting the draft final
amendment done by May 17th, getting it to the
Commission by no later than the end of June, sooner
if we can.
We've taken the expedited rulemaking to
mean we should proceed as quickly as we can. Like I
said, with only three commenters and not extensive
comments -- probably, I'll say, 7 to 8 pages total
between all three -- we should be able to develop a
package for the Commission to consider.
I don't know how the May 17th deadline
fits into the Committee's schedule, but you can see
we're going to proceed fairly rapidly to get into
concurrence. Whether you want to weigh in prior to
that or by June 28th, which is certainly when it gets
to the Commission, a letter to the Commission prior
to that would be useful for their deliberations.

CHAIRMAN HORNBERGER: Tim, I know the last
time you talked to us on this you gave me at least one
example of a FEP that was I guess between 10-5 and 10-8
that would be screened out.


CHAIRMAN HORNBERGER: Can you give me one
that's between 10-5 and 10-6?
I'm trying to figure out some way to judge
the difference between what EPA proposes and what you
propose.



MR. McCARTIN: You're getting into a range
where I would say it's possible, though in no more
than a qualitative sense, that movement of an
undetected fault might be on the order of 10-5;
whether it's a little above that or a little bit
less, I just don't know.
Certainly, you might have some very large
seismic events that might get you up into that range,
but I'm not certain. Those are the two kinds of
things: at 10-5 you might be able to screen out some
very large seismic events.

trying to think, so let's take a very large seismic
event. Does that mean that what DOE would have to do
is an analysis of human intrusion taking place at the
time of a large earthquake?

MR. McCARTIN: No, not at the time of a
large earthquake. The human intrusion calculation you
would have a bore hole going through a waste container
providing a pathway to the saturated zone, and you
have a failed container. You now have other seismic
events occurring. I'm not sure --

effect on the failed container?

MR. McCARTIN: If there was one, yes.

CHAIRMAN HORNBERGER: Okay, and it would
be similar for a volcano. They don't have to be
contemporaneous, they just --

MR. McCARTIN: Right, yes, correct. Yes,
yes. What you're doing in human intrusion is looking
at the natural evolution of the site. For individual
protection, you're going down to the 10-8.


MR. McCARTIN: For human intrusion, you
would stop at 10-6, and so the way I look at it, if
there are some things in the analysis between 10-6
and 10-8, you might not include them in the analysis;
and likewise at 10-5.

CHAIRMAN HORNBERGER: That was helpful to
me.

MEMBER GARRICK: Can you remind us once
again -- I think you covered this last time, Tim --
what the NRC's principal basis is for its definition
of unlikely events?

MR. McCARTIN: Well, in general, we're
relying on the preamble to the standard: when they
said exclude unlikely events, the intent was to focus
on the likely behavior. Given you're looking at the
likely behavior, we, in an order of magnitude sense,
looked at 10-4, 10-5 and 10-6 as potential cutoffs
for the boundary. At 10-4, it's very likely that the
event would occur over a 10,000-year period. At 10-5,
it's a 10 percent chance of occurring. At 10-6, it's
a 1 percent chance of occurring.

Looking at 10-4: likely should be somewhat
less than almost a sure thing, and at 10-4 occurrence
is almost a sure thing. At 10-6, a 1 percent chance
of occurring, it seemed that was too low; to say at
the lower end that something is likely when it has a
1 percent chance of occurring did not make sense.
That left the simple choice of 10-5, a 10 percent
chance of occurring. One could argue maybe it should
be 30, 40, 50 percent, but just in the order of
magnitude sense, we felt 10 percent was prudent:
anything less than a 10 percent chance of occurring
is clearly unlikely. It is somewhat of a qualitative,
subjective decision -- I'd say it's sort of a common
sense approach -- with the one thing we have being
that very unlikely was quantitatively defined at the
other end.
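The 30, 40, 50 percent musing above can be turned around: for a chosen chance of occurrence over the period, what annual probability does it imply? A hypothetical sketch of that inversion:

```python
# Inverse of the compounding relationship discussed above: the annual
# probability p giving a chosen chance P over N years is
# p = 1 - (1 - P)**(1/N).  Independent years assumed.
N = 10_000

def annual_rate(cumulative_P, years=N):
    """Annual probability giving `cumulative_P` chance over `years` years."""
    return 1.0 - (1.0 - cumulative_P) ** (1.0 / years)

for P in (0.10, 0.30, 0.50):
    print(f"{P:.0%} over {N:,} years -> about {annual_rate(P):.1e} per year")
```

All three choices -- 10, 30 or 50 percent -- land between 10-5 and 10-4 per year, which is consistent with the order-of-magnitude framing in the answer above.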

MEMBER GARRICK: But in practice, by
choosing a range, there's nothing to prevent you from
making a decision that would be completely consistent
with EPA's. There's nothing to prevent you from --

MR. McCARTIN: Accepting 1 percent?


MR. McCARTIN: Correct. And that's part
of the process we will look at, the suggestions for
what EPA said and the basis for that 1 percent and --

MEMBER GARRICK: Now, of course, the thing
we commented on a little last time is the question of
how to deal with mischief with respect to the use of
this in screening.
MR. McCARTIN: Correct.

MEMBER GARRICK: By subdividing events in
such a way that you are able to screen out things
that, unsubdivided, you could not.

MR. McCARTIN: Correct.

MEMBER GARRICK: And that's just a
judgment thing to deal with that?

MR. McCARTIN: It's judgment, but it has
been put in the rule in the technical criteria, at
least in the background information prior to the
quantitative requirements: we're expecting these
things to be large groupings. It's not that you can
subdivide an event into very small groupings and get
a very small probability just because you subdivided
it.

We also have it in the review plan that
seismicity is looked at as a class of events. Let's
say you took the probability of getting something on
the Richter scale between 7.01 and 7.1 -- well, that
might be very small, but it's the class of
seismicity, the class of volcanism, that matters.
And so we certainly have given the
indication that the intent, with the probability
cutoff, is to look at a class of scenarios. So
you're right, there is some judgment there, but in
our discussions with DOE, there's never really been
an area we've seen where they've subdivided in a very
narrow way merely to screen something out.
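The subdividing concern in this exchange can be made concrete with hypothetical numbers (the 3×10-5 class rate and the ten-way split are illustrative, not from the transcript):

```python
# Hypothetical illustration of the screening concern discussed above:
# a class of events whose total annual probability is above a 1e-5/yr
# screening cutoff can be split into narrow sub-events that each fall
# below the cutoff, even though the class total is unchanged.
CUTOFF = 1e-5  # per year, the proposed lower bound for likely

class_rate = 3e-5                 # the whole class: above the cutoff
bins = [class_rate / 10] * 10     # ten artificially narrow sub-events

print("class total:", sum(bins))                          # still ~3e-5
print("every sub-event below cutoff:", all(b < CUTOFF for b in bins))
```

This is why the review plan treats seismicity and volcanism as classes: the screening decision is made on the class total, not on the individually small bins.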

MEMBER GARRICK: Why did this -- why was
there the need to do this?

MR. McCARTIN: Well --

MEMBER GARRICK: Is this EPA-driven?

MR. McCARTIN: I'll say yes, in part.
There are two answers to that question. One, when EPA
finalized their standard, they said exclude unlikely
events from the human intrusion and groundwater
protection calculations, and we'll leave it up to NRC
to specify exactly what that number is.

The Commission had a choice; there were
two ways it could have been handled. We could have
left it qualitative -- indeed, DOE can, based on our
approval, screen out unlikely events -- and what that
exact number was would then be litigated in the
licensing hearing. Rather than litigate it in the
licensing hearing, the Commission opted for the more
efficient course: rather than leave it a somewhat
vague concept, put a number on it now, get public
comment, and resolve what unlikely is. It's just a
more efficient way of dealing with it.

MEMBER GARRICK: But they're making a
comment now that's different from what they said the
NRC was free to do. Is this not going to be
interpreted by the public as our being in conflict
again?

MR. McCARTIN: Well, when they said NRC
should do this, they never said they wouldn't comment
on what we did. So in that sense, I don't know that
they're in conflict with the final standard. As to
the fact that they recommend a value -- I'd have to
go back and look at the exact words; I think it's
just a recommendation. We are not obligated to take
the recommendation. I think we are obligated to
respond.


MR. McCARTIN: To the basis they provided,
but -- yeah, I think to members of the public, in
general, it will look like we are in conflict.

MEMBER GARRICK: Okay. Ray, you got any
comments on this wonderful subject?

VICE CHAIRMAN WYMER: I'm too excited to

MR. McCARTIN: Let the record show.

MEMBER GARRICK: Make sure that's in the
record that he's too excited to talk.

MR. HAMDAN: Yeah, I have two comments,
Tim. Number one, and this is my worry: did we exhaust
the literature to make sure that a definition of
likely and unlikely isn't already out there
somewhere, in some rule or larger government or EPA
document, that they themselves forgot about?

MR. McCARTIN: Sure. Well, when we were
writing the proposal, we talked to people in the
reactor space and other individuals throughout the
Commission. I will say the word unlikely is
ubiquitous in NRC guidance and regulations. Unlikely
is a good, qualitative word.

MR. HAMDAN: How about likely?

MR. McCARTIN: Well, likely --

MR. HAMDAN: Same thing?

MR. McCARTIN: Yeah, I would say likely is
the same thing. What we said in the proposal is that
in essentially every application where a word like
unlikely is used, it needs to be addressed in the
context of the application. Where this came in: I
talked to some people in reactor risk, and they said
we would like core melt to be unlikely. Absolutely
true.

Now a core melt can cause instant
fatalities. So when they think in unlikely space,
talking about precluding instant fatalities, you may
want a very low number for the word unlikely.
If I translate that over to where we are,
for groundwater protection, with iodine being one of
the key contributors, it's a .2 millirem dose that
we're preventing. That is approximately a 10-7 risk
of a fatal cancer. So it is instant fatality in one
case, a 10-7 risk of a cancer fatality in the other.
Although the word unlikely is used in both places, I
don't think it's appropriate to say that whatever
probability number you assign to it should be the
same. We've looked at it for this very specific
application, and it does not imply any precedent for
other parts of NRC, which we stated in our proposal.
I think that's the only way: unlikely is a very good
word, but if you try to put a number on it, it can
only be done in the context of the application.
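The 10-7 figure quoted above for a .2 millirem dose can be sketched with a nominal risk coefficient. A hypothetical check -- the 5×10-4 per rem coefficient (roughly 5 percent per sievert, an ICRP-style value) is our assumption, not stated in the transcript:

```python
# Hypothetical dose-to-risk check of the ~1e-7 figure quoted above.
RISK_PER_REM = 5e-4          # assumed fatal cancers per rem (~5% per Sv)

dose_mrem = 0.2              # the groundwater-protection iodine dose discussed
risk = (dose_mrem / 1000.0) * RISK_PER_REM   # mrem -> rem, then apply coefficient
print(f"{dose_mrem} mrem -> fatal-cancer risk of about {risk:.0e}")
```

With that assumed coefficient, 0.2 mrem works out to 2×10-4 rem × 5×10-4 per rem = 1×10-7, matching the order of magnitude cited.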

MR. HAMDAN: I have another question on
your point. We have experience with differences
between EPA and NRC in other programs, and we know
we've had years of deliberations and arguing, with
very bad PR. Have we thought this through: whether,
since the risk is small -- and I think you have said
that -- it's worth it to just take the EPA number and
avoid all these problems? I'm wondering if we could
stop short of having another battle between the
Commission and the EPA over something maybe as small
as this.

MR. McCARTIN: Well --


MR. McCARTIN: Yes. I mean, that is a
Commission decision in my mind. In terms of
developing the Commission paper, what we as the staff
owe the Commission is the best technical answer we
can give them, and to identify any policy
implications, such as that one, that could come with
whatever recommendation we have. I can't say which
way we will go. Ultimately, things like that, as you
posed it -- should we adopt the EPA number just to
not look like we're in a fight -- that really --

MEMBER GARRICK: There is a big

MR. McCARTIN: That's a Commission

MEMBER GARRICK: There is a big difference
here between this issue and the radiation standard
issue in that EPA did not say we were free to assign
a radiation standard.

MR. HAMDAN: I know but they're


MR. HAMDAN: It can happen again, and we
have seen it. Just one quick point to Tim: would you
consider, in your Commission paper, bringing this up
-- looking at the options for the Commission? Would
you bring this up or not?

MR. McCARTIN: Well, prior to rushing to
it, I can't say. What we have always tried to give
the Commission is our best technical advice, and we
always give them options and pros and cons if we can.
Obviously, EPA has a special role in the Yucca
Mountain process; however, as Dr. Garrick points out,
for this one they relinquished that to us. We
typically give pros and cons.

I would not be surprised if we have pros
and cons like that in there. What I view as my
obligation is to correctly portray what the comments
were, give reasonable answers, and give pros and cons
on a recommended approach. That kind of decision is a
Commission decision, and I'm comfortable with that.
The duty upon me, as I say, is to give them the best
information I can.

MEMBER GARRICK: I think so. Any other
questions from any Members, any staff? Anybody else?
All right, I think we're satisfied.

MR. McCARTIN: Great.

we've come to that joyous point in our agenda where we
get to talk about ACNW reports, so we won't need to be
on the record any longer.
(Whereupon, at 4:00 p.m., the meeting was
recessed.)
