110th ACNW Meeting U.S. Nuclear Regulatory Commission, June 28, 1999
UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
ADVISORY COMMITTEE ON NUCLEAR WASTE
***
MEETING: 110TH ADVISORY COMMITTEE ON
NUCLEAR WASTE (ACNW)
***
Southwest Research Institute
Building 189
6220 Culebra Road
San Antonio, Texas
Monday, June 28, 1999
The Committee met, pursuant to notice, at 8:30 a.m.
MEMBERS PRESENT:
B. JOHN GARRICK, ACNW Chairman
GEORGE HORNBERGER, ACNW Vice Chairman
RAYMOND WYMER, ACNW Member
CHARLES FAIRHURST, ACNW Member
P R O C E E D I N G S
[8:30 a.m.]
MR. GARRICK: Good morning. The meeting will now come to
order.
This is the first day of the 110th meeting of the Advisory
Committee on Nuclear Waste. My name is John Garrick, Chairman of the
ACNW. Other members of the committee include George Hornberger, Ray
Wymer, and Charles Fairhurst.
The entire meeting will be open to the public. During
today's meeting, the committee will review activities underway at the
center. Included in that review will be the ten high level waste key
technical issues. Four of these KTIs will receive some special
attention and emphasis. We will discuss with the staff the evaluation
and contributions to risk. Included will be presentations on
sensitivity studies, event tree post-processing, and importance
analysis.
We will review with the staff their evaluation of the risk
contribution of potential igneous activity at the Yucca Mountain site,
and discuss committee activities and future agenda items.
Howard Larson is the Designated Federal Official for today's
session, and this meeting is being conducted in accordance with the
provisions of the Federal Advisory Committee Act.
We have received no requests to make -- I guess we received
one minor request or one -- Andy, do you want to fill us in on that?
MR. CAMPBELL: We've received one request for a speaker,
Sarah Lee Salar. However, she has not contacted us again. So that may
be on Wednesday.
MR. GARRICK: Okay. We have received a written comment for
inclusion in the record from Dr. Donald L. Baker, Aquarius Engineering,
Fayetteville, Arkansas. His comments have been provided in advance to
the members and will be included with the meeting transcript.
Should anyone wish to address the committee, please make
your wishes known to one of the committee staff. It is requested that
each speaker use one of the microphones, identify himself or herself,
and speak with clarity and volume so that he or she can be heard.
Before proceeding with the first agenda item, I would like
to cover some brief items of current interest. As most of us know,
Commissioner Greta Dicus will become Chairman of the NRC when Chairman
Shirley Jackson's term expires June 30. Chairman Jackson made the
announcement at one of the periodic all-hands meetings with NRC staff on
June 15.
The White House confirmed the appointment in a press release
at about that same time.
We welcome Ms. Cheryl Hawkins, a 1999 graduate in chemical
engineering from the University of Maryland, Baltimore County, to the
ACNW staff as a summer intern.
We also would like to recognize a member of the
administrative staff, Michele Kelton, who received the NRC Meritorious
Service Award for Support Staff Excellence. We're very pleased about
that, Michele.
South Carolina regulators have recently determined that the
potential remaining disposal capacity at the low level radioactive waste
disposal facility in Barnwell, South Carolina is only 3.2 million cubic
feet, approximately half of previous estimates, according to Virgil
Autrey of the South Carolina Department of Health and Environmental
Control.
Assuming an annual disposal rate of about 300,000 cubic
feet, this capacity will be sufficient for approximately ten years.
California has decided not to appeal a court decision
against transferring Ward Valley land for a low level rad waste disposal
site. Instead, Governor Gray Davis has asked the University of
California President Richard Atkinson to chair an advisory group to come
up with alternatives for low level waste disposal.
The group will include academic, scientific, environmental
and biotechnology experts and representatives from utilities
and state agencies.
In March, a Federal Judge refused to order the Department of
Interior to transfer Ward Valley land to California.
US Ecology's rad waste operations at Oak Ridge received an
award for meeting and exceeding Federal water quality standards. The
Kentucky-Tennessee Water Environment Association, a group of water
quality experts, awarded its pre-treatment excellence award to the
American Ecology subsidiary, which operates low level waste processing
and recycling centers at the Tennessee site.
On June 10, South Carolina Governor Jim Hodges announced the
creation of a task force, quote, "to examine the final disposition of
South Carolina's low level nuclear waste facilities."
He made a number of comments, including "My stated goal," he
said, "would be to get South Carolina out of the business of taking
nuclear waste from around the country." He believes that this is a
policy strongly supported across the State of South Carolina, and
discussed a number of options that might be considered.
The DOE has spent about 16 years and perhaps half a billion
dollars on a separations technology, before deciding the process
produced too much benzene to be used safely, according to the General
Accounting Office. The in-tank precipitation process was designed to
separate high-level nuclear waste from 34 million gallons of liquid
stored in tanks at the Savannah River site in South Carolina.
Initially, the facility was to have begun operating in 1988.
The General Accounting Office indicated that DOE now estimates an
alternative process will not be available until perhaps 2007 and would
cost $2.3 billion to $3.5 billion over its lifetime.
Unless there are other issues -- there is one, I guess, that
we should mention.
On May 29, the Texas State Legislature adopted a conference
report containing a provision abolishing the Texas Low-Level Radioactive
Waste Disposal Authority as a separate entity, but transferring its
staff, funding and functions to the Texas Natural Resource Conservation
Commission.
The provision was added to the conference report just before
the legislature adjourned, when it became apparent that other
legislation relating to the authority's functions would not be passed.
The Governor is likely to approve the legislation, since it affects many
other vital state agencies.
I don't know, perhaps it's been approved by now.
Unless there are other points of interest from either the
committee or staff, we'll proceed to the agenda. Our first agenda item
is Program Overview on Progress toward KTI Resolution, and I guess Budhi
Sagar is going to lead that discussion.
MR. SAGAR: Good morning. My name is Budhi Sagar, and I'm
Technical Director at the center. Before I start my presentation, I
would like to welcome the members of the ACNW and the staff to the
center. We always welcome your comments, your criticism, and we try to
act, to the extent we are competent to act on your suggestions.
We have tried to focus this whole meeting. We asked the
presenters to look at the self-evaluation that you recently submitted
to the Commission, and there were some issues brought up there which
you have the intention to follow up on. So we asked the speakers,
including myself, to try to react to the status of those issues.
As requested by you, I will try to take only about 40
minutes on my prepared comments, and the rest of the time is for you to
ask questions or get into discussion, as you wish.
What I would like to do in this presentation is to provide
you an idea of the key high level waste program milestones that are
coming up. I will use two viewgraphs for those.
In this fiscal year, the big activity that we conducted was
the review of the viability assessment. I will take a few minutes to
summarize what we did on that.
As for the status of issue resolution, I have a rather busy
table that's included in the presentation here, of which you have a
hard copy, and it would be hard to project because the fonts are small.
So I would like to take only a few minutes to go over a few things.
Those are for you to take a look at and if you have questions, please
feel free to ask.
As Dr. Garrick said in the beginning, only four KTIs would
get some special treatment here in this meeting. The other six are in
that table that I presented to you, but the KTI leads are present
in the audience. So if you have questions, we certainly will try to
answer those questions on those KTIs.
There is a big effort that we have started now, which is the
development of the Yucca Mountain review plan. We expect to have
Revision 2 of this review plan prepared before the license application
comes in to NRC. So I'll spend a few minutes describing that.
The review tools are basically in two parts: the review of
models and the review of data. I will try to touch on some of the
models that we would use to review DOE's analysis.
The total system performance assessment code, TPA, is a
major part of that, but then we have some auxiliary codes that we
would spend some time on.
I'm going to give you a few seconds on the future outlook
that we see.
The key high level waste program milestones are on this chart
here. There are two charts, actually, one listing in text what the
milestones are and one providing you a schedule along with the
milestones here.
I'd like to touch on a few things here. The site-specific
rule for Yucca Mountain is a major activity. The final rule, even
though the EPA standard is not out in the public yet, is still on
schedule, as far as the NRC staff is concerned. I think by December we
intend to submit the final rule to the Commission.
The pre-license resolution of KTIs is obviously very
important, and has been going on for the last four or five years. As you
see on that line in the chart, the issue resolution status reports, the
IRSRs, are the main documents in which we document the resolution. But
as you will see, at the end of fiscal 2000 or beginning of fiscal 2001,
the final revision of the IRSR, which is Revision 3, would be issued.
After that, we intend not to revise the issue resolution status reports.
But instead, we would move on to the next item, which is the
development of the YMRP, or the Yucca Mountain Review Plan. The Rev. 0
is supposed to be out here in about November this year and then the Rev.
1 would be at the end of fiscal 2000 and Rev. 2 by the end of fiscal
2001.
What we have planned to do, and I will state that later
again perhaps, is that the acceptance criteria and the review methods
which are presently contained in the issue resolution status reports
would be taken out of there and put into the Yucca Mountain Review Plan,
but the technical basis of issue resolution would stay in the IRSR. So
the two have to go in parallel and be consistent.
Then I would like to point out the review of the draft EIS,
which is the third line from the bottom in this chart. We expect to get
the draft EIS in August sometime. It was supposed to be the end of
July. I heard now that it would be somewhat late. We would get about
five to six weeks to turn around the review.
This is one of the activities in which we had not really
participated in the sense of attending workshops or meetings at DOE.
With a short turnaround time, I assume there will be an intense set of
activities to accomplish this.
The review of the license application, of course, that's
where everything is converging to, all the documents we are producing,
all the tools we are developing will eventually lead us to having the
capability to provide a competent review of the LA.
The bottom-most line you see has no milestones because there
is still some thought being given to that. DOE is supposed to
prepare, in their license application, what they would do regarding
performance confirmation. It's required in Part 63, but it's still, I
guess, somewhat of a fuzzy topic at this point, in most people's minds.
The overall approach to achieving milestones then is to try
to integrate all the activities. This is multi-disciplinary work, and
to resolve an issue, we do have to make an active effort to make sure
things get integrated.
The focus is always on issue resolution and most of the work
is prioritized, and both Bill Reamer and Wes Patrick will speak about
this aspect of it, how the risk contributions are factored into focusing
work.
And one of the main activities is to make sure that we remain
consistent among at least three major documents: the Yucca
Mountain-specific regulation, the review plan, and the IRSRs.
Of course, there is the underlying foundation for all of
them, which is the risk-informed performance-based regulation. That's
Part 63. Then the acceptance criteria and review methods, we have to
make sure that those come across as an integrated set of acceptance
criteria and review methods; not necessarily entirely discipline-based.
We eventually have to produce a safety evaluation report
once the license application is reviewed. We assume that the technical
basis that would be documented in the issue resolution status reports
would provide a foundation for developing that report.
So the idea is to keep these major documents together and
consistent.
The total system performance assessment, which is
essentially what the risk-informed regulation requires, would be the
central focus of most of the reviews.
The strategy for resolving key technical issues is to focus
and integrate the CNWRA independent work. We do consider not only what
work CNWRA and NRC staff does, but obviously what is submitted by DOE,
including the plans for their work in the future and all other
literature that we can lay our hands on. So we try to put all that
together to come to a conclusion, if we can declare, at the staff level
anyway, that the issue is resolved or what needs to get done before we
would say that the issue is resolved.
And the strategy is to interact frequently with DOE. As I
said, one of the things that we have not done is interact with DOE on
the DEIS, in the sense of finding out exactly what is contained in it.
So it will be interesting to see how
we turn around the review in five to six weeks on that, and I expect it
to be a pretty large document. We'll see how that works out.
And we do document the issue resolution, as we see it,
provide it to DOE for their comments, and it's out in the public. So we
want to benefit from other technical people taking a look at it and
providing criticism and comments. So when we do the next update, the
revision, we can include those suggestions in the update.
Of course, the ultimate resolution would be achieved when we
review the license application and the Yucca Mountain review plan would
provide guidance to both NRC staff and DOE staff on what we expect to do
in that review.
The issue resolution status reports, as I said, are being
somewhat restructured, in the sense that we will take the acceptance
criteria and the review methods out of that document and put them into
the review plan. But, again, it's probably worthwhile to repeat what we
mean by resolution at this point, at the staff level.
It simply means that, given all the data and all the
information available to us, including the future work plans of DOE,
the staff has no questions at this time, in the sense that we believe
the data and the models and the uncertainties are bounded to a point
where we have enough to review.
Of course, that thing can be opened up by the board once the
license application comes in.
The contents are provided here, I won't repeat, I won't read
them, but the only thing you have to note is that the acceptance
criteria and review methods have been taken out of the IRSRs.
Here is the schedule for Revision 2. As you can see, all
the KTIs except the radionuclide transport KTI will be updated to
Revision 2 by the end of this fiscal year. That KTI is one revision
behind because no work was done on it for a couple of years during the
budget crunch starting in '96.
Nobody can read this, I think, from the screen, but you have
hard copies of this table. This is the table I was referring to in the
beginning, which is a handy way of providing you the status of the
various KTIs. The first column names the KTI, the second column lists the
sub-issues that you would see in the IRSR, for example. The third
column provides you an estimated time, date, year, when that sub-issue
is supposed to be resolved. And the fourth column gives you a little
bit of description of where we are at this time.
As you would see, there are some sub-issues where we say the
resolution has been achieved. Those are, in fact, a minority. The
majority of the sub-issues we say partially resolved, because we are
either waiting for some data or some models that we have discussed with
DOE that would eventually be provided to us.
But if you look at the dates column, we do indeed expect the
resolution of most of these sub-issues before the license application
comes in. We do, of course, need to continue working on and developing
the technical basis. As I said, that will eventually be documented in
the safety evaluation report.
So I would quickly pass over these and let you read that at
your leisure. If you have any questions on any of those KTIs, which
will not be discussed in detail, we do have staff present here in the
meeting to answer those questions.
There are some difficulties we have faced, which I have
identified on chart 15, in issue resolution. Quality assurance
problems, both in terms of data and models, at DOE have been ongoing for
some time. NRC set up the QA working group. They have visited DOE
several times, I think at least twice, in Las Vegas.
We believe and we understand that DOE has consolidated the
number of procedures that apply and is implementing those procedures
pretty well. The thinking is that QA will be in good shape pretty soon.
There are uncertainties in DOE's schedule. Of course, we
have to be flexible and be able to react to DOE's schedule when things
delayed. We do have to do the planning, assuming the schedule would
stay, but if they change, we have to go back, do the replanning, et
cetera, but that causes uncertainty in when and how things will get
done.
On the repository design, even though we believe DOE should
have the flexibility to revise its design, to react to whatever it sees
when it goes underground, et cetera, certain main items we believe
ought to be fixed, because if we have to develop review tools and DOE
has to develop models and data, you need some time to develop all the
data and models for any new design they come up with. And if they
change the design a few months before the LA, we're not sure if they
would be able to provide evidence that that design is okay or not.
It's a question of time. It's not that they shouldn't
revise the design, but would they be able to generate enough technical
evidence about the safety of that design? That's the third point here.
You will hear about this, the volcanism KTI. There is still
some contention. Again, we believe, based on the risk-informed
regulation, that if we consider the low probability but the high
consequence and multiply the two, the net result, the so-called expected
risk, is indeed not negligible. Therefore, DOE has to do some credible
work to satisfy the NRC, and we have to continue doing some work to make
sure we are able to review whatever DOE provides.
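To illustrate the expected-risk arithmetic described here, the following
is a minimal Python sketch; the probability and dose numbers are
hypothetical placeholders, not values from the program.

```python
# Hedged sketch: multiply a low annual event probability by a high
# conditional consequence to see whether the product is negligible.
# Both numbers below are hypothetical placeholders.

annual_probability = 1.0e-7    # hypothetical igneous event frequency (per year)
conditional_dose = 1.0e6       # hypothetical dose if the event occurs (mrem)

expected_dose = annual_probability * conditional_dose  # mrem per year
print(f"Expected (risk-weighted) dose: {expected_dose:.3f} mrem/yr")
# Prints 0.100 mrem/yr -- small, but not obviously negligible when the
# compliance measure is on the order of 25 mrem/yr.
```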
One of the items that NRC takes credit for in their PAs, and
you will hear some of it later on, is the alluvium part of the saturated
flow path from the repository to the critical, the postulated critical
group, about 20 kilometers from the repository.
And since this is new -- you know, it used to be that you had
to meet the regulation at about five kilometers -- extending that flow
path to 20 kilometers, there is not much data on the transport
properties.
The DOE, on the other hand, did not take much credit for
alluvium. They took credit for cladding, for example, and that met the
standards, rather than alluvium. But the sorption properties of the
alluvium, we believe, are important. Nye County is collecting some data,
but we don't really see any plans in the DOE program to collect that data.
That may be something that -- that has been intimated to DOE through a
letter and they may come up with some plans to collect that data.
There is, of course, delay in the promulgation of EPA
standard, which means Part 63 would have to be revised to be consistent
with the EPA standard, whenever that becomes final.
And we have noticed some -- and this may be of special
interest to you because of your interest in coupled processes -- the
drift scale heater test was supposed to provide some definitive data
about coupling, but because of some problems in the conduct of the test,
in the sense of an unmonitored release of energy through the bulkhead
and how to interpret that, that may become an issue. So, again, I think
that issue may be brought back in one of the KTIs.
One thing we have noticed in your commentary on the staff's
work is that the staff doesn't really understand what you mean by
systems engineering approach. We would definitely like to understand
what you mean by systems engineering approach.
I was told that some presentation would be done in this
meeting by Andy or one of you, I'm not sure. But that's something we
need to talk about. What we think you mean, and this may be entirely
wrong, is to take a total systems approach. What I've put in my title
here is hierarchical decomposition of the repository system, the system
as a whole, and then how things flow downward, how we break it up into
smaller and smaller portions so you can build up a so-called total
system code.
So we do identify subsystems and their key elements, as you
will see in the next chart, which is, again, very difficult to read, but
you have hard copies. Then we have to provide a framework for
implementing a risk-informed, performance-based systems approach.
We think that the hierarchical decomposition that we have
come up with will give us a foundation, a framework, for implementing
that risk-informed, performance-based systems approach.
It forms a basis for developing the total system performance
assessment code and provides us a basis for doing integrated reviews
rather than discipline-based reviews.
If you go to the next chart in your notebook, the top level,
of course, is the total system, which is the repository performance and
Part 63 specifies a performance criterion in terms of expected dose,
which you may call risk, at the total system level.
It has at least three identifiable subsystems, the
engineered barriers, the geosphere, and the biosphere, which then have
some main components. Again, this is based on staff's knowledge at this
point. I mean, there is no uniqueness to these components, for example.
Those could definitely be different if one wanted to. So that's the
organization one selects in trying to manage the program.
Then there is the bottom-most part, the items termed here key
elements of subsystem abstractions; we have now coined a new term for
them, integrated sub-issues. Rather than the sub-issues you saw in the
previous table, which were organized by KTI, many sub-issues feed into
these integrated sub-issues.
There are 14 of them, if you count them. So to us, this
does implement the system engineering approach, but, again, I think any
suggestions you can come up with to make it more efficient or more
effective would certainly be appreciated and helpful.
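As a rough illustration of the hierarchical decomposition described
here, a minimal Python sketch follows; the subsystem, component, and
sub-issue names are illustrative placeholders, not the staff's actual
list of 14 integrated sub-issues.

```python
# Hedged sketch of a hierarchical repository decomposition:
# total system -> subsystems -> components -> integrated sub-issues.
# All names below are illustrative, not the program's actual list.

repository_system = {
    "Engineered Barriers": {
        "Waste Package": ["container corrosion", "waste form dissolution"],
        "Drift Environment": ["near-field chemistry", "thermal effects"],
    },
    "Geosphere": {
        "Unsaturated Zone": ["percolation flux", "UZ radionuclide transport"],
        "Saturated Zone": ["SZ flow", "alluvium sorption"],
    },
    "Biosphere": {
        "Critical Group": ["dose pathways", "lifestyle assumptions"],
    },
}

def integrated_subissues(system):
    """Flatten the bottom level -- the key elements of subsystem
    abstraction that many KTI sub-issues feed into."""
    return [issue
            for subsystem in system.values()
            for component in subsystem.values()
            for issue in component]

# The staff's chart has 14 integrated sub-issues; this illustrative
# subset has fewer.
print(len(integrated_subissues(repository_system)),
      "integrated sub-issues (illustrative)")
```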
Okay. There goes the electronics system. Anyway, if you
turn to your next page, the next two viewgraphs provide you a quick
review of viability assessment, which was a major activity this past
fiscal year. We focused on only post-closure and not pre-closure in
this review. If you look at the previous chart, I said there are 14
integrated sub-issues or key elements of subsystem abstraction; that's
how the review of the VA was organized, and the comments were generated
on the basis of those 14 integrated sub-issues.
We had a very short turn-around time for actual review, even
though we had been participating for a long time in DOE's workshops and
meetings. So we knew a lot before we got the document, which helped us.
We will follow a similar strategy in the LA review, in the sense of
keeping in touch with DOE to the extent possible, so that when the LA
comes, we know most of what will be in it.
The next couple of viewgraphs are on one other major
activity that we have started to undertake now. That is the development
of the Yucca Mountain review plan.
This would be guidance to the NRC staff and DOE on what
the focus of the review would be in the license application. And,
again, as I will show you in the next chart, we intend to organize this
review plan on the basis of the 14 integrated sub-issues, at least for
the post-closure performance review.
So the review methods, acceptance criteria, and what the
evaluation findings may look like, examples of that will be documented
in this review plan.
We do assume that by 2001, Revision 3 would be in final
form, ready for use in the review of LA.
The next chart shows you briefly the outline, as has been
prepared. This may have already been presented to you in other
meetings. No? Okay. But as you see, if you go to the third level
here, the review is essentially pre-closure safety, post-closure safety,
administrative and programmatic subparts. Then I have broken up the
pre-closure safety, post-closure safety and administrative subparts into
the fourth level, into the details.
If you go to the post-closure safety, the very top, you see
the four main components of post-closure: the performance assessment
requirement, performance confirmation, multiple barriers, and human
intrusion. This is based on Part 63, of course.
The performance assessment then comprises the 14 integrated
sub-issues. So each one would have acceptance criteria and a review
method developed. Again, this, to me, is an example of how the systems
approach would apply to the review.
We expect something similar to happen with the integrated
safety analysis in the pre-closure safety part of the review plan; that
is, that would be further subdivided into some cohesive integrated
subissues for pre-closure, for which the review method would be
developed.
The last leg of the review is the status of the TPA and auxiliary
codes. These are, of course, important and time-consuming; it requires
a lot of effort to develop them, test them, verify them, and document
them. Some of these have been provided to DOE. We expect
similar reciprocal action on the part of DOE in the sense we hopefully
would get their codes, so we can review them.
The TPA, of course, up at the top line, is probably the most
important. I think in mid-July we have set up a peer review, or
external review as it's called now, of this TPA code. There are eight
experts, chosen to avoid conflict of interest; the majority of those
experts are from outside the US because most of the US experts have
been snagged by DOE already.
So we would have a two-day meeting here -- three-day meeting
here. We would provide them documents, of course, before they come
here. The latest version of this code that we are using now is Version
3.2, but once the review occurs -- and we already have some suggestions
from staff, the users of the code, on what should be changed -- we would
combine that with whatever comments we get from the peer reviewers and
update the code to 4.0, which we believe would be the code we will use
for LA review, assuming, of course, the schedule holds.
The asterisk on three of these codes means they're
internally developed here. The others are acquired, but tested and
under configuration management procedures.
On the second column here, I've listed what the codes would
be used for. The main work horse for us for studying the coupled
processes is MULTIFLO, which does couple three of the four important
process groups. It couples the flow, the heat transfer, and the
chemistry. Also, we are modifying this code to make sure it can
accommodate fractures and faults and dual porosity, so on and so forth.
The only process missing out of here is, of course, the
mechanical stresses, although we have a code that couples stresses to
thermal. So there is a two-process model and a three-process model.
Of course, one of the objectives of what we do here is to
see what coupling needs to be really considered, what you might be able
to decouple and still bound your results.
So you study at the lower level, at the detailed process
level, how the couplings work and by the time you move up to the
performance assessment level, you try to see what can be decoupled
without affecting the safety assessment, as we are doing.
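A minimal sketch, with made-up physics, of the coupling-versus-decoupling
test described here: run a sequentially coupled temperature-and-flow
calculation, then a decoupled run that holds temperature at a bounding
value, and check that the decoupled result still bounds the coupled one.
Real codes such as MULTIFLO are far more elaborate; everything below is
a hypothetical illustration.

```python
# Hedged sketch: compare a sequentially coupled heat/flow calculation
# with a decoupled run that fixes temperature at a bounding value.
# The "physics" here is invented purely for illustration.

def flow_rate(permeability, temperature_c):
    # Hypothetical relation: warmer rock -> lower viscosity -> more flow.
    return permeability * (1.0 + 0.01 * (temperature_c - 25.0))

def coupled_run(steps=100):
    temperature, q = 25.0, 0.0
    for _ in range(steps):
        # Temperature relaxes toward a 90 C peak (toy heat transfer).
        temperature += 0.05 * (90.0 - temperature)
        # Flow responds to the current temperature (the coupling).
        q = flow_rate(1.0e-12, temperature)
    return q

def decoupled_bounding_run():
    # Decouple by holding temperature at its bounding (peak) value.
    return flow_rate(1.0e-12, 90.0)

q_coupled = coupled_run()
q_bound = decoupled_bounding_run()
print(f"coupled flow  : {q_coupled:.4e}")
print(f"bounding flow : {q_bound:.4e}  (bounds the coupled value)")
```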
As I said, on chart 22, I have a summary of what we thought
you were saying in your self-assessment. If you notice, the first one,
the use of performance assessment to reprioritize key technical issues,
and the third one here in this table, the transparent processes, Bill
Reamer will address those in the next presentation.
The use of a risk-informed, performance-based approach I have
touched on here in my presentation, and Wes will touch on it again. But the
fourth item, involve outside senior experts, will also be addressed by
Wes Patrick.
The application of the systems engineering approach, as I
said, we need to clarify what exactly you mean by that. We think we are
following, to some extent, that approach. Under that heading, I have put
in more detail some of the things you mentioned in the table in your
self-assessment.
I think on most of them we are doing something. It's a
question of how far we need to go on each one of them. And some of them
would be presented to you during this meeting.
My last viewgraph is on the future. The most important item
coming soon is the review of the draft environmental impact statement,
which, as I said, we expect to get sometime in August and then turn
around in about six weeks.
I have already described that there would be focus on
updating the TPA code to Version 4.0, after we go through the peer
review, external review, and that's the code we expect to use both in
reviewing the site recommendation and developing the Commission's
sufficiency comments and the license application.
The integrated safety review we had not applied to a
repository system before, so that is somewhat new. Just as with the
TPA code, we expect some major tool to be either acquired or developed
for completing the ISA, or integrated safety analysis.
That would have to be done at a fast pace to meet the LA
review deadline. We had time for the post-closure TPA code development,
but this has much less time. So we have to do it fast. But there is a
lot more knowledge about it than we had about the TPA. So it will
hopefully work.
The fourth item is kind of new, where we interact with DOE
and other stakeholders, especially in the public meetings that NRC has
been conducting recently. They have conducted two meetings on Part 63
in Nevada, and that will probably continue at an accelerated pace.
And the performance confirmation is something we have
started to think about, what would NRC, for example, require both in the
pre-closure and post-closure part on performance confirmation. We, in
fact, would like to look at what DOE would propose in that area. After
all, there is a significant period of time for this activity,
performance confirmation, and maybe we can learn a lot more if these
activities are planned properly.
Those were my prepared comments. Did I finish in time? I
hope so. But the rest of the time is open for discussion, questions.
MR. GARRICK: Thank you, Budhi. Any questions?
MR. HORNBERGER: Yes. Do you want me to start?
MR. GARRICK: Sure.
MR. HORNBERGER: Budhi, it's been my understanding all along
that the IRSRs were really going to provide the bulk of the template for
the Yucca Mountain review plan. I assume that that's correct, but
that's not -- my question is, why did you take the acceptance criteria
out of the IRSRs?
MR. SAGAR: The only reason we took it out is that the Yucca
Mountain review plan would be a document separate from the IRSR and to
keep them in two places, we thought, because their dates of production
are different, they may get out of date. We might issue an IRSR with
acceptance criteria that are not the same as in the Yucca Mountain
review plan, whatever rev is current at that time, and that would
confuse people.
That was the only reason we said keep it in one document so
that there is always -- at any given time, there is only one set of
acceptance criteria and not two.
Otherwise, the question is, is this the latest one or is
that the latest one.
MR. HORNBERGER: I guess the timeline, then, overlaps so
that the Rev. 0 for the YMRP would be out essentially at the same time
or slightly ahead of the Rev. 3 of the IRSR.
MR. SAGAR: Right. Rev. 0 would be out in October-November
of this year. Is that right, Keith?
MR. McCONNELL: Of '99. The annotated outline would be
November of this year, which is '99.
MR. HORNBERGER: March '99 has already passed.
MR. McCONNELL: Rev. 0 would be March of 2000.
MR. HORNBERGER: On the YMRP, one of the things I noticed is
that at, whatever level it was, the fourth level, I think, you had the
performance assessment and multiple barriers as separate boxes.
I'm curious as to your thinking on that.
MR. SAGAR: This is based on Part 63 in the sense that the
multiple barriers is mentioned separately.
MR. HORNBERGER: Yes.
MR. SAGAR: So there would be some finding on multiple
barriers. We would have to say, yes, DOE has used multiple barriers.
It may be based on just the performance assessment that is done, but --
MR. HORNBERGER: You don't necessarily see a totally
separate analysis for multiple barriers. The information may come from
the PA.
MR. SAGAR: The information would come from the PA? No.
There is that discussion going on whether it does indeed require
something more than meeting the 25 millirem per year, for example, the
expected peak dose. Is there something more required in saying, yes,
they have used multiple barriers or not? Can the PA itself be designed
to provide information to conclude? Yes, it can be, but what exactly
that would be has yet to be decided.
MR. HORNBERGER: Similarly, with the human intrusion, there
is a separate box. Would you see it as a somewhat stylized analysis
using the PA approach?
MR. SAGAR: Yes. That's the plan and I think that's what
the YMRP people were saying, that they would have a stylized analysis.
In fact, some of the factors of that stylized analysis are defined in
Part 63. So the applicant would know what to assume and what to do.
MR. HORNBERGER: You mentioned that you have a review of the
TPA coming up. Is the list of your external reviewers available?
MR. SAGAR: Certainly.
MR. HORNBERGER: I think we'd be interested in looking at
that list.
MR. SAGAR: We can provide the list.
MR. HORNBERGER: Thanks. The draft EIS -- my last question,
and then I'll pass.
MR. GARRICK: You've got plenty of time, it's all right.
MR. HORNBERGER: Okay. The draft EIS review, this is --
it's a -- you have a short timeframe to do this, right?
MR. SAGAR: Very short.
MR. HORNBERGER: Do you have any sense of what that review
will entail? I mean, this is likely to be a very long document. Are
you going to review every aspect of it?
MR. SAGAR: Well, Bill Reamer may be able to answer this
question better than I. What I, in discussion with NRC staff, have
learned is that we will definitely review the radiological part of the
EIS in detail. Then there is the biological, ecological part, which we
may review in some cursory fashion.
There is the socioeconomic part, which probably would be the
least -- of least interest to NRC. The OGC, I understand, has written a
Commission paper to describe what the scope of the DEIS and then EIS
review would be for NRC. I haven't seen that paper yet, but they would
define what the scope is.
Given the short turnaround time, my guess would be our
stress would be mostly on the radiological part, and we do expect that a
lot of what they had done in the VA perhaps would be in the EIS, but
that's not for sure, because there is a different DOE contractor doing
the EIS than the people who were doing VA. So whether those two are
parallel and consistent, I'm not sure.
MR. REAMER: I think we have a presentation on Wednesday on
the EIS and the EIS review plan. So that may be the point where we will
be more specific in responding to your questions. But at this point,
we're not looking to limit the EIS review just to radiological issues.
We're anticipating a review plan that recognizes that an adequate EIS,
an adequate DOE EIS is an important foundation document for licensing.
So the scope of our review is trying to take that into
account in a broad sense. But clearly, once we get the document, the
document that we have not seen at this point, we are going to have to
focus and prioritize, given the time restrictions, the 90-day comment period
that we've been provided.
MR. WYMER: Just a follow-up, and maybe you will cover this
later on when you talk about the EIS. But typically, in an EIS, you
have three, four or five alternative cases. Will you be able to say or
will you try to say anything about the relative merits of the various
cases as they apply to licensability?
MR. SAGAR: Again, I guess I'm not totally knowledgeable
about this, but my understanding was that NEPA restricts what
alternatives DOE would have to investigate. For example, no disposal is
not an alternative.
MR. WYMER: Right. That's always a required case. But then
they'll have intermediate --
MR. SAGAR: In fact, it's not required for this particular
one, I'm told.
MR. WYMER: Was it on this one? Okay.
MR. SAGAR: There are some special provisions in NEPA for
this particular EIS. Again, somebody else from NRC staff may be able to
give a better answer to this one.
MR. REAMER: Two thoughts. One, very shortly after the
document comes out, I think we are hopeful of being able to interact
with you at least on a one-on-one basis early in our review, so that we
can get that input, before we undertake our review and prepare our
comments.
The second point is that to me, an EIS document is more a
disclosure document, completeness of disclosure, consideration of
impacts. I'm not sure whether licensability really belongs in that
calculus.
But in any event, we might need to have the document in
front of us before we can really interact on that kind of question, so
we could be a little bit more focused.
MR. WYMER: It seems to me that you could get in kind of a
box if you have to address several cases with respect to it, or you'd
have to consider them at least.
MR. REAMER: Yes. And we're very concerned about the
prejudgment issue, as well. We don't even have a license application.
We don't have documents that -- licensing documents to review. You
know, we have technical basis documents and inputs, but we don't have an
application. We don't know whether there will be a license application.
MR. WYMER: Right. I had some technical questions, but I
think they'd be better addressed later on. I see some spots in the
agenda where they can be addressed.
One other point I have is on your last viewgraph, where you talked
about developing concepts on performance confirmation. I suppose that's
sort of the lead-in step to coming up with acceptance criteria. That's
how you've decided to break it down into a step-wise process.
MR. SAGAR: Right. The performance confirmation, the
monitoring, which would be, I suppose, the dominant thing that DOE would
have to do, whether of mechanical stresses or corrosion or flow of water
or whatever we can think about.
MR. WYMER: Chemistry.
MR. SAGAR: Chemistry.
MR. HORNBERGER: That's a no, never mind.
MR. SAGAR: What monitoring would be done for a
significantly long period of time. It could be as long as 300 years or
as short as 50 years. It's not entirely clear what instrumentation is
really appropriate and available and what to expect.
Of course, once you get some monitored data, what do you do
with it? How do you analyze it to make judgments, et cetera? We
haven't really focused on it. We were all focused on post-closure
performance assessment and developing the tools and so on and so forth
and now on pre-closure, the ISA. But this is one that would soon become
quite important, I believe.
So we need to pay some attention to that is all I'm
suggesting here.
MR. WYMER: I think I'll defer my other questions until we
get into more technical discussion.
MR. GARRICK: Okay. Charles?
MR. FAIRHURST: A number of questions I had have been
addressed already. But I'm interested in the sort of process
you've been going through with the issue resolution status reports, and
you've got a point here about interacting with DOE and other stakeholders.
Is there some point at which you will say you
have to finish dialogue with DOE on an informal basis because of, to
use the regulatory phrase, separation? You see what I mean? At some point, I
imagine you keep going with the public forever, because that's not a
conflict, but you're regulating an operation and at some point they will
-- is there some time at which you have to say, now, just for
credibility's sake, just for public credibility, we have to --
MR. SAGAR: Again, it's the policy issue that I leave to
Bill. My guess is that once we have finalized the YMRP through
interaction with DOE, in the sense that we do consider their comments,
that would be the time we would say: here is the YMRP, now you present
us the LA. But I don't know. I think during the license review there
would be staff requests for information, based on what we see in the
LA; that would be that sort of interaction.
So there would be interactions going on. Whether we would
essentially modify the YMRP in response to them, I think at some point
we would stop and say this is the review plan.
MR. REAMER: I really don't have anything more to add.
Maybe if you have a little more specific situation in mind, that might
help.
MR. FAIRHURST: No. But I think at one hearing, one of the
Commissioners or somebody said that at some point one has to draw
the line and say we're waiting for you to submit an
application, and then we will review that. We're not going to have a
continuous interaction and dialogue, so that when it comes, we know
everything that's in it and we have already made a prejudgment, which I
don't think is necessarily bad.
But from a public credibility point of view --
MR. REAMER: Well, nothing in pre-licensing -- of course,
the way we've divided it up is everything prior to the license
application we call pre-licensing consultation.
MR. FAIRHURST: Okay.
MR. REAMER: None of that is binding on really anyone. It's
not binding on the staff, it's surely not binding on the Commission or
any board.
All our meetings are in public, and we try to conduct them
as we would conduct a meeting if there were a license application
pending.
MR. FAIRHURST: Okay. Good. So essentially you can go
right up -- which makes sense. Okay.
MR. GARRICK: Budhi, since you're sort of the program manager
here and we look at your program schedule, would you care to comment on
what you believe to be the areas or items of greatest concern in terms
of being in a position to carry out your assignment?
Of course, you did identify difficulties in issue
resolution, but I'm just wondering if there's some milestones or issues
that you are especially concerned about, either with respect to
resources that are going to be required over a short period of time or
with respect to technical capability or what have you.
Would you comment on that a little bit?
MR. SAGAR: I think the two or three issues that we believe
would dominate the issue resolution, QA is one because it affects both
the data and the models that we think DOE would provide us, and if those
don't have the pedigree that we expect them to have, it would be quite
difficult to say, okay, this is acceptable.
The second, in my mind, is the time issue, where if the
design does change, if they change the method they use for the waste
package, for example, even though it may be pretty good, even though they
make it safer, the time won't be there for them to collect data.
Certainly it won't be there for us to collect confirmatory data before
we can come to a conclusion that indeed the standards or these
regulations are met.
Those, I think, are the two main stumbling blocks I see.
Then, of course, you asked about resources. In the budget planning
that NRC has done, which I have been privy to, given the resources that
have been planned going up to the review of the license application, I
think we would have sufficient resources.
As Wes will describe in his presentation, we have had
difficulty in hiring, quote-unquote, performance assessment
staff in that element, and that is primarily because, as we have said
several times before, performance assessment is not a discipline. It's
not something that is taught anywhere. Most people with experience are
people who work at DOE and we can't hire them because of the conflict of
interest clause we have.
So what we have to do is then to try to find people who have
the capability, but no experience, and try to train them here. Once we
train them, there is a market for them elsewhere, and retaining them is
not quite trivial for us.
So that's the only resource that's strained. I think we are
fully staffed in geochemistry and hydrology, rock mechanics and design
area, and material sciences.
MR. REAMER: And systems engineering.
MR. SAGAR: Well, every -- it's like PA. Everybody is a
system engineer, if they know what they are doing.
So that's the resource constraint that I see.
MR. GARRICK: But something like QA, that's not a new issue.
MR. SAGAR: No.
MR. GARRICK: That's been around a long time. It would seem
that there ought to be some evidence surfacing by now that either that's
under control or getting under control or not going to be a major
problem downstream. Is that something that is addressed in the context
of technical exchanges, or what kind of interaction takes place there?
And is there any evidence, from your perspective, that the situation is
changing?
MR. SAGAR: Well, there are quarterly exchanges on that
topic with DOE and the latest that we have seen does say that there is
control, that the DOE has come to grips with that issue and that they
are trying to manage it.
They have a procedure for qualifying data, for example.
They can't go back in time and collect all that data again under a QA
program. How the application of that procedure would come out through
peer review or whatever other methods are mentioned in that procedure,
we have to see the results of that and hopefully those would work.
But as you can imagine, in this kind of a program, it's the
long-term data that plays a major role. Models are one thing, but what
goes into the models is even more important, as a matter of fact. You
can make the models come out with almost any result. But what they are
based on is extremely important, and for us to review that part of the
DOE program, I believe it is critical that we figure out how they
calculated the parameters and what they are based on, what the technical
basis is.
And if that's not based on QA, on quality-assured data --
well, they may be able to qualify most of the long-term data they
collected in the past, but that remains to be seen.
But they do have procedures now, they have staff, and the
last time they were here, they told us they now accept the nuclear
culture, they are changing their culture so that they understand that
there would be a license application, that they're a licensee and,
therefore, have to follow certain requirements in terms of QA.
So I think it's under control, but -- Bill?
MR. REAMER: I don't think we want to be too rosy here. The
problem has been -- the problem is in the implementation. It's not
enough to have the plans, it's not enough to tell us that they adopt the
nuclear culture, it's not enough to train people.
The proof of the pudding is in the implementation, and the
evidence on implementation, I think, is still uneven. We're adding
staff in the area, and it is rapidly becoming one of the principal
issues we're faced with, if it hasn't already, and it's a very
difficult issue.
MR. WYMER: In some areas, you rely very heavily on
literature results and how you QA that is a very difficult question.
How do you know that the data are sound? How do you know that they
apply to these circumstances? So I think you're right, you've got a
real problem there. It's not rosy.
What you do yourself, you can QA.
MR. SAGAR: We do follow a strict QA program at the center,
which gets audited once a year through ARDI, of course. So the data we
produce we save and it's reflected in the QA program, but that's a very
small piece of data, a small fraction that we can bring forward.
DOE is the main depository of data. So they have to come
out with that part.
MR. GARRICK: Your PA comments and your discussion of
resources are of great interest to us. I was also going to
raise the question that George did about the distinction that seems to
be made of multiple barriers and human intrusion being something outside
rather than inside the PA process.
We wonder if the PA notion is losing some of its momentum
because of the problems that exist in maintaining a competent and full
set of disciplines that are necessary to do that work.
We, as a committee, have, of course, been pushing that what
we want to -- we'd like to see here is that if there is some technical
aspect that is not getting adequate treatment in the PA process, that
the emphasis ought to be on changing the PA or hiring people or
extending the scope of the analysis or doing whatever you have to do to
make the performance assessment do the job it's intended to do.
When you see things like multiple barriers as a separate
item, it begins to look like that's a residue of Part 60 and the
subsystem requirements; that we're still hanging onto this notion of
something is being handled outside the logic engine that has been
devised to deal with these issues.
As a manager, I guess, I would ask, is there -- did you see
anything -- any problems in this regard?
MR. SAGAR: Let me clarify why the box is sitting separate
from PA. I mean, PA could be everything. Anything you do could be
called PA.
MR. GARRICK: Right.
MR. SAGAR: There is nothing we do that's not PA. Right?
MR. GARRICK: Right.
MR. SAGAR: So anything we do for multiple barriers is PA.
But if you read Part 63, it requires you to do PA -- in one of the
slides later on, this would come out, which would say, well, consider
all the credible scenarios and the probability, calculate the
consequences and it should be less than X, 25 millirem per year.
Then you say, gee, you have to do a human intrusion stylized
scenario, which is not part of this 25 millirem. It's separate. But
the analysis you would do for human intrusion is PA. Same thing with
multiple barriers. You shall show that there is no single barrier you
depend upon in achieving the 25 millirem; you would show that there is
more than one barrier, hopefully at least one from each of the major
classes of barriers, the natural and the engineered.
How would you show that? Do a PA. But are all the PAs the
same? Can it be done in one run of a code? Probably not. You would
probably use the same TPA code, but make separate runs,
assuming different values of parameters and boundary conditions and so
on and so forth to make your point, make your finding.
That's all that set of boxes is saying. I don't think it is
diluting the focus on PA; my personal view, as a manager. But it's part
of PA; yet, you have to do two or three different types of runs of PA
codes to arrive at those conclusions. I have no problem doing that.
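A minimal sketch of the separate-runs idea described here: one PA
function, run once nominally and then with one barrier at a time
neutralized, to show that no single barrier carries the whole system.
The dose surrogate and all parameter values below are purely
hypothetical, not the TPA code.

```python
# Hedged sketch: demonstrate multiple barriers by neutralizing one
# barrier per run and comparing peak doses. The dose model and numbers
# are invented placeholders.

BASE = {
    "waste_package_life": 10000.0,   # years (hypothetical)
    "alluvium_retardation": 50.0,    # dimensionless (hypothetical)
    "uz_travel_time": 5000.0,        # years (hypothetical)
}

def peak_dose_mrem(p):
    # Toy surrogate: dose falls as the barriers perform better.
    return 2.5e5 / (p["waste_package_life"] * p["alluvium_retardation"]
                    + 10.0 * p["uz_travel_time"])

print(f"nominal case: {peak_dose_mrem(BASE):.2f} mrem/yr")
for barrier in BASE:
    run = dict(BASE, **{barrier: 1.0})  # neutralize this barrier
    print(f"{barrier} neutralized: {peak_dose_mrem(run):.2f} mrem/yr")
```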
MR. HORNBERGER: I guess a follow-up question is -- a
clarification. So you don't feel any pressure to do something other
than the kind of analysis that's being done.
The reason I ask the question is John Bredehoeft, a good
friend of yours and mine, gave the Langbein Lecture at the AGU, and John
comes out publicly and says, oh, well, performance assessment for waste
repositories is just bonkers, it's a terrible thing to do and that's not
what we should be doing. And when I talk to John, he doesn't tell me
what we should be doing, by the way.
But I sense that there is some pressure to dismiss the
apparatus, what John likes to call the logic engine for doing these
analyses.
So you don't feel any of that pressure?
MR. SAGAR: Well, we feel the pressure. There is another
common friend, Shlomo Neuman, who says the same thing.
MR. HORNBERGER: Yes, Shlomo does the same thing.
MR. SAGAR: And I'm sure there are other people, if we ask
them. I mean, there are people who would like to do very detailed
analysis of every process involved in the repository safety. I simply
say that, well, we do some of that detailed analysis at the lower level,
at the process level, to try to understand what really contributes to
the ultimate risk, and that as practical project people, we do have to
concentrate and focus on those items that do contribute to risk and
understand those more.
I mean, there are all sorts of scientific issues and
questions that we could try to answer in this program, but no program,
even if it is a $7 billion program, can answer all of them. We have
every discipline involved in here. We have hydrology, geochemistry,
rock mechanics, you name it, we have it.
So if I'm going to write a book on all of them separately,
well, that's almost impossible.
I've heard the same comments. I think the comments stem
from the fact that you simplify too much; that by the time you get
to the system level code, you have developed tables, you have developed
simple regression equations, you have developed simple input/output
relations, and those are not real world relations that explain the
process.
Well, they're right, they don't. But can they bound, can they
estimate, can they calculate the risk? I believe they can. I used to be
the same way, by the way, John. I would have said that ten or 15 years
ago myself.
But I think John and Shlomo ought to do some PAs before they
make those kinds of comments. I know what they are doing and they are
contributing to science, I'm not making fun of them, but unless you have
this multi-disciplinary project and you are forced to say give me your
decision, is this okay or not, you can't be entirely dependent upon any
one of those disciplines.
So, yes, we do feel pressure because we get criticism from
all over the place, but that's -- I don't see a way out, as you don't
see a way out.
MR. McCARTIN: Budhi, could we add something from
Washington?
MR. SAGAR: Sure.
MR. McCARTIN: One thing. One of the reasons the multiple
barriers aspect is in there, we're looking at it as really a compliment
to the performance assessment. While there is a requirement for
multiple barriers in the rule, we're looking for DOE to, up front,
explain the multiple barriers they have in their system before we go
into the review of the performance assessment, because we expect that's
what will be embodied in their PA, and obviously we can't review
everything at the same level of rigor in their PA.
So what they say they're relying on for barriers is what we
will hone in on, in part, in our review. So I think the multiple
barriers complements the performance assessment and it isn't looked at
as a separate piece, if you will.
MR. GARRICK: I think, except for me, the committee is not
zealous about PA. I think what we're really saying is that the committee
is continuously looking for the connection between what we're doing and the
issue of performance and the issue of what we ultimately have to
calculate and have as a basis for either accepting the site or rejecting
it.
So there has to be an integration process. There has to be
a mechanism of keeping things in perspective, and that's the thing that
we try very hard to be sensitive to, is to avoid getting into a
situation where these issues become isolated or decoupled or separated
from this integrated issue of dealing with what we like to sometimes
call the "so what" question.
So whenever we see anything that looks like something is
being separated, not just because of PA, but because of the fact that PA
has been assigned the responsibility of providing that integration and
providing the logic structure for leading us to an assessment of
performance, we keep pounding on that. And I think, also, we get a
little concerned if we get caught up in issues of probability and
consequences and trying to analyze those issues separate for the same
reasons.
The beauty of the CCDF is that it has removed the issue of
making decisions between low probability and high consequence and low
consequence and high probability. So in that sense, the standard has
gotten us around that problem.
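To make the point concrete, here is a minimal sketch, in Python, of how a
CCDF folds probability and consequence into a single curve; the Monte Carlo
sample of peak annual doses and the dose levels used are invented for
illustration, not taken from the program.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical Monte Carlo realizations of peak annual dose (mrem/yr).
    peak_dose = rng.lognormal(mean=-2.0, sigma=1.5, size=10_000)

    # CCDF: probability that the dose exceeds each observed level.
    levels = np.sort(peak_dose)
    exceedance = 1.0 - np.arange(1, levels.size + 1) / levels.size

    # Low-probability/high-consequence and high-probability/low-consequence
    # outcomes are read off the same curve, so no separate trade-off is needed.
    for d in (0.1, 1.0, 25.0):  # 25.0 stands in for a hypothetical limit
        print(f"P(dose > {d} mrem/yr) = {np.mean(peak_dose > d):.4f}")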
So we are going to be very sensitive to analysis and
activities that we can't somehow put in this picture and say, well,
okay, this seems to be relevant to understanding the overall performance
and relevant to meeting the standard.
This is partly where we're coming from with respect to the
igneous activity problem, as well. So it isn't our intent here to
advocate or become obsessed with one method of analysis over another,
but it is our intent to try our best to keep things in perspective and
until something better comes along, the perspective machine is the
performance assessment.
MR. SAGAR: And we agree with your statement and I guess I
can only repeat what I said before, that performance assessment would
indeed be used to satisfy any one of those four boxes that you see
there. It's the question of do you do a slightly different calculation or
not, can a single calculation give you conclusions for all four.
You know, like the human intrusion, the probability has been
taken out in Part 63. So we cannot assess the risk of human intrusion as
stated today in Part 63. Similarly, for multiple barriers, there is no
requirement that you assess probability and
consequence for each one of them. Part 63 doesn't require that.
So you have slightly different -- it's performance
assessment, but because the requirement is stated slightly differently,
you do performance assessment, but in a different framework. But we
agree with you. It does give you the main thing.
MR. GARRICK: Well, I think my opinion of one of the most
challenging issues associated with this project is the issue of how to
retain a certain flexibility in design as we're progressing towards a
license application, because many of the experts in the engineering
design of complex systems will indicate that one of the real secrets to
being successful on something that's never been built before is to
maintain a certain amount of flexibility and not, while your ignorance
level is pretty high, be forced to freeze the design in all its detail.
I think that puts the regulators in a very difficult
position, but probably in a position where they have to exercise more
creativity than maybe most applications.
And I know this committee has advocated that we be a little
bit more flexible in dealing with design issues than we might in a
typical project, where we have a precedent and the license is
replicated and what have you.
So I would guess that will be one of our major challenges,
and I know Charles has talked about this quite a bit.
MR. SAGAR: I would say, yes, flexibility is important; in
fact, probably a necessity for the success of the project. The only
question is whether enough data can be collected in the short time
between changes of designs for us to be able to say something about the
changed design; is there enough time.
I understand that NRC has the option of putting license
conditions in the construction authorization that would say if X is
verified, then we can go ahead; if it's not, then not. There are options
-- and Bill can comment more on that -- that would allow you to build in
that kind of flexibility.
MR. GARRICK: This also seemed to be one of the undercurrent
issues in the National Academy's report on rethinking high level waste
management.
MR. HORNBERGER: It wasn't an undercurrent. It's pretty
prominent.
MR. SAGAR: Sridhar, come to the mic, please.
MR. NARASI: I don't know whether it's appropriate for me to
make a comment or not. I'm Sridhar Narasi, I work in the waste package
area.
But the difference between why we talk about too much
flexibility affecting the program and normal engineering practice is
that in the high level waste program, we have an engineering system that
is interfacing with a natural system, whereas in most engineering
practice, the engineering system is interfacing with a process that is
relatively well defined.
For example, if you have a chemical plant, you have the
process well defined. Now you design the engineering system to best
contain whatever process fluid you have.
But as in this case, when you alter the engineering system,
when you retain flexibility, the natural system also changes because it
is responding to the engineering system.
So I think there are challenges in the repository program
that are not the same as the engineering challenges one meets in a
normal conventional engineering practice, and that's why we feel that
while we do need to have flexibility, we do need to be able to change
the material, for example, if some new invented material comes along in
20 years, we have to understand that when new material comes in, we have
to think about the rest of the process, too.
So that puts a certain burden in freezing some parts of the
system. That's one of the reasons behind our comment that the design
keeps changing, making our job difficult in terms of assessing the
overall system performance.
MR. FAIRHURST: Your design is related, I think, more to the
waste package, isn't it? What you were just talking about.
MR. NARASI: Right, yes. I'm not talking about the other --
MR. FAIRHURST: That's what I'm saying, that in some ways, I
don't see it at quite that sensitivity.
MR. NARASI: Well, there is some sensitivity there, too.
For example, if you take the concrete liner out and you want to put all
steel liners, it is going to change the performance of many of the
systems.
MR. FAIRHURST: Or put no liners in.
MR. NARASI: Yes.
MR. GARRICK: Well, it's certainly true that this one is
much more dependent upon natural systems and most engineering projects
are not, but it's not without some precedent. The Panama Canal, for
example, was an example of an engineering project that was very
sensitive to natural phenomena and natural environments.
Okay. Any other questions? Staff, any comments?
MR. FAIRHURST: If John Bradehoff were here, I'd ask him
some.
MR. CAMPBELL: I have a question.
MR. GARRICK: Yes.
MR. CAMPBELL: Budhi, in the recent tech exchange with DOE,
DOE gave a presentation in which they were talking about how the 19
principal factors that were a prominent part of the REA and the
repository safety strategy are going to change with the new design and
they were talking about maybe 30 or even more principal factors that may
come out of this with the new design.
As it now stands, in the VA, there was, if you will, a kind
of a one-to-one correspondence between what we used to call pieces, but
which are now subissues, and the 19 principal factors.
How are you guys going to respond to a proliferation of
principal factors in this new repository safety strategy? Are you going
to kind of keep what you've got right now and say those are sufficient
to define and address key issues and then we'll fit their 19 factors in
it or are you going to have to revise your subissues to reflect their
new principal factors?
MR. SAGAR: Again, other people can respond, but my response
would be that we are not rigid on those 14 integrated subissues. If we
need to respond by changing those, adding to those, subtracting from
those, we would do so. Once we look at the 30 or so principal
factors that DOE's safety strategy comes out with, the
first inclination would be to see if they can all fit into the 14 we have.
Well, if they don't, that doesn't mean we won't start some
other work under a different subissue to respond to DOE's strategy.
So I don't think we are rigid in one way or the other for
that. That's the flexibility we would maintain.
MR. GARRICK: Any other comments or questions?
[No response.]
MR. GARRICK: Well, that's remarkable. We're within three
minutes of our schedule. Thank you very much.
MR. SAGAR: Thank you very much.
MR. GARRICK: An excellent presentation. I think we'll
proceed to take our break.
[Recess.]
MR. GARRICK: All right. We'll resume. We're now going to
hear from Bill Reamer on risk-informing the planning and prioritizing
process. Bill?
MR. REAMER: Thank you. I'm Bill Reamer, Branch Chief for
the High Level Waste and Performance Assessment Branch. This is my
first occasion in that capacity as a member of the staff to brief
the committee. I was here a couple of times before when I was in the
Office of General Counsel. I'm really looking forward, in my new
position, to interacting more with the committee.
I'm going to be talking about risk-informing the planning and
prioritizing process. In the interest of full disclosure, I need to say
this is not a process that I have actually experienced firsthand.
But I will and I'll describe it the way I understand it. I
think what that means for you is that if there is a lack of clarity, it
may well be because of the presenter rather than the process.
In any event, we'll get answers to your questions. Please
don't hold your fire by virtue of the fact that you're dealing with
someone who is maybe not as knowledgeable as they ought to be.
But my message today basically is that the prioritizing
process does exist. It's risk-informed, but it also is based on many
other factors that I will go through. It's responsive to new
understandings about performance, but like anything, it's a process that
can be improved and we look forward to working with the committee to
improve it.
Which brings me to the recommendation, slide three,
performance assessment should be used in prioritizing. I think it is.
The technical assistance program should adopt a risk-informed
performance based approach. I think it does. And a formal and
transparent process should be developed for identifying the most
important areas for technical assistance.
We do provide the committee, I think, a good deal of
information in this area, but perhaps there is more that we could
provide. Perhaps there is something that you're not seeing that we should
do a better job of helping you understand.
Let me just mention a couple of factors kind of right at the
beginning, which I think don't change our motivation and commitment to
be risk-informed, but perhaps they may complicate our task a little.
The first is the relative lack of information, risk
information that we face. The second is the added dimension of the site
being not just a passive host, but really part of the system, what is
its contribution to performance. And the third is the extrapolating of
short-term data to the long timeframes involved, the problems that that
presents to us.
Let me describe first, kind of in general, the prioritizing
process. It's basically the budget and operating plan process. This
slide captures, in really rough fashion, the budget formulation part of
the process. It starts with, it's based on, it's premised on, it
proceeds with the information on risk that's available to us. Of
course, this year, we also had the Arthur Andersen process that was
directing us through the budget formulation, as well, and I think in
July you will get a presentation on that process.
We receive certain kinds of assumptions, budgeting
assumptions that we have to deal with. There are statutory activities
that we are required to prepare for. It's things like the draft
environmental impact statement that we have to review, the commitment
from Arthur Andersen to do more in the area of public outreach. But
these are kind of givens that we enter into the process with that we
have to deal with.
Against that background, then, we're preparing our, in a
very general sense, our budget. We're identifying our program goals.
We are describing our planned accomplishments. We are identifying
activities to carry those accomplishments out.
All of that is happening to formulate the budget and that
input, that work at the staff level then goes through the High Level
Waste Board, which now will involve the Deputy Director of the division,
myself, and the three section leaders from the High Level Waste Branch,
and Budhi and Wes from the center.
But that review process exists. The interaction goes on.
The question gets asked: what does the repository performance and risk
information tell us about where we should be spending, in general, where we
should be budgeting our resources.
The budget formulation decision, of course, is made by the
-- at least from the staff level, is made at the division and office
director level.
Once we have our budget input, then we move to budget
execution, budget implementation. This is where operating plans get
developed. Operating plans at an office and a division level, but
really operating plans even more detailed at the key technical issue
level, where KTI teams identify the activities that they intend to
undertake to -- which, when accomplished, will meet the operating plan
for the division and the office.
KTI tables and the center operating plans, I think these are
documents that we do give to you, at least that's my understanding. But
this is kind of a second cut where the risk judgments that inform the
budget formulation process get looked at again in terms of actually
identifying the activities that we are going to undertake in a
particular year and what money we're going to spend on those.
Then there is always the ability to correct, to reprogram
along the way if new information justifies that change.
The prioritizing process is -- I'm going to describe it in
four steps that kind of proceed from the broad to the more specific. We
work against the background of key technical issues that were identified
in a prioritizing process themselves, based on an analysis of Part 60
and identifying those technical areas critical to making a compliance
determination that needed our attention.
Against that background, the results of DOE and NRC performance
assessments provide the information to identify the aspects of the
program having the most significant risk components.
We, in theory, could drop KTIs or add KTIs, but the reality
has been really more in terms of adding subissues to certain KTIs. The
repository design KTI is an example where we have added the pre-closure
subissues and the ISA based on this process I am describing.
Criticality is now a subissue in three KTIs; the container
life and source term, the near field, and the radionuclide transport KTIs
all have new subissues dealing with the criticality issue.
In any event, at a KTI level, that's step one. Then within
the KTIs, each year, we bin each KTI in terms of high, medium, low.
That is a judgment that is certainly informed by risk information, what
is the contribution in the base case to risk for a particular KTI, and
what's the sensitivity, how does the contribution vary.
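As an illustration of that kind of binning, the following sketch ranks a
few issues by an invented composite of base-case risk contribution and
sensitivity; the names, numbers, and thresholds here are hypothetical, and
a real ranking would also weigh the non-risk factors discussed next.

    # Hypothetical (contribution to base-case risk, sensitivity index) pairs.
    ktis = {
        "container life and source term": (0.40, 0.8),
        "igneous activity":               (0.25, 0.9),
        "thermal effects on flow":        (0.10, 0.5),
        "repository design":              (0.05, 0.3),
    }

    def score(contribution, sensitivity):
        # One of many possible composites; the weighting is arbitrary here.
        return contribution * (1.0 + sensitivity)

    for name, (c, s) in sorted(ktis.items(), key=lambda kv: -score(*kv[1])):
        s_val = score(c, s)
        ranking = "high" if s_val > 0.40 else "medium" if s_val > 0.10 else "low"
        print(f"{name:32s} score={s_val:.2f} -> {ranking}")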
But there -- I guess it's important for you to know that
there are other factors, as well, that come into the decision of whether
a particular KTI is given a high, medium or low ranking and these
include could it be an issue in a licensing proceeding. We need to keep
our eye on that ball, as well.
There are certain areas where we simply have to move forward,
have to act. We need to have a Part 63, a regulatory framework. We
need to be ready to respond to comments, with comments on the EPA
standard. Each KTI needs a certain minimum investment so that we can be
in a position to prepare a review plan.
There also is the potential for new information; in this
case, it may well be, for example, design information to come into the
calculus of prioritizing a KTI; what is the likelihood that a particular
KTI raises issues that pose little or no potential for
engineering mitigation, such as the volcanism consequences KTI; where is DOE
with respect to a particular issue in their own repository safety
strategy, that's an element we need to have an eye on, and what work do
we need to get done in order to make our decisions and how long is it
going to take to do that work.
Then within each KTI, we ask the KTI teams to identify the
activities that they will undertake, to prioritize
those activities, and to take into account the significance to
performance and issue resolution when they do that. We have the
example in container life and source term, where we
pulled back on activities involving carbon steel and we've moved
forward on activities with respect to C-22.
We're also asking KTIs to be aware of what is DOE's need
with respect to guidance in areas and we expect the KTI teams will also
consider factors like efficiency in getting their work done and whether
certain issues might present a higher likelihood of success and be given
priority for that reason.
The fourth layer is actually funding the activities and it's
not just a matter of drawing a line and funding the activities above the
line; there are other considerations. We may want to get an
issue closed and have the -- or a subissue closed and have the ability
to do that with relatively little expenditure of money.
There are programmatic considerations apart from this. I
mentioned the EPA standard and the need to allocate resources in that
area, notwithstanding the lack of a standard, a proposed standard that
we can comment on.
Funding decisions also will reflect the status of the
technology. Long lead time items may get priority just because of that
fact alone.
We need to consider available budget; lab work may be
cheaper and get funded over field work, which is more expensive. We have
our own, of course, need to maintain our competence in all areas to be
ready for the licensing proceeding and in some cases we may want to
devote more resources to bolstering our case or to eliminating
unnecessary conservatism from a particular analysis, and then, finally,
of course, there may be a logic to actually sequencing activities where
low items need to get done before high priority items can go forward.
Now, I mentioned earlier the process of prioritizing each
KTI into high, medium or low, and your slide 12 is information on that.
I won't spend any particular time on that, but if there are questions,
we can certainly get to those.
Then I believe we were asked to provide an example and the
repository design KTI is the example we chose. The first slide, slide
13, the timeframe here is FY-96 to '98, where the information is saying
that repository design is a low contributor to post-closure performance.
We also think that pre-closure activities, which are similar
to activities we license elsewhere, can be deferred. We're facing
reduced budgets, we see DOE as reducing its own funding in the area, and
in that timeframe, we assign a low priority to the repository design
KTI.
How does that get implemented? It got implemented through
termination of our research program. Center support was terminated,
core staff reduced in this area. Our own oversight of DOE design, the
design program was cut back, and what minimal effort did continue was
carried on in another KTI.
More recently, in the timeframe of FY-99 and 2000, now we
see the information telling us that there are issues under this KTI that
are relevant to performance. We see DOE's own activities in design
accelerating and the design alternatives coming. Our budget is
increased in these years and the combination leads to a higher priority
being given to the repository design KTI.
How does that get translated into actions? Well, we see it
in our own work, initiating work on the ISA for pre-closure, in the
post-closure area, in doing analyses with respect to repeated seismic
loading and thermal mechanical effects, stability. Our increased
efforts in the area of overseeing DOE design and QA, which we've talked
about a little bit earlier, in updating
modeling to include effects on waste package corrosion, and in participating
in the international code validation project, specifically with respect
to DOE's involvement in that project for the Yucca Mountain drift scale
heater test.
So the conclusions are that we have a prioritizing process.
There are a lot of factors that are involved. It is risk-informed, but
there are other factors that are in there playing, as well. It's a
process which is responsive to change to new information.
What's our path forward? As I said, we can always improve
in this area and we're looking forward to the committee's help in
helping us to improve. The other thing that's pushing us is the
implementation of the Arthur Andersen recommendations, which pretty
clearly focused on finalizing Part 63, the risk-informed regulation,
completing the Yucca Mountain review plan, and getting ready to
implement it, and maintaining and using the PA tools to focus our
program.
So that completes my presentation. We can start on the
questions.
MR. GARRICK: We'll maybe work at the other end. Charles?
MR. FAIRHURST: I was interested, right in the beginning,
you said the geosphere was an integral component of the system and not
simply a passive host.
Do you want to expand a little bit on that, as to what you
mean by that?
MR. REAMER: Well, I guess what we're just suggesting there
is the added dimension in a repository system based on multiple
barriers, where part of the contribution to performance is going to be
from the site, that that's perhaps distinguishable from storage of spent
fuel, where you don't really look to the site to provide that
contribution.
Anyone else, Wes or anyone from the staff who wants to jump
in, please. I want to be sure that you get your questions as completely
answered as possible.
MR. FAIRHURST: This is not a reference to seismicity or
anything like that or volcanism.
MR. REAMER: No. No. I was not intending to refer
specifically to that.
MR. FAIRHURST: I don't quite follow it, but it's not that
crucial.
MR. McCONNELL: Keith McConnell. Maybe I can help. I think
it's more of a focus on the interaction between the near field
environment and the waste package. You can't view the waste package in
isolation from the geology.
MR. FAIRHURST: Okay. Chemistry.
MR. McCONNELL: Chemistry, yes. Chemistry.
MR. FAIRHURST: That's not the natural system, is it? I
mean, for example, putting in concrete liners and changing the chemistry
that way, but it's the waste package that is the thing that's causing
it.
What you're saying is it's the combination of what you put
-- the disturbance is still an engineered manufactured thing. It's not
the natural system. I think it's just semantics.
MR. McCONNELL: Right, right. One may be dependent on the
other, but still you have to view it in terms of the system.
MR. FAIRHURST: You're saying that there are interactions.
MR. McCONNELL: Right.
MR. FAIRHURST: I understand. Thanks. And the other one,
the ranking on potential licensing vulnerability, that's another one.
MR. REAMER: I think that's just saying is this an issue
that could come up in the proceeding, notwithstanding that we think,
based on our own work or DOE's work or all the information that we think
can kind of be put to bed, but we think that there is a high likelihood
that we're going to be engaged on this by others in the process, and we
want to be aware of that and take that into account.
MR. FAIRHURST: You mean other than DOE.
MR. REAMER: Anyone that's involved in the process.
MR. FAIRHURST: I'm trying to think of what that means. Do
you have an example of something where you may say you may be
vulnerable?
MR. REAMER: I'm not sure vulnerable is the right term.
What I see is more -- it's asking the question, do you have any
information, do you have an opinion on the likelihood that this is going
to really be an issue in the proceeding by perhaps someone else and do
you feel comfortable in where you are in being able to respond to that.
MR. McCONNELL: Perhaps I could provide an example for this,
too, and that may be faulting, the actual faulting at the site. While
in risk terms faulting itself may not be a significant contributor to
risk, we know that there -- as people have told us and as the center has
identified, there are at least 32 active faults out there.
So the information on faulting needs to be fairly
comprehensive, because others in the process have identified that as a
concern. I think we need to have some focus on that. It might not be a
significant contributor to risk, but we have to be perceptive enough to
realize it could come up as a significant factor in the process, the
licensing process.
MR. FAIRHURST: But you know where those faults are now.
MR. McCONNELL: Right.
MR. FAIRHURST: And you know that they're active.
MR. McCONNELL: Right.
MR. FAIRHURST: So you know, from an engineering point of
view, what you can do as far as placing the repository with respect to
that, right?
MR. McCONNELL: Right.
MR. FAIRHURST: So are you saying it's because an intervenor
or somebody may ask you to provide particular detailed justification of why
you took a particular position or not?
MR. McCONNELL: Yes, in essence. There are other factors
other than interaction with engineering. There's also the issues that
Jerry Zamansky has raised about seismic pumping and the effect of the
combination of flow and tectonics.
So while they might not be significant contributions to
risk, per se, as we've identified it in our ranking of various
subissues, it could come up in the licensing hearing and it's something
that we need to be prepared for during that process.
MR. FAIRHURST: Because in the IRSR process, you presumably
have come to some agreement with DOE on what are the key issues.
MR. McCONNELL: Yes.
MR. FAIRHURST: If you have not come to an agreement, you
have at least identified --
MR. McCONNELL: Right, and we've compared them.
MR. FAIRHURST: Identified that these have fallen by the
wayside.
MR. McCONNELL: Yes.
MR. FAIRHURST: You're saying that you're sort of preparing
for someone not in that dialogue to bring it up.
MR. McCONNELL: Right.
MR. FAIRHURST: Okay.
MR. GARRICK: Ray.
MR. WYMER: In one of your earlier viewgraphs, you
identified the particular engineering challenges; exceptionally long
periods of performance, exceptionally large spatial extent, and high
uncertainty in features, events and processes.
I couldn't agree more that those are severe engineering
challenges. What I don't have a clear understanding of is how you deal
with it. It's almost a philosophical question, of course, and the
philosophy of the approach, because you're never going to really narrow
these down to where you'd like to have them.
So could you say a little bit more about your general
approach to these challenges?
MR. REAMER: I think that may be attributing more than I
intended in the slide. What I'm just saying in the slide is that there
are complications, aspects of the tasks that we face that are maybe
different from, for example, the storage task.
MR. WYMER: Well, that's true, but you do have to deal with
them.
MR. REAMER: We do have to deal with them.
MR. WYMER: And if you could say something about the
philosophical stance you are taking in dealing with those, it would be
helpful.
MR. REAMER: Could you be a little more specific?
MR. WYMER: Well, exceptionally long periods of performance;
for example, there you have the outstanding example is probably
corrosion, and that requires, as somebody mentioned earlier, I think
Budhi did, the extrapolation, the long-term extrapolation into the
future based on short-term experimental results and extraordinary
complexity of the environment, with all kinds of subtleties that will
affect corrosion, evaporation, concentration, bacterial action, if any.
You're never going to really resolve that. That's a
specific example of the long-term period of performance. So what
position do you take?
MR. REAMER: I think the first thing we want to see is how
is it going to be handled by the applicant or the potential applicant,
because we need to have something to respond to.
MR. WYMER: Yes, but you have to have something in the back
of your mind that you think is rational to start with. Okay.
MR. McCARTIN: Bill, could I offer a thought?
MR. REAMER: Yes.
MR. McCARTIN: In terms of our TPA code and the approaches
that you'll hear about later on in the day and tomorrow, I think where
we have -- where there appears to be a lot of uncertainty, we certainly
have an approach that we think is conservative because of that
uncertainty.
And there are a lot of areas, you're right, of -- and some
of that is differences in our approaches between ourselves and the DOE
and as Keith indicated, we're ready to analyze several things, like
faulting. We have analyzed undetected faults in the PA and it doesn't
affect performance a lot. So that gives us some confidence that
faulting -- if we haven't picked up all the faults, it doesn't appear to
be a serious problem.
Also, we've looked at juvenile failures. We tend to have a
much higher number of juvenile failures than the DOE. Maybe we're more
conservative.
So there are a lot of areas where, as the
uncertainty gets higher, I think we need to look at some of the what
ifs, and that's what our code does. As you will see in the next couple
of days, we have a lot of different ways to look at the problem, so that
we only have to resolve areas where there is a big impact on performance.
MR. WYMER: It seems to me you're sort of in a box between
uncertainty, on the one hand, and excessive conservatism on the other
hand.
MR. PATRICK: If I could follow up on that. Wes Patrick. I
think that is a dichotomy, but I don't think it's a dilemma.
MR. WYMER: I see. Okay.
MR. PATRICK: And let me make a fine distinction there. I think
the comments that you made earlier, Dr. Wymer, are correct from a
scientific point of view. We won't probably ever have the uncertainties
narrowed down to a level of comfort from a scientific or perhaps even an
engineering point of view.
But we can get there and I think that's really the thread
that we're trying to weave through this, is that on the one hand, we try
to have as much as possible of the work be risk-informed, as Bill has
pointed out, but on the other, we know that not every member of the
public, not every member of the scientific and engineering community
buys the risk paradigm, for a number of reasons, one of which is that
they don't have any say and there is a whole body of literature dealing
with that.
So what do we do in the face of that? We try, first, to
bound as much as we can and if, with a conservative, perhaps even an
unreasonably conservative bounding solution to a problem, we still see,
from a regulatory point of view, that the repository will function
within the requirements of the regulation, from the regulator's point of
view, we're done.
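A schematic of that bound-first logic, with invented numbers; the release
rate, conversion factor, and limit below are placeholders, not program
values.

    # Bound first; refine mechanistically only if the bound fails the limit.
    LIMIT = 25.0  # hypothetical regulatory dose limit, mrem/yr

    def annual_dose(release_rate_ci_per_yr, mrem_per_ci):
        return release_rate_ci_per_yr * mrem_per_ci

    # Deliberately pessimistic (bounding) inputs.
    bounding = annual_dose(release_rate_ci_per_yr=1e-3, mrem_per_ci=2e4)

    if bounding <= LIMIT:
        print(f"bounding dose {bounding:.1f} mrem/yr meets the limit; done")
    else:
        # Otherwise pare away conservatism with a more mechanistic model.
        mechanistic = annual_dose(release_rate_ci_per_yr=2e-4, mrem_per_ci=2e4)
        print(f"mechanistic estimate: {mechanistic:.1f} mrem/yr")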
Now, the Department of Energy may want to come in, because
they have to pay for this thing, the utility owners have to pay for this
thing, this repository, they may want to move that boundary a little
closer to reality.
We may want to move that boundary a little closer to reality
so that we can get a better understanding of those levels of performance
that we would truly anticipate. It may be that our bounds are so broad
that we're not comfortable with them. In those cases, we take a more
mechanistic approach.
And in each of the KTIs that you're going to hear from as
the next couple of days play out, they're going to be giving you some
examples of where that happens. It will be risk-informed. In fact, I
think as Budhi pointed out this morning, we're very close to being
risk-based, although the Commission has a position not to make that
final step to a truly risk-based situation.
But, again, just the short story is we start by bounding and
then if that looks to be a very expensive solution, if that looks to be
something that we feel the Commission would not ultimately be able to
say we have, quote, reasonable assurance, unquote, then we begin to pare
away by taking a more mechanistic approach to whatever features, events
and processes might be involved, keeping always in our mind, as Bill has
pointed out a couple of times here, that there is a large body of
stakeholders out there and they may want additional proof. That's what
he calls a licensing vulnerability.
In those cases, we would go beyond what a risk calculation
or a visceral determination, as a scientist or an engineer might suggest
is necessary.
I don't know if that helps at all, but you were asking for a
philosophical underpinning.
MR. WYMER: I just wondered if you had taken a -- I wondered
if you had adopted a philosophical stance.
MR. PATRICK: Well, that's it. Start with the bounding.
Tim may want to chime in and clarify that, but we start with bounding,
because we are resource limited, and then come in with a more
mechanistic approach as we feel is appropriate.
MR. WYMER: Okay. Well, I'll wait till I hear the
discussion on KTIs.
MR. McCARTIN: May I make a comment from this end? One of
the points of the whole question, which I think has been kind of
ignored, is the fact that on this project, we really have placed a
tremendous reliance on natural analogs to get some of the data. This
goes back to some of the original concerns on the exceptionally long
timeframes, spatial concerns and that.
It's the whole aspect of the natural analog that has played
a tremendous role in this program.
MR. WYMER: Okay. Fine. That's all I have.
MR. GARRICK: George?
MR. HORNBERGER: Bill, what you pointed out in
one of your early slides was some of the recommendations that we made,
and, to paraphrase, you had said that PA is used in prioritizing, that
risk does inform the TA program. And the third one I wasn't quite as
clear on, but I think that what you basically meant to say is that you
do have a formal and transparent process and, of course, this is the
difficulty that we perhaps have been having, and that is the
transparency of the process.
And I recognize it's difficult and I don't even know if
this is a fair question, but I'm going to ask it anyway. Bureaucracy,
and I'm not using that in a judgmental way, organizations tend to be
self-sustaining. That's just the nature of organizations.
If you look at universities, they are the worst example of
bureaucracies. To get rid of a department at a university is just
unthinkable. It doesn't matter what the subject matter is. The experts
are there and they will claim that it's important.
So you don't have the luxury of being able to continue a
department of classical studies long beyond its useful life.
On the other hand, you do have the management difficulties
of saying, look, we have a very good group of people here and we know
we're going to need them in the future and just because this year the
priority is low in terms of risk, we certainly can't just disband the
organization.
So I see this as creating some kind of tension for you, and
it's under your "resource allocation (apart from risk)"
bullet. The difficulty that I guess I have is how you work to make
that transparent in your decision-making process, so that people know
why you're making these decisions, how you're making them, what your
long-term view is.
I think that this is perhaps what we haven't been able to
come to grips with on the transparency issue.
Do you have any thoughts on how you could help us?
MR. REAMER: When I looked at the third recommendation,
that's where I felt kind of like, gee, I think I need more from the
committee kind of telling me what is it that we're not doing that you'd
like to see. To some extent, the discussion may have to take place in
the context of specific KTIs, where you see the priority that we've
given it, you see the money that's being spent, you see the activities.
We are providing you that information and then perhaps a give-and-take
can help elicit, well, what really is behind this funding decision, what
really is behind that priority, why are you spending money in this area
given what we understand to be the effect on performance.
So, in a sense, I'm agreeing
with you that as to the third recommendation, I'm not as
confident that we are responsive to that recommendation as
I am on the first two, where I think we may not be doing it well, but
are trying to do it.
So I kind of -- a favored lawyer technique -- turned the
question back to you.
MR. HORNBERGER: And I understand that trick. I use it all
the time myself.
MR. REAMER: Is that a trick?
MR. GARRICK: It's a ploy.
MR. HORNBERGER: That's all I have.
MR. GARRICK: All right. Just a couple things. On
viewgraph four, you talked about the factors influencing scope and
priority and the first bullet was lack of available failure risk
statistics.
I guess I just have more of a comment here than a question,
in that it is true that at the total system level, there is a tremendous
lack of information of the type that would allow us to do specific
subsystem analyses.
On the other hand, one of the things that the field of
reliability engineering has taught us is that that argument is
sometimes over-used and sometimes even a copout, on the basis that with
most systems, even if they're one-of-a-kind or
first-of-a-kind, once you've decomposed them into their component parts,
you see all kinds of opportunity for data collection and data analysis
that you don't see at the total system level.
So my comment is, really, I hope that somebody has looked at
this system phase by phase or component by component or in groupings
that make sense, from an information and data standpoint, to really get
a handle on where the data is missing and how that influences
the uncertainty as a function of the component of the total system.
Again, I'm reminded of about 30 years ago being asked to do
a reliability analysis of an 800-megawatt turbine generator, for which
there was none in the world, and, on decomposing the system into its 26
subsystems, found an enormous amount of relevant information.
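The arithmetic behind that decomposition argument is simple; here is a
sketch, assuming independent subsystems in series and invented failure
rates, none of which come from the turbine generator study itself.

    import math

    # Hypothetical constant failure rates (per hour) for three of the
    # subsystems exposed by decomposing a first-of-a-kind system.
    failure_rates = [1.0e-5, 3.0e-6, 8.0e-6]
    mission_hours = 8760  # one year of operation

    # For independent subsystems in series, failure rates add.
    system_rate = sum(failure_rates)
    reliability = math.exp(-system_rate * mission_hours)
    print(f"estimated one-year system reliability: {reliability:.3f}")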
So I hope there is a lot of that going on. I hope that
people are looking at the environment, for example, or the waste package
on the basis of what we do know, which is probably a great deal, and
trying to pinpoint the data information rather than being too casual
about that as a major obstacle.
The point of the whole comment is that data is often used as
the principal reason for not doing something and seldom is the real
reason for not being able to do something.
Let's see. The other thing I wanted to ask is, on viewgraph
seven, step one, still, you talk about analysis of Part 60.63
uncertainties as a factor in evaluating issues most important to
repository performance.
Can you elaborate on that a little bit? I think you have
partly.
MR. REAMER: Yes. I was really intending more a historical
comment here, let's remember where the KTIs came from initially. They
came from an analysis of Part 60. They tried to identify from Part 60
kind of key issues affecting compliance, affecting a determination of
compliance, key technical issues. So there was already a prioritizing
process that went on in developing the list of the ten KTIs.
So what we're dealing with today is more based on
performance assessment information, how do we make adjustments there.
In theory, we could add or subtract KTIs. In reality, what we're
talking about are adding or subtracting subissues in certain KTIs.
MR. GARRICK: Is that why that's on the list on viewgraph
12? Is it now a KTI in and of itself, Part 63?
MR. REAMER: Part 63 is a KTI, yes.
MR. GARRICK: Any other comments, questions? Yes, Charles.
MR. FAIRHURST: One that relates to what you were talking
about here, the 800-megawatt turbine generator. I'm trying to see how to phrase
this. Somewhere in these, there is the problem of maintaining
competence of people, keeping people actively involved and the priority
of their particular interest may go from high to low to medium and
you're talking about having cut back at one point and realizing later
you needed to give added emphasis. It's not so easy to get back up to
speed.
Then another -- what is NRC's role in this? On the one
hand, you want to bring people in who are competent and bright and you
want to give them things to do, so that their mental abilities don't go
to sleep.
At the same time, NRC has a role as reacting to what DOE
proposes. Where did you find -- maybe it's not a question to ask you.
Where do you find the balance between actually coming up with your own
ideas of how things should be done in Yucca Mountain and, on the other
hand, letting DOE make all the calls?
You see it's different in different areas. In some places,
you see very active contributions coming from the NRC staff.
MR. REAMER: I guess, conceptually, I don't see how we can
be an impartial reviewer of a license application, where we have been a
sponsor of a particular approach in the proposal. So I would see our
role very limited in the pre-licensing phase in terms of suggesting and
much more defined in terms of preparing ourselves to review a proposal
and responding to what we see as a proposal or a potential proposal or
whatever comments we have that would be helpful to the potential
applicant.
MR. FAIRHURST: It's difficult.
MR. GARRICK: Any other questions? Staff?
[No response.]
MR. GARRICK: Thanks, Bill. Welcome aboard.
You need no introduction, but will you introduce yourself
anyhow?
MR. PATRICK: I will indeed. I'm Wes Patrick, President of
the Center, and have the task of explaining to you a little bit more
about the development of capabilities here at the center. That's going
to be our predominant focus. My understanding is you have ample access
to information about the staff of the NRC, so as Bill and I have
discussed this in preparing, we're going to focus primarily today on the
development of the capabilities here at the center.
Having said that, though, I would point out that we have a
number of features of the way we operate that I think are very
beneficial to the program overall and a number of NRC staff have taken
advantage of. Most specifically, we have a very active staff exchange
program, where our staff go up and spend two to six weeks at NRC and
they likewise come down here.
It's important for both of our staffs to be able to do that.
We benefit most from learning more about how NRC operates as an
organization and learn some of the more programmatic things that we need
to know to be able to serve them effectively.
They, on the other hand, have opportunity, and it speaks to
a point that was made just a few minutes ago, they have the opportunity
to come down here to engage in work in our laboratory areas, to
accompany our staff members out for selective field work for
confirmatory studies and the like, which I think has been very, very
helpful and very beneficial.
The presentation outline I'd like to follow today is going
to focus predominantly on organization and staffing. We're going to
look a little bit at our approach to problem-solving, as well. Let's
see here if we can find a bullet here. The approach to problem-solving
throughout the NRC program.
We're going to be looking at capabilities here at the center
within that overall problem-solving approach. It's a four-step process
that we follow there.
The overview of the programs that we support at NRC is going
to be given predominantly to give you a little bit of background of the
breadth of the activities that we do and you might ask why is that
important within the context of ACNW's review. It's important, as I
hope I'll be able to explain to you, because it exposes our staff to a
broader range of programs at the NRC and enables us to staff up in some
areas where otherwise only fractional full-time equivalents would be
available.
The first two presentations that you heard today focused
predominantly on what we do and why we do it; priorities of the work and
so forth. We're going to focus now on the people, the process we use,
some of the aspects of facilities, hardware and software and so forth
that are made available to be able to execute the program of work that
has been ranked according to the prioritization system that Bill Reamer
just described.
In covering these topics, I'm going to attempt to address
some of those issues that are noted in your self-evaluation and in some
of the letters that you have provided to the Commission and to the staff
over the last year or so.
There are several of these topics that have been recurring
themes and I'm hopeful that as our discussion plays out here, if the
questions aren't answered, at least we'll be getting closer to answers
in some of those areas.
It's important to understand, I think, a couple of things as
we start out. One, the organizational role of the center. This is not
particularly new material that's shown here on this slide, but I think
it's important for us to go back over it once again. A number of
committee members have changed since we've last had an opportunity to
brief you on the subject, and I just want to hit on them quickly.
The first is that we focus explicitly on NRC's mission.
These, by the way, these first bullets are things that are in the
charter, in the official statement of how the center is permitted, on
one hand, and constrained, on the other, in its operations.
That explicit focus on NRC mission is extraordinarily
important, both for selection of center staff and maintenance of center
staff, and also in the selection of experts, consultants, subcontractors
and the like, and we'll talk about that a little bit as things go on.
Second, we're charged with assuring that there is a
long-term maintenance of technical assistance and research capability
provided to NRC. In all of the program areas that need to be supported,
the charge of a Federally-funded research and development center is to
be essentially self-sufficient and self-sustaining as we play out over
time.
That is a process that is regularly reviewed, audited by a
variety of organizations, and is a requirement under the Office of
Federal Procurement Policy. So we take it very seriously. It is a
foundational aspect of our work, both legalistically and to make us
programmatically responsive to what NRC has charged us to do.
The fourth bullet there, we are also charged with providing
a very centralized capability. We found in the early days of the
program, and for those who aren't familiar, we were originally set up in
October of '87, so we're about to celebrate our 12th birthday as an
organization. From those very early days, we saw that bringing the full
scope, the full breadth of capabilities within one organization had a
very powerful effect on integration.
People talk about integration a lot, but we find that as
programs get bigger, as they become more geographically dispersed, they
become intrinsically difficult to organize, to integrate, and to
operate.
Now, my friends at Rockwell who were involved in developing
the space shuttle, of course, always challenged that such a simple
little program like the repository program would present such great
challenges to us, but I think for those of us who have watched how the
regulatory part of the program has operated over the years and certainly
-- and, again, no criticism intended here -- I think it's really a fact
of life with regard to the source of people who work on this kind of a
program.
When we look at the Department of Energy side, we see the
tremendous challenges that they have had in integrating across their
organizations as time has played out. So that fourth bullet is one that
we do not take lightly. It takes an extraordinary amount of NRC staff
time, managing us and cooperating and working with us, and an
extraordinary amount of our time as well.
Key aspects, and I'll be talking about those a little bit
later, the state-of-the-art laboratories and the unique field and analog
test sites, we think, are a very important aspect of those capabilities.
Much of our discussion so far this morning is focused on
performance assessment and people usually push the fast-forward button
there and get to the point of doing the actual numerical calculations.
Well, there is a lot of data, as Dr. Garrick pointed out in
his recent remarks, there's an awful lot of data that has to underlie
the development of those models, the populating of the databases that
are going to be used for running the ultimate calculations.
Then, of course, as was discussed a little bit earlier on in
Budhi's presentation this morning, there are the more mechanistic
process level modeling activities that also require a good deal of data.
It's equally important to understand what the limits on our
role are as well. We evaluate, we probe, we challenge, we, where
possible, confirm what the Department of Energy is doing with regard to
site characterization. We don't do site characterization. We probe, we
challenge, we, where possible, confirm what the Department of Energy is
doing with regard to design. We don't do design.
Our senior management on the NRC side continually poses the
question to us, are you carrying the Department of Energy's water. It's
an extremely sensitive area. They're concerned. Another statement we
often hear is: are you getting out in front of the license applicant, are
you ahead of the Department of Energy in this regard.
I think Dr. Fairhurst hit on it in his remark just within
the last ten or 15 minutes here, that is a challenge, that's a balance
point that we have to strike.
Why do I mention it here in the context of capabilities? It
affects, in a very direct way, the kinds of people we are able to
attract, the kinds of people we need to attract, and that we need to
maintain within the center to be able to support the NRC.
Hot design people want to continue to be in hot design
positions. We don't do design. NRC doesn't do design. Consequently,
our tendency has been to move more in the direction of design analysts
rather than people who have fundamental backgrounds in design.
There are some things that we have been able to do and are
on the verge of doing that try to strike a compromise there, try to
bring in more of the direct design expertise to be able to, within the
context of this organizational role, try to be able to go a little bit
further in the direction of design, and I'll be touching on those a
little bit later on and I'm expecting at least a few questions and
comments during the question and answer part.
There are a variety of areas also where controversy is
expected and in those areas, we do probe the limits. We not only look
to what is often called confirmatory work, but we move into more of an
area that one might call exploratory work. These would be areas where
we feel, based on our calculations, based on perhaps some limited
evaluations of the literature, that there may be a significant risk in
an area.
We don't have the data to run the risk calculations.
Perhaps we do some bounding calculations that give us some further
insights that this may be an area worth exploring.
In cases like that, where we're not able to carry the day
with the Department of Energy, we're not able to convince them that this
is an area where work is needed, we'll move a little bit further. We'll
move a little further into the design analysis area. We'll perhaps do
some laboratory testing on a phenomenon that people have not evaluated
as thoroughly as we would think appropriate.
We may move to the field and do some studies there. This is
what we would call exploratory type of work. Both of those are
recognized by NRC as being appropriate, as being necessary.
Interestingly, they are two items that are in concert with what NRC has
done in the reactor program; primarily, to focus on confirmatory,
checking out, seeing whether what the applicant says is so, but, second,
in those areas of potential safety vulnerability, to probe, to explore
some new areas and to determine whether those are, indeed, risk
significant before closing the door on them or going to the Department
of Energy, making the case that this is an area that needs some
additional study.
So this is also an area where we risk-inform the process and
we use those risk insights to somewhat modify the organizational role
that the center normally fulfills with respect to the NRC.
The second chart, probably unreadable on the screen at this
resolution, shows our basic organization here at the center. You will
note that there are five what would be called traditional staff level
sorts of positions. These are director type positions that are staff
functions.
Unlike most organizations, however, those staff individuals,
with the exception of quality assurance and the administrative area,
those staff level positions are directly involved in managing projects.
The lower tier shows the five elements, as we call them
here, that are involved and responsible for conducting the preponderance
of the work that's undertaken here at the center in support of NRC.
In the next several slides, I want to talk about the sources
of expertise that we do bring to bear on the overall NRC program. I
will do this kind of in layers. First, to talk very broadly about the
three sources of labor that we use in undertaking our studies, and then
I'll peel away a little bit and look first at the center expertise, and
then I'll group the consultants, subcontractors, and SwRI, Southwest
Research Institute, our parent organization.
You will notice that almost one quarter of the labor that we
bring to bear on supporting NRC in the high level waste program comes
from outside the core center staff.
We have listened to your comments in this area, we have
tried to be responsive to them by bringing in support staff in a
variety of areas, and I will speak to that in more detail later.
This is about double our historical utilization of
consultants, subcontractors and institute staff. By the way, I will
just call those three consultants from here on out. Those are
distinctions that we made for purposes of how we bring people under
contract, but otherwise they should be transparent to you.
Let's take a look at the center expertise in a little bit
greater detail. Hopefully, your color rendition there on the chart is
adequate for you to follow along here.
I just want to make a few points here. First, that none of
these areas of concentration, these eight areas of concentration, as we
call them, are remarkably low. That reflects our basic philosophy of
trying to have a critical mass, for lack of a better term, in each of
those areas. Because of budgetary limitations, we almost never have
quite as many as we'd like to have in most of the areas, but we have
tried to achieve a minimum critical mass in each of those areas.
When we say that, what we strive to do is to have at least
one quite senior individual in each area, then intermediate and more
junior level people to balance that out. That gives us a good mix in
terms of seniority of staff and, also, looking at a program of this
duration, it gives us the potential to bring people along, to nurture
them, to help them grow both programmatically and technically, so that
they are there as other members of the staff may move on to other
positions, be promoted out of this particular organization, or retire or
whatever might be the case.
Second, I would point out that there is an emphasis on those
areas that we judge to be of greatest need. Now, you can't
cross-walk these eight areas of concentration directly into the key
technical issues, but I think you can see the general sense of things
there. The largest number of staff, the largest percentage of the total
staff, and I got this right, by the way, Dr. Wymer, geochemistry is
right on the top there. And it would maintain its alphabetical
position, I might note.
Hydrology, material sciences, performance assessment, and
the general area of geology. I've listed structural, tectonics,
volcanism there, but being a geological repository, as one of my NRC
colleagues likes to say, we do have a strong emphasis in geology.
Some of the smaller groups, the mix of chemical, mechanical
and nuclear engineering, there has not been a long-term large demand for
those skills on this particular program. So that's a smaller
percentage, about six percent, less than -- well, about a third of the
PA number.
Rock mechanics, mining and geological engineering has been a
smaller area of demand over the last three years and we're just now
beefing up in that area, as has been mentioned before.
The systems engineering and administration group also
includes quality assurance and also our computer sciences and information
management skills. So that's why that one may look a little bit bigger
than you might perhaps otherwise anticipate.
A key point I'd like to make, though, is that this mix has
indeed changed as programmatic needs have changed, and it has changed as
technical needs across the program have evolved.
By that, I mean, if we back up two or three years, a
combination of programmatic demands and budget changes came into play in
fiscal year 1996 and as a result of that, we took that opportunity, very
negative opportunity, to adjust. We selectively dealt with trimming our
staff during that period of time to match the priority rankings that you
saw Bill Reamer present just a few minutes ago.
Likewise, then, as the budgets began to be restored in '99
and 2000, we were able to tailor the restaffing of the center to match
what we now believe to be the most critical issues and we're doing that
in a forward-looking manner.
MR. GARRICK: How do you distinguish performance assessment?
Asking it another way, how much of the performance assessment would be
earth science, for example?
MR. PATRICK: Earth sciences in what sense? Having an
initial degree in that area as opposed to some engineering area?
MR. GARRICK: I'm assuming that earth scientists model
things as other scientists and engineers do. So I would assume that the
performance assessment team has a considerable number of those kinds of
people that model ground water travel time and what have you.
I'm just trying to really understand -- I'm trying to turn
up the microscope on this. It looks like about maybe 70 percent of this
is earth science and 30 percent is everything else. Is that right?
MR. PATRICK: In fact, I would encourage you, with that
analogy, to turn down the microscope and back off the --
MR. GARRICK: Well, I'm not saying that's bad, because we're
trying to design a repository or license a repository.
MR. PATRICK: But you're raising a very important point and
it's one that is often answered with a rather superficial response that
we're all performance assessment. But that is truly part of the answer.
We're all performance assessment. We all are looking at -- if you turn
the microscope up, you flip to the last chart in this sequence, chart
22, which I am not going to project, but it's there, in your packet,
this details out a little over two dozen specific areas of expertise
that are on the staff, and in each of those areas, there are from one to
seven or eight people on staff.
Neither this chart nor the pie chart with eight areas of
concentration is breaking people -- breaking the mix of staff down
according to the KTI they're working on, and that's the important point
that I'd like to make here. That's why I said back the microscope out a
little bit further, because in performance assessment, there is heavy
contribution, day in and day out, charging those accounts, if you will,
helping build those models, abstracting geochemical, material sciences,
hydrological processes into the TPA code.
There is very heavy contribution from those individuals. So
these are more a discipline- or technology-based grouping of these
areas of expertise. They are not cross-walked into the KTIs, per se, in
the sense of a total resource loading.
Stated another way, if you look at that slide 22, there are
very few of the individuals listed in that chart who, in any given
month, wouldn't be doing some work on performance assessment.
MR. GARRICK: So having said that, having accepted that
philosophy, how many people in the PA 17 percent have a terminal degree
in the earth sciences?
MR. PATRICK: Gordon?
MR. WITTMEYER: Two. Two people.
MR. GARRICK: And how many are in the group?
MR. WITTMEYER: Seven, currently.
MR. PATRICK: A group of seven. Nuclear engineers, health
physicists, there is one with a couple of degrees in environmental
sciences that's in there, that's done transport and dose calculations,
risk assessments, including, in a prior life, risk assessments related to the
Hanford reservation and the disposal or non-disposal of waste at that
location. Quite a mix of individuals.
Because performance assessment draws so heavily upon systems
thinking and systems knowledge, you will also find people with
backgrounds in petroleum and chemical engineering or mechanical
engineering with an emphasis on fluids that will also be in that group.
And our staffing profile reflects that, as well. The
positions that remain open, Gordon, as I recall, are health physics,
dose risk assessment, and --
MR. WITTMEYER: Two more performance assessment engineers,
modelers.
MR. PATRICK: Right. And those two PA engineer modeler
positions are advertised very broadly to be in essentially any field of
engineering. We're looking for numerical skills, risk assessment
skills, to the extent that they may have them, and process thinking,
systems thinking.
MR. GARRICK: Not to push this too far, because we've
probably pushed it further than we should, of the seven, who would be
classified as the expert or what expertise is there on the waste package
design itself?
MR. PATRICK: One individual actually in the seven, plus
there is the larger group of material scientists who are heavily
involved in developing the process models, getting sufficient
confirmatory data to evaluate whether those models are adequate or not.
MR. GARRICK: Thank you.
MR. PATRICK: Any other questions on that? I know this is
an area of continuing --
MR. GARRICK: The important point here is that these are not
mutually exclusive.
MR. PATRICK: That's correct. That's correct, and that's
why I say de-focus that microscope.
MR. GARRICK: And systems engineering, we could have the
same kind of questions. I don't know why you had lumped systems
engineering with administration, but that's another question.
MR. PATRICK: The individual with the degrees in that count,
he's maybe wondering that, as well.
I will be making, I think, a couple of comments later on
that may come back to that. We may want to explore that a little bit
further, as well. And by the way, I realize the first couple
presentations, we did the presentation and then broke in with the
questions. I'm comfortable doing it either way. In fact, I'm probably
more comfortable getting questions as they go along, because, if nothing
else, then you share the responsibility for running over the schedule.
But let's go to the next slide here. For each of these --
for both of these areas now, since I'm going to break them into
consultants and center core staff, now I want to give you a little bit
additional background.
With respect to our core staff, those eight technical areas
that I mentioned, a little over two-thirds are Ph.D.s, terminal degrees,
and about 21 percent have master's degrees. So you see there's relatively
little residual there with bachelor's degrees. We average about 19
years of experience beyond the bachelor's, which is one
of the statistics that we count here at the institute.
The core center staff's involvement is very broad and very
deep. I'm personally very proud of the staff that we've been able to
assemble here and maintain largely over the 12 years that the center has
been in existence. Notwithstanding the rigors of changes in program and
budget and what have you.
A number of our staff are themselves called upon to function
in peer review capacities. They're members of national and
international communities and peer reviews, and I think that speaks for
itself as to the regard that the international waste management
community has for these individuals.
I mention international waste management community, other
international groups have called upon our staff also to be involved; for
instance, with regard to reactor activities, with regard to siting of
critical facilities broadly and the earth sciences hazards, seismic,
volcanological and so forth that are presented to those. So those are
aspects I think that are objective measures of the staff expertise.
Staff is widely published in peer-reviewed literature. We
haven't provided you with listings of those in some time, but if you're
interested in some more paper on that, we can certainly do that. And,
of course, patenting and copyrighting is just kind of a normal aspect of
the activities we do.
I think most of you are aware that our earth science team,
our structural geologists, won an R&D 100 award last year. Those of you
who are familiar with that process, it's a rather rigorous evaluation
where the top 100 or the 100 most technologically significant
developments brought to the marketplace in any given year are granted
this award.
Some large DOE laboratories, with thousands of employees,
might knock down two to five of these in a year, just to kind of put the
thing in context, where we're sitting here with 50 or so people.
So I'm not promising you we'll have one next year or the
year after, but it's, again, a reflection of the quality of work and how
the marketplace, as well as our peers, use the work.
MR. HORNBERGER: Wes, since you invite it, let me break in
here. On being widely published in peer-reviewed literature, I think
we are probably on record somewhere as saying we think that this is
an important aspect. Do you have a list broken out of just
peer-reviewed, and not abstracts and not presentations? Because I don't
need a volume this thick, but if you have a -- that's good. I would
appreciate getting it.
MR. PATRICK: Yes.
MR. HORNBERGER: And the other question that I have along
this line is in the technical assistance program that we're here talking
about, it's not clear that NRC would necessarily view publications in
peer-reviewed literature as being as important perhaps as the ACNW would
see them.
I'm just curious, do you have difficulty in satisfying the
customer here, NRC, on one hand, and still doing work that leads -- that
is reviewed -- the important part is having it reviewed by people
outside.
MR. PATRICK: Right. Well, I -- again, Bill can chime in
and correct my thinking here or my perceptions. I have been very, very
pleased with NRC's openness in this regard and, again, I look for
objective measures of these things and I try to provide you with
objective measures so that you know it's not just Wes Patrick talking,
but it's something you can go back and touch base on.
NRC, about three years ago, decided that this was important
enough that in our operations plans, which is one of the steps in the
prioritization process and in the assignment of activities, in those
operations plans that we prepare annually and update usually once or
twice during the year, we specifically call out peer-reviewed
publications as milestones. To me, that's really putting management
attention on it: these things are important enough that they deserve to go
into the count; that, among other things, not only gets it out into the
literature, but we function under a very rigorous evaluation process by
the NRC and they factor those into our award determinations.
So they have really put teeth into showing that these are
very important aspects to the program. We're very pleased to see that,
particularly after -- when the research program was there, the research
philosophy is a little different than the licensing philosophy, and that
was an important thing for us as a center to see maintained in the new
program and eventually to see strengthened.
MR. HORNBERGER: Good.
MR. PATRICK: Anything else before we go forward? The next
couple of charts give you a little bit of a timeline with respect to
center professional staff profiles. The top curve there is the
staffing, the bottom one is the anti-staffing, the attrition part of the
equation. You will note that we came near a peak just about the time
the budget was cut. There is something about that process.
We were within -- we actually had two more people on staff
than the authorized NRC funding level at that time. We had taken a
little risk and were rewarded accordingly by Congress in the fiscal '96
budget.
It should not go without noting that that reduction in staff
was substantial and not without considerable pain and upset. Those
kinds of reductions cannot be sustained very many times in an
organization like ours, or we won't be able to attract and maintain
staff over the long haul.
We are aware of that, NRC is aware of that. We kind of wish
some folks in Congress were a little more aware of that, so that those
sorts of upsets don't occur.
The good news here is that you will see we're coming back up
to those authorized levels rather quickly, as I will show in the slides
a little bit later on. We've been able to do that not because the
entire budget has been restored, but because NRC has found other areas
where we have been able to use that expertise across other programs and
we have, at the center, also been able to do some business development
outside of NRC, at NRC's authorization.
That has enabled us to have a larger staff and one that can
be supported by the repository program, so we're able to have a broader
mix and greater depth of expertise than what we would be able to have
otherwise.
MR. WYMER: This tells us about your professional staff.
How much support staff do you have?
MR. PATRICK: There are about -- across all areas of
support, including computer technicians and lab techs and things of that
nature, there are about another 15-16 people that are involved. That
brings up another point, though, that I should also make. We have found
it very productive to have students from local universities involved
throughout the year and from distant universities to be involved in the
summer session when they are out of school to support our program.
If you look at that list of publications that we give you,
you will find that it comes in blocks of publication. We will typically
get a lot of field and laboratory work and design analysis done during
the summer months and then finish that up as publications in the
October-November-December timeframe. So that's been very productive for
us, very productive for the students, as well.
We're just kind of breaking it up by quarters. You can --
not to go into it in any detail. You will just see that it's a
continual readjustment process that we go through, both in setting the
goal for staffing and trying to achieve that goal. As we stand here
today, we're at about 53 people on staff. We have a couple of offers
out right now that we hope will come to fruition, as well as a number of
interviews taking place. In fact, I think there are three out right now
officially.
MR. FAIRHURST: When you have a number like that, the plan
and the actual, in the plan, are they all authorized? You have money
for them if you can find them.
MR. PATRICK: No. Actually, I have authorization from my
boss at the institute to exceed what we currently anticipate NRC will be
able to fund in the high level waste program.
Our anticipated funding for the high level waste program is
46, possibly as high as 48, depending on how the numbers come through,
and we have other NRC programs that are possibly available to support
some of these other individuals, and the commercial work that I alluded to
earlier.
So we take some risk in hiring, in an attempt both to
accommodate attrition and also to get
the breadth of staff that we need to have available.
Anything else on that?
Let's turn now from the center staff to look at the external
experts and the role that we call upon them to play.
You can break these sets of diamonds down into several
categories. The first one is where we use more external labor than in
any of the other areas. I'll define three areas here for you. Those
skills, augmenting our core staff, those are basically the same kinds of
skills, the same level of expertise, with some exceptions, as what we
have within the core staff. That's where we bring in people, outside
expertise, because they either have a special area of competence that we
don't have on staff or we are just short of labor in that area. We may
need -- we may have lots of geochemistry and chemistry to do, so we may
bring in some external labor in that area to augment that.
So those are, I guess, what I would call like kind sorts of
people coming in. Maybe a hydrologist, we have lots of hydrologists,
but this person's expertise may be in a particular aspect of hydrology
that we need, we need for a short period of time, for something less
than a full-time equivalent, and we'll gain access to their expertise
accordingly.
The next three bullets, those are typically either greater
depth of competence or experience in the area or -- and/or different
skill areas would be brought to bear, and I will be giving an example
here in just a few minutes in that regard.
And the final one, we have a variety of organizations that
give advice and oversight. Some of them, like yourselves, that are
functions of the NRC. Others which we bring in to provide us with
independent oversight and advice.
In the case of the center review group, these are people who
have great longevity in a variety of programs, come from industry and
academia. They do look at our technical programs, but they're there not
primarily for their technical expertise, but because of their
organizational development and managerial expertise and we do have an
oversight board for the center that fulfills that role.
Before turning to a specific example or actually two
parallel examples I'd like to give you, I think it's important to spend
a little bit of time on some of the constraints that the center operates
under with regard to accessing external experts.
I guess the ones that I would highlight among these are the
first, the fourth and the last. Conflict of interest considerations are
a major topic here at the center. We have procedures in place, very
rigorously applied, that constrain who we can gain access to.
A second thing, under the fourth tick there, is that people
may have great technical expertise in an area, but we find that we need
to calibrate that technical expertise to make it directly applicable to
things that are meeting the regulatory needs of the NRC.
So there is a time, a role involved in training and adapting
both our core labor and the external labor to the regulatory culture
that's involved. We're going to be very interested to see what kind of
feedback we get when we go through this total system performance
assessment code review for just that reason.
We have conducted reviews like this a number of times in the
past. We have reviewed our rock mechanics program, our waste package
program, geochemistry program, and perhaps a couple others that aren't
coming to mind just now, and invariably, when we bring in the outside
experts, we get lots of good ideas, we get very good critique of the
program, and those are the positive factors.
But along with that, we get some things that, from a
regulatory point of view, are a bit off the mark; in fact, sometimes
very much off the mark, recommendations to pursue an academically very
interesting avenue, but back to the discussion we had earlier, Dr.
Wymer, may not really be within the regulatory context. It may be far
too detailed in terms of seeking a mechanistic answer that is
appropriate for making or supporting our regulatory decision. So that
comes into play.
The last one, and this is one that speaks to a particular
comment that the ACNW has made in one or more of its letters, and that
deals with a concern that you have that we would have these experts
available to decrease our vulnerability during licensing.
That coin has two sides. The thing that we see, and we have
experienced over and over again, is those outside experts are
extraordinarily difficult to keep tabs on. In fact, our preponderant
result here has been that we'll bring in a team of experts, they'll do
an extraordinary job. The peer review report will be published and
usually within a year, most or all of those peer reviewers have been
seen for their value by the Department of Energy, picked up by the
Department of Energy, and from that point on, cannot work for us in any
capacity. That has been repeated over and over again.
I think they got four out of five that did the volcanism
peer review. I think they got three out of five from the materials and
waste package peer review. So it's -- we look at that in two ways; one,
those in-depth, detailed, far more experienced experts will undoubtedly
be needed in the licensing process, but our capacity to keep them
available is almost zero. It's hard enough to keep core staff for long
periods of time. We think we've got a good handle on that. That's been
going very well. But outside experts, maintaining that access is just
not there. We just haven't seen it.
MR. GARRICK: Wes, I guess I can really appreciate your
arguments for why one and the last one are especially important. I'm
struggling a little bit with the fourth bullet, as to why that's so
important, given that the regulatory implications are something that
somebody else can manage. The technical expert doesn't necessarily need to.
If you put the focus on I want the best technical expert
possible, and I don't want to burden that technical expert with trying
to understand the regulations or the regulatory culture, I would think
you would want a lot of those kind of people.
MR. PATRICK: We do, and, in fact, the regulatory culture
does not come into play in the -- really in the selection. So perhaps
from the standpoint of access to the external experts, it's not that big
of a play. But what do you do with their results? You know, their
individual comments become part of a very public document and now we and
the NRC have to deal with those comments.
Expert A said that you need to do 15 years of study on
whatever. There may be no risk significance to it. It may be a
technically very stimulating, very perplexing problem. Within the
regulatory context and the regulatory culture, we end up needing to deal
with that, just as a fact of life.
MR. GARRICK: I understand that, but I guess I'm suggesting
that that ought to be something you can manage, that you can put in
context. We see that debate going on all the time of adequate science
versus best science. We're not in the science business. We're in the
business of regulating, DOE is in the business of building and
operating, and the whole idea of the risk-informed approach is to give
us a template within which to address the question of how important is
it.
So I would think that's something you could manage in such a
way that it doesn't become as difficult as at least my impression is
you're suggesting.
MR. WYMER: I can relate to that one very well personally.
I'll give you a little bit different perspective on the approach you've
been talking about.
I've had a hard time changing my approach. For about 40
years, I was in the problem-solving business; how do you go about
solving it. I want to design the darned repository. I don't want to --
MR. PATRICK: Comment on it.
MR. WYMER: And it is a culture thing and it's very
difficult and I'm sure you run into that.
MR. GARRICK: Well, this whole committee has had that
problem. Thank god we have a staff that tries to keep us on a
regulatory perspective. But I don't think that takes away from the
effectiveness of the committee is my point. I think that it's probably
to our advantage to not be encumbered with the process of licensing
unduly. That's not really what we're necessarily involved in here,
being experts on regulations.
We're here to answer and deal with fundamental questions
about science and engineering and what have you.
MR. PATRICK: But the more focused the commentary is within
a regulatory context, the more helpful it is to us and to the NRC staff.
MR. GARRICK: And that's exactly why I'm such a great
believer in the risk assessment thought process, because that's the
great focuser. If we do it right, that provides us with the template
that deals with the question of how much is enough, how safe is safe and
so on.
And we haven't quite arrived at that stage of application,
but we're moving in that direction. So that's -- my only point is that
I think you could scare away some extremely competent people by putting
too much emphasis on you need to understand the regulatory culture
because that -- the same kind of people that can reach very deep for you
into the world of abstract science and what have you are often turned
off by that kind of suggestion or constraint.
That's my only point. And you made the point yourself that
you want to get the best people possible and you're a technical
organization and you need those kind of people that can bridge the gap
between theoretical abstractions and useful applications.
MR. PATRICK: Just to clarify, as we go to the next slide
there and start looking at a particular example, the -- when we select
external experts for the review process, their familiarity with
regulations does not come into play. In fact, the second tick here --
and by the way, let me start, before I go to that.
These are -- I selected these two examples because they are
parallel in many regards. They're both dealing with total system
performance assessment code, but with two quite different aspects of
that code.
The first, the development of the code, where we are short
of horsepower, short of expertise in particular specific areas, and we
have brought in staff that are junior or on par with our own staff. And
the other case, where we want to bring in top-flight technical people in
about eight different areas, to come in and review what we've been
doing.
And you can just walk through those comparisons and
contrasts that we have here. The key point with regard to the earlier
comment is that this -- reviewers are selected or self-selected by a
process that we follow. We set out a very broad polling and we define
the areas where expertise is going to be needed for this particular peer
review and then we ask those individuals who are polled to give us their
top one or three or five, however many they want, candidates in that
area.
Then we take that collection of votes, if you will, plot it
up. It's amazing how neatly it breaks out. Then we do an evaluation
then of conflict of interest, to make sure that they can work for us,
and we begin the selection process or the contracting process based on
self-selected peers.
We do that for a variety of reasons; to try to get the very
best, people who are recognized by their peers as being the best, and
also to take ourselves out of the process and thereby to avoid any
unintentional biasing that might come in, contacting our favorite
professor or our favorite industry expert.
And you will see that the results, again, they tend to be
much more senior, they tend to be more than the 30 years or so level of
experience instead of 20 or so.
We can point out that those who aid us in co-development at
the level that we're talking about, we don't anticipate that those folks
will be involved in the regulatory process in terms of licensing, they
operate under our direct supervision and probably would not be key
figures in any hearing that might be held.
That's probably not the case for the peer reviewers and,
consequently, we don't have joint products developed out of our peer
review processes. Each peer provides his or her own unique comments and
that was built into the process at the direction of the Office of
General Counsel, because one cannot query, one cannot probe a joint
finding. One wants to be able to go in and find out what did this
expert say, why did they say it, what was his or her basis for doing it.
But that's just a little example of how we bring external
experts into play. In neither case is selection driven by familiarity
with the regulation, but to the extent necessary, those who are involved
in development would obtain necessary training, so that their work is
focused as appropriate.
I want to just make a couple of brief remarks about this
overall approach, because it is central to the capability that we are
developing here at the center. We use this process in all of our work,
not only the high level waste program, but the other NRC areas that we
support. We've also carried it over into the commercial work that we
do.
The starting point, not too surprisingly, is trying to
define the problem and not just a simple statement of the problem, but
really to draw on and to draw out all of the information that's
available.
We find that that is always important, but it's particularly
true in the NRC licensing arena, because the applicant, not us, is
required to make the safety case.
Having done that, we start with a systems analysis. Our
notion, our concept of what systems engineering is. And then to
implement a solution, we draw upon a combination of the lab experiments,
numerical analyses, and field investigations and inspections to provide
the information that's necessary.
I'd like to spend just a little bit of time on this systems
analysis question. The first two blocks, we're looking at physical
systems, trying to understand, to decompose the elements of the physical
system that we're dealing with.
We have done formal functional analyses of a variety of
those components. The most rigorous analysis that we did and published
in those two first steps was done under Part 60. Part 63, as currently
proposed, is quite a different regulation. The third step is to tie
physical systems to the regulatory context; how do the functions of
these systems and components affect or potentially affect health and
safety, and that's where we begin linking the physical system into the
regulatory context.
Retrospectively, our analyses of the regulations were done
in greatest detail on Part 60. As we began to approach drafting Part
63, we took a prospective approach there instead of writing the reg and
then trying to sort out how many regulatory, institutional, and
technical uncertainties it had. We had that framework in the backs of
our minds as the regulation was being drafted.
Historically, Dr. Garrick, we drew upon aerospace and
military systems, systems engineering techniques, and that's part of the
answer to your question of how did the systems engineering folk end up
over there with administration and information management systems and
the like.
We had a number of people from aerospace and from Naval
systems that were on staff up through fiscal '95. They were heavily
involved in conducting what we called systematic regulatory analyses,
developed the original suite of, I don't know, 100 to 150 key technical
uncertainties, which were subsequently consolidated into these ten key
technical issues that we're currently working with, the 14 integrated
subissues that Budhi presented earlier today.
When the budget-cutting process took place, a management
decision was made jointly here and at the NRC that that mode of
regulatory analysis had been sufficiently completed, that those staff
largely would no longer be needed within the program.
What was needed instead was to bring in more of the reactor
line of thinking, the PRA kinds of thinking, and that that would be
accomplished by maintaining a high level of staffing in the area that we
labeled performance assessment, back on that pie chart.
So it was through that transition process that most of the
systems engineering expertise, again, derived from aerospace and Naval
systems, was phased out of this effort. The one individual who remains
in this area is heavily involved in related regulatory development
activities and licensing activities supporting other parts of the NRC
program; for instance, development and analysis of regulations for
uranium recovery facilities and the like.
But that's historically what happened there and why that
perhaps is a confusing artifact in terms of systems engineering versus
performance assessment that exists to this day. I don't know if that
helps.
MR. GARRICK: Well, I have a philosophy about systems
engineering and it may surprise you. My experience has been that the
best systems engineers are what one professor once called the T-shaped
engineer, and that's an engineer that was an expert, a specialist at one
time and had all of the advantage of the kind of discipline you have to
go through to be a genuine expert and to really be an authority on some
engineering discipline or some specific application of an engineering
discipline.
Then the T comes from having done that and, having felt
they've done it long enough, beginning to broaden out.
I found those systems engineers to be far more effective,
far more knowledgeable about complex systems, than the so-called
systems engineers from universities that issue a bachelor's degree in
systems engineering, most of whom I found to be pretty useless.
And I think that's what the committee is struggling with.
We're sort of of the opinion that what we're looking for here is that great,
honorable profession of systems engineering that comes from having
made your mark somewhere and then beginning to broaden your perspective on
problems and the context of those problems.
The aerospace is a reasonably robust resource for systems
engineering, but you'll find that most of the really top-notch aerospace
systems engineers also are T-shaped engineers rather than the engineers
that have come out of undergraduate school with a label of systems
engineer.
So I think that there is probably -- I don't know of many
really good young systems engineers. This is something that comes from
reputation, from time, from experience and then broadening of your
interests and your activities, more than just being able to label and
train somebody to be that.
MR. PATRICK: I think you'll find no bachelor level systems
engineers at the center.
MR. GARRICK: Right.
MR. PATRICK: That gives you some modest --
MR. GARRICK: In fact, I think the nuclear industry is
probably a better resource for systems engineers of the type we're
talking about than the aerospace engineers, because the nuclear industry
is very interdisciplinary. It's not nearly as requirements-oriented as
the aerospace industry is. If it isn't a requirement, they tend to not
do it. That was not the way the nuclear industry evolved.
It evolved as an extremely interdisciplinary industry and
some of the most distinguished systems engineers on our planet have come
out of the nuclear industry as a result of that. I don't know of
anybody that was a better systems engineer than Eugene Wigner, for
example, and he understood the whole concept of interrelationship of
complex systems and hardware and what have you.
So that's where we're coming from, I think.
MR. PATRICK: Well, you left off one path, Naval nuclear.
MR. GARRICK: Well, that's a classic example. Another
honorable discipline, in my opinion, is what you called earlier design
analysis. The best design analysts that I've had in my employ
were people like Nuclear Navy people who had a flair for analysis when
they got out of the Nuclear Navy, were not satisfied, and decided to go
to graduate school and either get a master's or a Ph.D., and those
people were extremely effective design analysts, because, first, they
understood the design and then, second, they understood how to model it.
MR. PATRICK: Let's move forward to slide 19. The ones I'm
skipping are ones that you're going to have an opportunity to see
tomorrow afternoon during your lab and other facility discussions.
I'd like to just come back to these slides 19 and 20, to
emphasize the point that I had made lightly before. This indicates the
breadth of areas where we're supporting NRC, a range of programs that
the staff is involved in.
Why do I mention these? I guess a couple of reasons. One,
it has an awful lot to do with the continuity of support that we're able
to provide to the NRC and, second, tied in directly to that, it gives us
an opportunity for our engineers and scientists to be involved in a
broad range of problems, broader than if they were limited only to the
repository program.
We find that to be stimulating, to work in these different
areas, stimulating from a couple of perspectives, an important one of
which is that most of these other areas are items where licensing
activities, licensing actions, take place on a very short timeframe. So
people are able to do something, bring it to closure, see the results of
their work, and that's a good thing to include within the overall mix,
from our perspective. Those are really the only points that I wanted to
make there.
Just to wrap it up, we feel we have developed a rather
substantial independent capability and that that capability is an
essential part of an effective regulatory program.
The technical capabilities that we have developed here are
complementary to those of the NRC staff, and together I believe we have a
very strong capability for independent evaluation. Through
peer-reviewed publication, we are substantiating that we're in a
position where I think our staff can stand toe-to-toe with those that we
are commenting on, that we are critiquing in some cases.
Important to that are the truly exceptional lab and
numerical modeling facilities that we have here, that you'll have an
opportunity to hear about both in presentations and as you visit the
facilities over the next day or so.
That completes my remarks, at least this first phase of
interactions.
MR. GARRICK: It's very interesting. We appreciate it a
great deal. Are there any final comments or questions from the
committee? Go ahead.
MR. HORNBERGER: I've tried to learn subtlety in making points
in front of Eric. At any rate, Wes, in that spirit, it strikes me that
the stringency of the conflict of interest rules that you operate under
is stupid.
MR. PATRICK: Could you be a little clearer?
MR. HORNBERGER: I mean, it's just absurd to me to think
that you could have an expert on volcanology or anything and that person
would go off and do a program review for DOE and forever be unavailable
to you. It just doesn't make sense to me.
So can you defend this to me?
MR. PATRICK: That's quite a spot to find oneself in. But I
find the only way that I rationalize -- and I don't use rationalize in a
pejorative sense. The only way that I rationalize this is we should all
anticipate a particularly contentious process playing out in the
licensing arena and because that's true, I take that as a given, because
that's true, the knights can't have any chinks in their armor, not even
little bitty ones.
I think time will tell, but NRC has taken, I think, a very
conservative position here, but it's not a position that they would ever
be able to back off from having moved a little bit into the conflict of
interest arena.
That's how I've satisfied my mind. That doesn't mean that I
don't chafe at it from time to time. It doesn't mean that -- by the
way, one thing that the committee should be aware of, there is a
provision, I'm not sure quite how to make it work, but there is a
provision for exceptions to be made. We have petitioned one time for a
core staff member and received permission.
We have petitioned a couple of times for external experts
and have been denied. One of those denials was based on a person having
a graduate student once involved in work in the Yucca Mountain vicinity,
about 12 to 15 years before.
So it's a very strongly reinforced or enforced proscription.
But I've satisfied my own mind that it's one that has to be in this
context.
MR. GARRICK: Any others?
MR. FAIRHURST: I just endorse George's opinion, and I
recognize it's difficult.
MR. WYMER: I just have one observation. It's not a
question. It's another example of the kind of difficult positions you
get into.
On the one hand, you're not supposed to get out too far
ahead of DOE; on the other hand, you want to do exploratory work. It
seems to me that, once again, you're walking a very fine line.
MR. PATRICK: It is a fine line and I think the process that
Bill Reamer outlined, that four-step prioritization process, there is a
lot embedded in that. If you spend some time looking it over, the
management check points in there really play a vitally important role,
because the staff can come in, given their perspectives, not just risk
perspectives, but concerns about vulnerabilities, they can come in and
propose any activities, basically, but they have to make the case with
the High Level Waste Board and then having made that case, assuming they
do, then they have to make the case with the office director.
And if those folks are convinced that this is an area where
exploration is important, it's permitted, and there are several cases
that we could point out where that has been allowed.
I think it's a good check and balance.
MR. GARRICK: One thing I want to learn more about while I'm
here, but we don't need to take time to do it now, is to get a little
better appreciation for the difference between technical assistance and
research and whether that's just a budget classification or a labor
classification, because we struggle with that every year when we're
trying to prepare our annual research report.
We think that the waste field is doing a lot more research
than is coming out in the record because of that distinction. So we'll
want to maybe try to better understand that, the rationale for that.
MR. PATRICK: Well, a quick answer, and then we can perhaps
talk about it further at lunchtime, what our understanding of it is.
Almost every organization has its own set of definitions. I've just
answered a National Science Foundation poll regarding research that may
be of use to the Department of Energy in managing waste at its various
facilities.
They have a completely different set of definitions and
under that definition, a lot of the technical assistance would fit. But
within NRC, if it's site-specific and relatively short-term, and that's
often one to three or so years, if it meets those criteria, then it's a
licensing office function. If it's generic -- in other words, not
site-specific, and/or has a very long-term duration, then historically
the agency has funded it out of the Office of Nuclear Regulatory
Research.
MR. GARRICK: It seems pretty ridiculous, though, when
you're talking about a project that's $30 to $50 billion in size to make
that kind of distinction. It just doesn't have much logic associated
with it.
But as I say, we can talk about that.
MR. PATRICK: For the record, if somebody knows about 30 to
50 billion that's running around on this side of the fence, I'd like to
know about it. It's a much more modest program, it is a much more
focused program.
MR. GARRICK: Yes. Thank you. Very good.
MR. PATRICK: Thank you.
MR. GARRICK: I guess now we can adjourn for lunch.
According to the agenda, we're due back here at 1:30. So we will now
adjourn for lunch. Thank you.
[Whereupon, at 12:10 p.m., the meeting was recessed, to
reconvene at 1:30 p.m., this same day.]
A F T E R N O O N S E S S I O N
[1:30 p.m.]
MR. GARRICK: The meeting will come to order. Budhi Sagar
wants to introduce our next topic. So I'll trust you to do that.
MR. SAGAR: Thank you. The next topic is actually the main
theme of the meeting, which is evaluating and explaining contribution to
risk.
All I want to do is take a few minutes to introduce the next
three speakers, who fall under this main title, even though we have
requested all the following presenters to touch on
this subject as well, to explain the elements of their work and why
certain work is being done based on the evaluation of risk.
One of the major methods that we employ to learn about risk,
and to explain how it contributes to the total system performance, is
a general category of methods called sensitivity analysis. I have
included even the analysis of uncertainty in parameters and so on under
that main title.
The main performance measure whose sensitivity we are
interested in comes from Part 63: the peak expected annual
dose over the compliance period, which is 10,000 years, and over longer
periods, because we are interested in how the system behaves over longer
periods, too, but definitely in the 10,000-year period, considering all
credible disruptive scenarios and their associated probabilities.
So the risk triplet is embedded in this definition here with
the scenarios, what can go wrong, and their probabilities and
consequences, all three are required to be evaluated, and we are
interested in the sensitivity of this measure to any changes of
different kinds, as I will explain later.
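For reference, the risk triplet invoked here is the Kaplan-Garrick
definition of risk, conventionally written as a set of scenario,
probability, and consequence triples:

    R = \{ \langle s_i, p_i, c_i \rangle \}, \qquad i = 1, \ldots, N

where s_i is a scenario (what can go wrong), p_i its probability, and
c_i its consequence -- here, dose. The peak expected annual dose weights
the consequences of all credible scenarios by their probabilities.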
Part 63 explicitly states that parameter uncertainty has to
be factored into this calculation or this estimation of risk. The
uncertainty models or conceptual models are not explicitly stated as
part of the performance requirement, but we know that that would also be
evaluated and you will hear about that in some of the presentations that
will be made to you.
The contribution to risk, or the ranking -- and people may be
interested in different things, and, in fact, we are interested in all
of them -- could be of the parameters of the models, of the events, the
disruptive events, of processes, retardation and so on, or of
components of subsystems. All of them, as I said, are of interest.
So the sensitivity analysis essentially asks this question:
what is the change in the expected dose -- referring back to the
performance requirement in Part 63 -- or in risk, or, in deterministic
studies where uncertainties are not considered -- that is, are avoided --
the change in the dose itself, as any one of the items on the list
I have mentioned changes in some way? What if a parameter changes,
what if a process didn't happen or happened, what about the components
of subsystems that make up the system; what does it affect, how does it
affect it? What the contribution of these things is to the expected dose
is of main interest to us.
The basic tools we employ are two. One, at the top level, is
the integrated, flexible system model. It has to be flexible to be able
to do the various kinds of sensitivities that I just mentioned to you.
Right now, the latest version is TPA Version 3.2, as explained to you.
We do intend to upgrade it to Version 4.0 after the peer review is
completed.
It has the capability of sampling parameters to take care of
the uncertainty in parameters, both correlated and uncorrelated
parameters. It has modules for the undisturbed system, to calculate the
consequences, and it has modules to calculate for the disturbed system
under disruptive events, et cetera.
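A minimal sketch of sampling correlated and uncorrelated parameters of
the kind described, assuming a Gaussian copula; the parameter names,
marginal distributions, and the 0.6 correlation are illustrative
stand-ins, not the TPA code's actual treatment:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_runs = 1000  # Monte Carlo realizations

    # Illustrative target correlation between two sampled parameters.
    corr = np.array([[1.0, 0.6],
                     [0.6, 1.0]])

    # Correlated standard normals via a Cholesky factor of the matrix.
    z = rng.standard_normal((n_runs, 2)) @ np.linalg.cholesky(corr).T

    # Map to uniforms, then through each parameter's assumed marginal
    # distribution (hypothetical choices, for illustration only).
    u = stats.norm.cdf(z)
    infiltration = stats.uniform(loc=1.0, scale=9.0).ppf(u[:, 0])  # mm/yr
    retardation = stats.lognorm(s=0.5, scale=50.0).ppf(u[:, 1])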
And we have especially designed the code to provide
intermediate outputs, not just the end result of expected dose, but the
travel time, the release values, and all those other things that we are
interested in, to try to understand better how the model behaves; we
also try to understand the uncertainties and the sensitivity of even the
intermediate outputs to changes in parameters and so on.
The detailed process level models are the other, detailed
level at which we do sensitivities. This is the process level sensitivity
analysis, which provides us the sensitivity not in terms of the dose,
but in terms of some intermediate output, at a much more
detailed level than the TPA code would consider; again, to try to
understand at a more detailed level how things change.
Also, we use the detailed process level models to provide
technical basis for some of the simplification and abstractions that are
coded into the TPA code.
The sensitivity analyses are then carried out, and we have
just completed a report which is undergoing review at the center at this
time and will be submitted to NRC. It is a joint report between the
staffs of NRC and the center. Some of those results will be presented
by Dick Codell today.
At the system level, most of the sensitivities, but not all,
are obtained through post-processing of Monte
Carlo runs, even though some of the methods, as we will see in Dick's
presentation, do require that we modify the execution of the code in
some way, in the sense of prior selection of groups of parameters.
At the process level, most of the sensitivity analyses are
deterministic in nature; that is, we don't have a probabilistic wrapper
around those process level codes, and the analyses are basically done
through variation of parameters one at a time, generally speaking.
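A minimal sketch of the two modes just contrasted, assuming the sampled
inputs and resulting doses from a Monte Carlo run have been saved to
arrays; the function names are hypothetical, and rank correlation is
only one of several post-processing measures in use:

    import numpy as np
    from scipy import stats

    def monte_carlo_sensitivities(X, dose):
        """Post-process a Monte Carlo run: Spearman rank correlation of
        each sampled input (column of the n_runs x n_params matrix X)
        with the output dose."""
        return np.array([stats.spearmanr(X[:, j], dose)[0]
                         for j in range(X.shape[1])])

    def one_at_a_time(model, base, j, frac=0.01):
        """Deterministic, process-level style sensitivity: perturb
        parameter j about a base point, centered finite difference."""
        hi, lo = base.copy(), base.copy()
        hi[j] *= 1.0 + frac
        lo[j] *= 1.0 - frac
        return (model(hi) - model(lo)) / (2.0 * frac * base[j])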
We have learned through our previous years' work in this area
that a single method of sensitivity analysis or a single set of
sensitivity coefficients is not sufficient to give you understanding or
insights about your model and how its behavior occurs. So we
have tried different methods. Dick will talk about several of those
methods that have been employed.
Some provide one kind of information, another method
provides another kind of information, and so on. But the one thing you
might notice is that, in general, you can group these methods into
those that give you local sensitivities -- that is, at a specific value
of a parameter, or in a narrow range -- and those you
can classify as global sensitivity methods, which look at
the entire range, the entire variation, of a certain entity and see what
the sensitivity is.
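To make the local/global distinction concrete: a local measure is a
derivative at one point, as in the one-at-a-time sketch above, while one
simple global measure is the fraction of output variance explained by an
input over its entire sampled range. A rough sketch only, assuming
enough realizations that each bin is populated:

    import numpy as np

    def first_order_index(x, y, bins=20):
        """Crude variance-based global sensitivity, Var[E(y|x)] / Var(y),
        with the conditional mean estimated over equal-count bins of x."""
        edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
        idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
        explained = 0.0
        for b in range(bins):
            mask = idx == b
            if mask.any():
                explained += mask.mean() * (y[mask].mean() - y.mean()) ** 2
        return explained / y.var()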
And whatever the advantages and disadvantages, before you can say one is
better than the other, you have to consider the information that you are
looking for.
Given that you employ so many different methods, the
synthesis or final conclusions out of the sensitivity analysis becomes
an issue. So you have to kind of figure out, okay, now what are all
these things telling me, what's really important here, what's the one or
two or five or ten things that contribute most to risk, because that's
where the emphasis will be in further studies.
The future outlook is that we would develop TPA Version 4.0.
We do intend to continue refining -- and I'm saying refining because
some of the methods that we have used, as you will hear in the following
presentations, need refining in the sense that we are not entirely
satisfied that the results they are giving us are okay at this point.
And keep applying the various methods, not just stick to one
method, to try to gain insights into the behavior, model behavior.
And we're interested in developing innovative approaches,
and this is, again, the desire that we should be able to present the
results of the complicated model which is being run in a Monte Carlo
mode. We have a mass of data that it produces, but we want to be able to
show whoever is reviewing it or looking at the results what really is
causing the result that we are producing, the net result, the expected
dose -- which ones of the Monte Carlo realizations really contribute.
So the post-processing should, hopefully, be transparent -- and it is
not really possible to be completely transparent with this complex model,
but to the extent possible -- present it in a clear and transparent
manner, and make it obvious to people what really is contributing.
So we intend to do that. Of the next three speakers, Dick
Codell, who will take the majority of the time allocated to this
presentation, will talk about the results that we obtained, which have
been documented in the report I referred to earlier: ranking of
parameters, or sensitivity to parameters, and integrated subissues. Those
are the 14 subissues I had identified in one of my charts in the
morning.
The second speaker will be Gordon Wittmeyer. This is the
ranking of parameter sets, not just one parameter at a time, but two,
three, four or five parameters taken as sets, in a correlated fashion.
This is something that ACNW gave some of their dollars to us
to look at as a post-processor. Gordon will take about ten to 12, 15
minutes to present that.
And lastly, Norm Eisenberg kindly agreed to present his
sensitivity analysis in a much simpler fashion than we ever did
before. So he would be the last speaker for this.
Thank you. Are there any questions?
MR. GARRICK: Any questions? Thanks, Budhi.
MR. SAGAR: Dick is next.
MR. CODELL: Good afternoon. Budhi summarized my talk
rather thoroughly, and I may have to skip over some of the things -- can
you hear me?
MR. GARRICK: Yes.
MR. CAMPBELL: You need to focus the document camera,
please.
Not an easy task. Much better.
MR. CODELL: Okay. Why do we need sensitivity analysis?
The TPA code is complex, with many interactions among modules that you
can't understand necessarily piecemeal.
We want to show sensitivity of the performance measures to
the parameters and alternative conceptual models and scenarios. The
sensitivity analysis focuses our review of DOE's analyses on the most
significant factors. It continues to improve the staff's review
capability for the upcoming license application.
Also, last point, it helps to direct the technical areas and
attaches importance to them.
Most of these analyses were conducted on the base case. The
next slide talks about what the base case is. It probably has changed
somewhat, according to the latest DOE revelations about their design.
We're always a few months behind DOE's design. But this
describes the list of what we consider to be the base case.
In addition to the base case, though, we looked at the
volcanism scenario and a faulting scenario, which I will discuss
briefly. But most of the discussion will be on the base case.
Now, we made quite a bit of improvement over the last year,
from the last time I addressed the ACNW on sensitivity analysis. We
were able to extract a lot more information with traditional techniques
and tried a suite of new techniques, which we weren't very familiar
with and which, in some cases, gave better results, more interesting results.
We used statistical analyses, classical regression analysis.
The FAST method actually belongs in the non-statistical category -- my
mistake. The parameter tree method, which Gordon Wittmeyer will talk
about, and then a t-test on the means of input parameters for two
classifications of doses that I will get into a little later.
For non-statistical sensitivity analyses, we had a
differential analysis and a variant of the differential analysis called
the Morris method, and the FAST method, which is the Fourier amplitude
sensitivity test.
We looked mainly at the 10,000-year compliance period, but
also evaluated 50,000 years in order to follow what DOE was doing
looking at longer time scales and to address issues which are likely to
be raised about climate change and the long-term viability of casks.
The first set of results here are simply the evaluation of
the radionuclides that turned out to be important for 10,000 years. The
bar on the left is the total and then the other bars are for each
radionuclide.
These are now the mean values. This is based on the peak of
the mean dose, which is the method by which we're evaluating the Monte
Carlo run. We take the mean at each point in time over all runs and then
we determine the peak of that mean.
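A minimal sketch of the peak-of-the-mean calculation just described,
assuming a dose matrix with one row per realization and one column per
time step; note it differs from the mean of the individual peaks:

    import numpy as np

    def peak_of_mean_dose(dose, times):
        """dose[i, k] = annual dose for realization i at time step k.
        Average over realizations at each time, then take the peak."""
        mean_curve = dose.mean(axis=0)
        k = int(np.argmax(mean_curve))
        return mean_curve[k], times[k]

    def mean_of_peaks(dose):
        """For contrast only: each realization's own peak, averaged.
        Generally larger, and not the compliance measure described."""
        return dose.max(axis=1).mean()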
So iodine and technetium, being mostly unretarded
radionuclides with long half-lives, ought to be most important in
10,000 years, as you might expect. Neptunium is the next
most important. However, while not shown on this figure, neptunium
usually accounts for the biggest doses for the few cases where there are
big doses, even at 10,000 years.
If you go to the next figure, it shows the 50,000-year
results. In this case, neptunium overwhelms the other radionuclides,
being a large contributor, having a long half-life and a large
inventory; but, being somewhat retarded, it doesn't show up influencing the
results for quite a while.
The next figure, the next slide talks about regression
analysis and how we're able to squeeze the most information out of
regression. I'd like to just cover some of the techniques.
I think regression is really an art form rather than a
strict discipline, as you try a number of things to find some things
that work. In this case, we looked at 246 input variables and a
thousand vectors. So we had a large enough statistical database to
start getting some significant results out of regression analysis.
The first thing we did was we screened the input variables
using a variety of statistical tests, some of which were regular
regression, others were non-parametric, as listed here.
Any one of these tests that showed a variable as being
possibly significant was kept in. The others were discarded. So we
were able to winnow down the list of 246 variables to a more manageable
size.
Then we used some other more sophisticated regression
techniques to extract the information from the smaller list.
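A minimal sketch of that screening step, keeping any input that at
least one test flags; the three tests shown (Pearson, Spearman, Kendall)
are representative stand-ins for whatever suite the slide actually
lists:

    import numpy as np
    from scipy import stats

    def screen_inputs(X, dose, alpha=0.05):
        """Retain input j if ANY of several tests, parametric or
        non-parametric, shows it as possibly significant."""
        keep = []
        for j in range(X.shape[1]):
            p_values = (stats.pearsonr(X[:, j], dose)[1],
                        stats.spearmanr(X[:, j], dose)[1],
                        stats.kendalltau(X[:, j], dose)[1])
            if min(p_values) < alpha:
                keep.append(j)
        return keep  # e.g., winnows 246 variables to a shorter list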
In order to treat the regression, we did variable
transformations. Some of these transformations were rank
transformations, where we reduce the variable to its rank in a sorted
list; normalization, which is simply dividing by the mean; log
transformation of the variables, both the independent variables and the
dependent variable, dose; and then a variation of the log transformation,
which is a scaled power transformation, where we chose a power-law
transform that took the original distribution and made it the closest to
being normally distributed.
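[Editor's note: a minimal Python sketch of the four transformations just described; the SciPy Box-Cox fit stands in for the scaled power transformation, and the data are synthetic, not from the TPA runs.]
```python
import numpy as np
from scipy import stats

# Stand-in data: a skewed, strictly positive sampled variable.
x = np.random.lognormal(mean=0.0, sigma=2.0, size=1000)

# Rank transformation: replace each value by its rank in a sorted list.
x_rank = stats.rankdata(x)

# Normalization: simply divide by the mean.
x_norm = x / x.mean()

# Log transformation.
x_log = np.log(x)

# Scaled power transformation: Box-Cox picks the power-law exponent
# that makes the transformed distribution closest to normal.
x_power, lam = stats.boxcox(x)
print(f"fitted power-law exponent: {lam:.3f}")
```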
The next figure shows an example, this being dose for 10,000
years. The figure on the left side shows a very skewed distribution,
but when you apply a range of transformations, you can pick the one out
of the many that gives you a straight line in normal distribution
coordinates.
So what this and other transformations do is reduce the
influence of the extremes of the distribution. That's a good thing and
maybe a bad thing, as I will talk about next.
One of the things that I think evaded me and other people
who were doing regression analyses early on was that we're always
looking for the best fit. It was somewhat reassuring to get your data
lined up in a row and making a nice plot on the graph. But this isn't
always what you want.
For example, if you took just the raw data or maybe the
normalized data, where you divide by the mean, you'd get a poor fit in
those cases -- that is, a small R-squared. This result weights all of
the doses equally, but doesn't give you a good fit. So you're apt to
try something else; for example, a log-transformed sensitivity, where
you're now taking the log of everything before you do your regression,
and you'll get a much better fit, a higher R-squared.
The problem is that when you do this, it weights the small
doses disproportionately. So you're giving the tiny doses a bigger
weight proportionately than they deserve. This tends to give you a
better fit for the very reason that in this kind of total system model,
the sub-processes multiply each other, so taking a log actually makes
the most sense in terms of getting a good model.
The next slide talks about the sensitivities. When you
scale sensitivities by the means -- that is, you normalize them by
dividing by the mean -- you're showing a fractional change in dose per
fractional change in the input variable. Another approach, which we
call standardization, is dividing or scaling by the ranges of the input
distribution, which includes the notion of uncertainty in the input
variable.
This is important to do because this will change your order
of the most important variables.
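[Editor's note: an illustrative sketch of the two scalings just described; all numbers here are hypothetical stand-ins, chosen only to show that the two scalings can rank variables differently.]
```python
# Raw sensitivity dD/dx: change in dose per unit change in input x
# (hypothetical value, as if taken from a regression fit).
raw_sensitivity = 0.004
x_mean, dose_mean = 50.0, 0.02   # hypothetical input mean and mean dose
x_range = 120.0                  # hypothetical width of the input distribution

# Scaled by the means: fractional change in dose per fractional
# change in the input variable.
scaled = raw_sensitivity * x_mean / dose_mean

# Standardized: scaled by the range of the input distribution, which
# folds in the uncertainty (spread) of the input variable.
standardized = raw_sensitivity * x_range / dose_mean

print(scaled, standardized)  # the two scalings can reorder the variables
```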
In the third bullet, I talk about methods of sensitivity
analysis that emphasize the largest doses. You may be interested in
doing this, for example, to attach importance to integrated subissues.
So maybe you're interested in the biggest doses because those are the
ones that are causing the most problems.
The parameter tree method and the t-test on means method,
which will be discussed a little later, emphasize the largest doses.
The last bullet talks about something Budhi mentioned
briefly, that the proposed regulation deals with the peak of the mean
dose. In order to use this measure of compliance, the most
representative sensitivity is the non-transformed variable. In other
words, you don't want to emphasize the small doses or necessarily get
the biggest R-squared in your regression analysis. You want to weight
the doses fairly, so you'll see what is really -- what really conforms
to the compliance measure.
We tried a few other sensitivity methods, some of which are
entirely new. Differential analysis we used last time, but now we have
seven local points in the parameter space instead of only three last
time, trying to get better coverage of parameter space, which is
always a problem in differential analyses, which are completely local.
The Morris method, as I understand it, is an economical way
to conduct differential analysis. It gives you good coverage of the
parameter space and a promise of about a factor of two improvement in
efficiency, although I'm not entirely sure that's true.
The FAST method is a non-statistical method that is useful
for non-linear computation, allowing the exposition of multiple
interactions among the independent variables.
However, this method is limited to a very small number of
independent variables. So you have to do some pre-screening first.
Otherwise, the computation costs would be excessive.
The t-test on the means, which I described before, is a
statistical technique. We segregated a thousand-vector run into doses
less than ten millirem and doses greater than ten millirem, for 50,000
years only, and then looked at the means of the independent variables
to see whether they were statistically different -- and, of course, they
were. So this was one of our sensitivity measures that went into the
mix.
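[Editor's note: a minimal sketch of this t-test screening, using synthetic stand-in arrays for the sampled inputs and 50,000-year doses; the data generation is purely illustrative, with the dose driven by one variable so the test has something to find.]
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 246))                       # stand-in sampled inputs
doses = np.exp(6.0 * X[:, 0] + rng.normal(size=1000))   # stand-in 50,000-yr doses

low = doses < 10.0   # doses less than ten millirem
high = ~low          # doses greater than ten millirem

# For each input variable, test whether its mean differs between the
# two dose classes; a small p-value flags an influential variable.
for j in range(X.shape[1]):
    t, p = stats.ttest_ind(X[high, j], X[low, j], equal_var=False)
    if p < 0.001:
        print(f"variable {j}: t={t:.2f}, p={p:.2e}")
```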
Now, how do you take the sensitivity results and make sense
out of them? As we said, there is no one best measure. This may not be
the best way, but this is a way that we decided to try to deal with the
great amount of information we had on sensitivity. We looked at the
list here, in the first bullet, of the seven methods we considered for
sensitivity.
Recognize that each method provides different information
about the result, and no single method uniquely identifies what is an
influential parameter.
What we did was we looked at each variable and saw how many
times it appeared in this list of seven. Actually, only six were used
at a time, six for 10,000 and six for 50,000. So the ranking really was
done on how many times the variables appeared.
You can argue with this, I'm not sure it's the best way, but
it's a way.
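[Editor's note: a minimal sketch of the counting scheme just described -- tally how many of the sensitivity methods flag each variable as influential. The assignment of variables to methods below is purely illustrative, using parameter names that appear later in this session.]
```python
from collections import Counter

# Hypothetical top-variable lists produced by six of the methods.
top_lists = {
    "regression":     ["Io", "Fow", "WPdef"],
    "morris":         ["Io", "SAwf", "Fow"],
    "fast":           ["Io", "WPdef"],
    "parameter_tree": ["Io", "Fow", "Fmult", "WPdef", "SAwf"],
    "t_test":         ["Io", "Fmult"],
    "differential":   ["Fow", "Io"],
}

# Rank variables by how many methods flagged them.
counts = Counter(v for top in top_lists.values() for v in top)
for var, n in counts.most_common():
    print(f"{var}: flagged by {n} of {len(top_lists)} methods")
```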
What we discovered from doing it this way was that
five of the variables appeared in both the 10,000-year and 50,000-year
time periods of interest. Those five are listed here: fraction of
repository wetted; well pumping rate at the 20-kilometer location; the
average mean infiltration at the start of the computational period,
which determines how much water ultimately infiltrates; and the alluvium
matrix retardation coefficients for technetium and iodine.
And several parameters were significant only in the
10,000-year time period of interest. Those were the flow focusing
factor for wetted waste packages, the fraction of initially defective
waste packages, and the fraction of water diverted from the waste
packages and not getting inside the waste packages.
There were two parameters that were important only for
50,000 years, and those were the alluvium matrix retardation
coefficients for neptunium and uranium. So those are --
MR. HORNBERGER: Dick, could I ask you a quick question?
MR. CODELL: Yes.
MR. HORNBERGER: How did you determine whether a single
parameter was influential or not? Is it the top ten or how did you do
that?
MR. CODELL: Yes, actually it's right. You guessed exactly
right, it was the top ten.
MR. HORNBERGER: Okay.
MR. CODELL: There were -- in some cases, there were -- like
the parameter tree method, there were only five parameters, so it was
the top five. But others, there were 12 to 20 parameters and we just
took the top ten.
Next, I'd like to talk about the alternative conceptual
models, moving away from the strict definition of statistical
sensitivity that we just heard about.
Here we are defining alternative models of performance of
waste package, waste form, and geosphere, both alternative models and
alternative understanding of the models.
We compare the alternatives to the base case and we look at
10,000 and 50,000 years.
We chose nine new alternative models. There's nine and the
base case. The first alternative model was no retardation, NoRet,
which is no retardation for plutonium, americium, and thorium. The
intent of this one was what I consider a gross simplification
of the model to consider the possible effect of colloid transport not
being retarded.
The next one is Model 1, which is the alternative
dissolution model, with much faster dissolution of the UO2 waste
form than what we're using. That is based on carbonate water only.
The third one is matrix diffusion in the legs of the
fracture flow model. We don't normally consider matrix diffusion.
The Flowthru model looks at a different representation than
the Bathtub model for the wetting and dissolution of the waste form and
the waste package.
Focflow is focused flow. That is, four times the flow to
one-fourth the number of wetted waste packages, to look at possible
short-circuiting in the pathway.
The next one is cladding credit. This is to get at the
credit that DOE has taken for many of their performance assessments.
Here we looked at 99.5 percent coverage of the fuel by cladding. Also
combined in this model is the faster fuel dissolution rate, because this
is the model that DOE intends to use or used in their TSPA/VA.
The next one is natural analog, where we tied the release
rates to those measured at the Pena Blanca natural analog site.
The next one is a new model that's in our latest TPA code.
It's called the schoepite model, which Bill Murphy and I worked on. This
ties the release rate to the dissolution of the secondary mineral,
schoepite. That is, we're assuming that all of the radionuclides
released from the primary waste form end up in schoepite and are then
released at the much slower rate at which schoepite dissolves.
The final one is the grain-size model, rather than the
particle-size model, with a faster dissolution rate. This is about the
worst dissolution rate we can have in our model.
The next two figures show the results for the comparison of
the alternative models. These are ranked in order of the 10,000-year
results. Remember now that the entire dose for 10,000 years comes only
from the premature failure of the waste packages. So this is just a
handful of waste packages.
So you won't see any corrosion parameters in these; the
waste packages don't corrode until much later. The exception would be
seismicity failures, but there weren't any.
The no retardation model gives the biggest dose. The Flowthru
model is second, and the reason for that is that you don't have to wait
for the waste package to fill up before you get release. So this gives
the Flowthru model a little jump on the base case model, which requires
the waste package to fill up for sometimes thousands of years before you
have any release.
The base case is in the middle. Focused flow is an
interesting one. You see a larger release, somewhat larger release for
focused flow at 10,000 years. You will see virtually the same -- you'll
see virtually no difference from the base case for the 50,000 year
result.
Then the clad model and the two natural analog -- the
natural analog model and the schoepite model are very small releases
relative to the base case.
For 50,000 years, the bars are kept in the same order as the
10,000-year results, and you can see there is quite a difference in the
results when you keep the same order.
The natural analog and schoepite model are once again the
very smallest doses.
The matrix diffusion result turns out to be the same or
slightly smaller than in the 10,000-year ranking. So matrix diffusion
may make a difference at 10,000 years, but eventually doesn't make much
difference. It just retards the release, which will eventually show up
at a later time.
Moving on. Now, we tried to use the results to evaluate the
integrated subissues. We looked at the influential parameters and the
alternative conceptual models and did a crosswalk to the integrated
subissues, to try to place some ranking on their importance.
But we have to remember the context of the comparison and
view it very carefully. It's based on highly abstracted models, with no
credit for matrix retardation in the abstraction. There is no sampling
of the dose factors; they're always held constant. There is a single
receptor group at 20,000 meters from the site, with no geographic
variation and no closer-in sites.
And we have to note there are important differences between
the two time periods of interest, although we're trying to base most of
our -- most of the influence on the 10,000 year time period.
For 10,000 years, we found that the TPA results were most
sensitive to the subissues listed here. Waste package degradation --
for 10,000 years, there was no real degradation other than initial
failures.
The quantity and chemistry of water contacting the waste
packages, particularly flow focusing and the number of waste packages
that were wetted. Those were the key factors.
The spatial and temporal distribution of flow, and
retardation in the alluvium and in the zones where the water is being
produced.
When you include the disruptive event of volcanism,
disruption of the waste packages and airborne transport of radionuclides
in the volcanic event are most important.
For 50,000 years, once again, we show the quantity and
chemistry of water contacting waste packages, particularly flow focusing
and the number of waste packages wetted; radionuclide release rates from
solubility limits; and the spatial and temporal distribution of flow and
retardation in the water production zones and alluvium, particularly
retardation in the alluvium. We did not consider volcanism for
50,000 years; we only looked at it for 10,000.
So we don't have any volcanism results there.
MR. WYMER: Dick, are these in order of importance?
MR. CODELL: In addition to the formal sensitivity analyses,
we wanted to do a few ad hoc sensitivity studies. These were done
pretty much using the existing code or, in some cases, just
back-of-the-envelope analyses using a hand calculator, but they are
nevertheless interesting.
The two studies we're talking about are on the glass waste
form and colloids. They're not directly tied to evaluation of the
subissues; this is a scoping analysis, a very simple one. However,
models of the two phenomena are slated for the TPA Version 4.0 that
Budhi spoke of.
The first one is the effect of the glass waste form. In
DOE's TSPA-VA, they proposed a model for release from the glass waste
form.
What I did was take the results of those models and try to
adjust the parameters in our TPA 3.2 code to emulate the glass waste
form only, looking at the many considerations of glass -- the waste
packages are somewhat different and temperatures may be different.
What I discovered was that for the 10,000-year time period of
interest, where doses were very small indeed, you got up to a 15 percent
increase in dose. For the 50,000-year time period of interest, the
doses were larger, but the change was smaller, constituting only five
percent.
There are many uncertainties in the modeling that we did.
So these will be followed up on thoroughly later on.
The second study was the effect of colloids. This was a
back-of-the-envelope calculation. It took actual lab data on
plutonium colloids from Argonne and from the Pacific Northwest Lab
experiments, where they actually immersed or dripped on the spent fuel
waste form, and used the concentrations they observed in those
experiments.
I took 300 picocuries per milliliter of plutonium as
representative of a wide range of experiments, assumed it to be 100
percent colloid, assumed no retardation in the geosphere, and then the
total colloid release was mixed into the average water intake at the 20
kilometer well. With a very simple, extraordinarily simple calculation,
I calculated 1.25 millirem for the plutonium in drinking water.
There would be somewhat less than a factor of ten increase
in this dose if you looked at all pathways and all radionuclides.
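[Editor's note: an order-of-magnitude Python sketch of this back-of-the-envelope colloid dose. The 300 pCi/mL source concentration is from the talk; the dilution factor, drinking-water intake, and dose conversion factor below are hypothetical placeholders, since the actual values used are not given in the transcript.]
```python
# Source concentration from the lab data quoted above: 300 pCi/mL.
source_conc_pci_per_L = 300.0 * 1000.0   # convert pCi/mL -> pCi/L

dilution = 1.0e-6            # ASSUMED dilution into the 20-km well water
intake_L_per_yr = 730.0      # conventional 2 L/day drinking-water intake
dcf_mrem_per_pci = 3.5e-3    # ASSUMED Pu ingestion dose conversion factor

well_conc = source_conc_pci_per_L * dilution           # pCi/L at the wellhead
dose_mrem = well_conc * intake_L_per_yr * dcf_mrem_per_pci
print(f"{dose_mrem:.2f} mrem/yr")  # order-of-magnitude check vs. 1.25 mrem
```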
So my point is that even though these may be bigger
than the doses we're showing from the other models, we're still safely
below the 25 millirem in the standard. This gives me somewhat of a warm
feeling that we're on the right track and that colloids may not be
terribly important.
Also, the preponderance of literature says that colloids
don't seem to move very far; at least, the sizes we're considering to be
most important at Yucca Mountain could probably be filtered out in short
order.
So in summary, we looked at a number of sensitivity
analyses. The analyses, in general, emphasize the importance of factors
like water infiltration and fuel cladding, especially at 10,000
years. The important radionuclides for the compliance period were
iodine, technetium, and neptunium, recognizing that these are retarded
only very slightly.
At 50,000 years, neptunium was the overwhelming dose
contributor.
In terms of the alternative conceptual models, the largest
doses came from an assumption of no retardation, which was meant in part
to emulate the colloid model; but, as I suspect, the colloid model is not
very realistic, since it assumes the colloids won't get filtered out.
None of the alternative models that we considered exceeded
the 25 millirem proposed standard. The assumptions about waste form
dissolution, cladding protection and wetting of the waste packages and
fuel were demonstrated to be very important.
The doses were very small for what we consider reasonable
alternative models for release from the waste form based on natural
analog and dissolution of secondary waste forms.
We used the results to indicate the direction for future
model and code improvement. The two ad hoc studies indicated that
colloids and glass probably won't be big factors in our assessment.
Now, we used the results to try to rank the integrated
subissues, but the conclusion here states that we have to be careful.
Nine of the 14 subissues were found to have at least one influential
parameter.
But most of the important issues from the crosswalk of the
sensitivity analysis and the alternative conceptual models and the
integrated subissues related really to factors that were implicit in the
model. We put them there. For example, the waste packages don't fail
from corrosion before 10,000 years. This is an outcome of the model,
but it is a very significant part of the model. We don't have any
failures from corrosion for 10,000 years; then whatever is left is
contributing to the result.
If we're completely wrong about this model, then we might
reach utterly different conclusions.
The thermal reflux delays the onset of flow into the
repository. That's what our models tend to say, but it's really not a
completely well founded conclusion or assumption.
The Bathtub model gives a significant delay of radionuclide
release because of the long fill-up time, but DOE doesn't use this
model, and this model has been criticized by people saying that you
wouldn't have water filling up the waste package.
The sorption in the alluvium between the site and the 20
kilometer location significantly delays the arrival time of the
radionuclides. However, if this alluvium is not there or is not
effective for some reason, we would be likely to reach quite different
conclusions.
That concludes my talk. Thanks.
MR. GARRICK: Thanks, Dick. Questions?
MR. FAIRHURST: Rich, your last four points about no failure
of the waste package before 10,000 years, are you really summarizing the
differences between what you see and what the DOE VA analysis shows?
Are they the main differences?
MR. CODELL: Are you talking about the waste package failure
within 10,000 years?
MR. FAIRHURST: All four of them, because the DOE's analysis
shows significantly greater releases, I think, in total, right, towards
50,000 years? Are you with me or not?
MR. CODELL: These last four bullets talk only -- well,
there are differences. I think our doses turned out to be remarkably the
same, but probably not for the same reasons. I know that the DOE has
waste package failures by corrosion within 10,000 years.
MR. FAIRHURST: Is it one package or one percent? It's one
package that fails, right, in 10,000 years?
MR. CODELL: Yes. That's the initial failure. They're
assuming only one waste package is initially defective. We have up to
30, is that right, Tim? On average, 30, as high as 62. That's a major
difference.
They did consider this last tick, sorption in the alluvium.
I think I was told that ten percent of their Monte Carlo runs, they
assumed there was no --
MR. FAIRHURST: No alluvium.
MR. CODELL: -- alluvium. So in other words, that would
short-circuit, and that was a significant difference.
They don't use the Bathtub model. There is a big difference
in the pumping dilution, too.
MR. FAIRHURST: All right. Thanks.
MR. GARRICK: Dick, since juvenile failures dominate the
performance for the 10,000 years, what is being done to provide
confidence in what that number should be?
MR. CODELL: I'm not really the person to ask. I think you
probably, at the center there, you would find a volunteer. Maybe
they're not here today. They may show up for one of the later
briefings. I'm not really too keen on how they came up with that
number, the initial failures.
MR. WITTMEYER: This is Gordon Wittmeyer. Dick, I think the
numbers we're using right now come from a report that was done at the
center in about 1995, as I recall, the survey, literature survey was
done looking at defects in manufactured materials, as I recall.
I don't recall the numbers off the top of my head, but I
think the initial defects were on the order of the ten-to-the-minus-five
to ten-to-the-minus-three range. But I'm sorry, I cannot address what we
are doing now or in the future to further refine those estimates.
Perhaps when Sridhar comes back into the room, we can get
him to address that.
MR. GARRICK: Were the numbers based entirely on
manufacturing defects or did transportation and handling enter into the
analysis?
MR. WITTMEYER: I believe -- and Sridhar is here -- I
believe those were primarily manufacturing defects and did not include
mistakes in handling, banging things into walls and whatnot, but I'll
defer to Sridhar. We're talking about initial defectives and the basis
for the numbers we have now and what we will be doing in the future to
further refine those estimates.
MR. NARASI: I think the initial defects that we assumed
subsume a lot of uncertainties in many things. So we cannot
characterize them as only arising from manufacturing defects.
They may include some manufacturing defects, and they may
include other defects that go undetected until the closure period, or
even the post-closure period.
So at this point, we don't have a good way of characterizing
what are the origins or sources of the initial defects.
MR. GARRICK: But how did you come up with the number that
you used?
MR. NARASI: The number is sort of an upper limit
conservative number based on some literature survey we conducted. We
looked at -- this is a 1994 report that we published. We looked at the
reactor experience, mainly with respect to cladding defects.
Again, in the case of cladding defects, people are not
completely sure about where those defects came from, because this was
post-reactor defect detections. So it could have come from during the
reactor life or could have come from prior manufacturing.
We also looked at some other industries, notably aerospace
industry and some construction industry, but there the statistics are
not that easily available.
One of the problems with this initial defect is that there
are not good statistics on categories of sources of initial defects.
MR. GARRICK: But the analysis so far has been pretty much a
generic analysis of manufacturing defects. This is not waste package
design specific or --
MR. NARASI: Right.
MR. GARRICK: -- transportation and handling specific or
what have you, and it's not so important as long as the doses are very
low. But if the doses increase, then it seems that there might be merit
in taking a harder look at --
MR. NARASI: As a matter of fact, we are taking a harder
look at it right now, because with the new materials, like C-22, which
give a very long lifetime, the initial defect population plays a bigger
role than it would with materials that are much more corrosive or much
less resistant to corrosion.
Gustavo will talk about it tomorrow in his presentation, but
what we are saying is that we want to take a better look at what gives
rise to these initial defects and how we can better circumscribe the
numbers.
MR. GARRICK: Dick, was the dose increasing in each case at
the 10,000 and 50,000 year time period?
MR. CODELL: You lost me for a minute there.
MR. GARRICK: Is the peak dose for the 10,000 year period at
10,000 years, and the peak dose for the 50,000 year period at 50,000
years?
MR. CODELL: The 10,000 year dose is at 10,000 years, but
the 50,000 year dose isn't always. Sometimes it's sooner.
Remember, these are -- it's the peak of the mean.
MR. GARRICK: Yes.
MR. CODELL: So it can vary for individual runs. For less
than 50,000 years, there are some; there are some peak doses where it
occurs before the 50,000 years.
For 10,000 years, it's almost always at 10,000 years.
MR. GARRICK: But I guess for the total for the TSPA, where
they were averaging all the realizations, the dose was still
monotonically increasing past the 50,000-year period.
MR. CODELL: Yes, that's generally true. I think for the
alternative conceptual models, some of those would peak earlier than
50,000 years. I don't have those in front of me. But particularly
where you have situations where you have a very fast dissolution rate,
you're using up your inventory before 50,000 years. So those will tend
to peak sooner.
We're only interested in the peak of the mean for -- that's
what those bar charts showed, the peak of the mean.
MR. GARRICK: But in all your runs, Dick, when is typically
the time to peak dose? Is it in the 50,000 to 500,000 year range? Does
it go as high as 500,000 years?
MR. CODELL: We don't carry it any further than 100,000
years. Most of the runs were done for 50,000 or 100,000 years. In many
cases, as I recall, it's still increasing at 100,000 years, in most
cases. I'd have to look -- we're going to be looking at that.
It has to do mostly with the climate cycle, whether the
infiltration is still increasing or not. Where that peak is, I don't
recall.
MR. GARRICK: How many of your Monte Carlo runs did the dose
exceed -- did the calculated dose exceed 25 millirem within 10,000
years?
MR. CODELL: For 10,000 years, I don't think any of them
did. Not for the base case.
MR. GARRICK: For the base case. How many of them exceeded
ten millirem per year? Because you did an analysis on that one.
MR. CODELL: I can look it up. I don't have it in front
of me.
MR. GARRICK: I mean, your gut level feeling, was it --
MR. CODELL: Very few. Except for the volcanism case, they
were always quite small for 10,000 years.
MR. GARRICK: The analysis you performed, Dick, is
principally to look at parameter sensitivity and you've done some
translation for us of what that means in some physical terms. But are
you generally satisfied that you're moving in a direction where you can
--
MR. CODELL: We've lost you. We've lost you again.
MR. GARRICK: I was going to ask a question about the
translation of parameter sensitivity into physical system sensitivity.
Did you hear that?
MS. WASHINGTON: No, we didn't hear the question. Could you
repeat it, please?
MR. GARRICK: You've done a considerable amount of work now
in parameter sensitivity and you've currently interpreted that for us in
terms of what's important and what's not.
But are you satisfied that this analysis indeed can give you
a strong basis for looking at the importance of specific physical
systems, specific barriers?
MS. WASHINGTON: Excuse me. We're sorry, we've cut out
again. We did not hear you.
MR. GARRICK: All right. Well, we'll give up. All right.
I guess we will move to the next speaker.
MR. CAMPBELL: John, maybe you can try the microphone.
MR. LEE: There is a mic right in front of him. The only
thing I can figure is we're having some sort of network problem on the
phone line.
MS. WASHINGTON: It went out again.
MR. LEE: It's a network hit over the phone lines. There's
not really anything -- not really anything that can be done, I don't
think.
MS. WASHINGTON: Okay. We'll just keep letting you go when
you cut out.
MR. CODELL: I have a figure here. I have a figure here.
MR. GARRICK: Go ahead.
MR. CODELL: I have a figure here from the report we're
putting out. The top one -- it's hard to see -- I will just point out
that this shows the dose versus time for 10,000 years, the expected
dose. This is ten-to-the-minus-two millirem, where I have the pencil
point. That's about the highest curve -- about the highest of any of
the individual curves, which are hard to see, but there is one, the very
worst one, that goes up here about like that and is under
ten-to-the-minus-two millirem. This is for 10,000 years.
So that gives me an idea of the spread in the results.
They're still very small for 10,000 years. It's millirem per year,
ten-to-the-minus-two. Maybe ten-to-the-minus-one at the very peak.
MR. FAIRHURST: Ten-to-the-minus-one millirem, right?
MR. CODELL: On the base case. Millirem, yes. So these are
very small doses, as you might expect, because there are very few waste
packages. If you look at the scale on the 10,000 year doses, it's in
the microrem range for the effective dose.
MR. GARRICK: All right. Any other questions or comments
that they can't hear?
MR. McCONNELL: Dr. Garrick?
MR. GARRICK: Yes.
MR. McCONNELL: Could I --
MR. CODELL: We can hear you fine, if you want to try again.
MR. McCONNELL: I just wanted to point one thing out, and
that's in relation to a question Dr. Hornberger raised when Bill Reamer
was talking about how we use our risk insights in making our process
more transparent.
It's these results that will eventually be published in the
sensitivity studies report that will be given to the board in the KTIs
to use in their planning process for the next fiscal year.
So this is part of the effort to make it transparent, the
process of how we're doing risk-informed planning.
MR. HORNBERGER: Actually, my question had to do with how
you use information that is non-risk-based.
MR. McCONNELL: Oh, never mind.
MR. HORNBERGER: But thank you anyway. Dick, now I'm
confused, because I made a note that when you talked about your T-test
on means, you said you put it into less than ten millirem and greater
than ten millirem classes, and now you're telling me that there wasn't
anything above 20 microrem.
MR. CODELL: No. That was only done for 50,000 years. For
50,000 years, there are bigger doses. There were doses up into the --
up to approximately 100 millirem for 50,000 years.
MR. GARRICK: Okay. Gordon, you're next.
MR. WITTMEYER: Before I start on my presentation, I think it
would be good for us to answer the question you had about whether or not
we had looked at subsystems that were important to performance from the
sensitivity analysis, and maybe between Dick and me, we can address
that, because I think it's good that we try to address it.
I think one of the things that Dick did do was the
alternative conceptual model. So at least we can look at a fairly large
subsystem there and get an idea of what's important. We can see that how
we treat spent fuel dissolution is very important, as well.
I think, also, in trying to tie the collection of parameters
that have proved to be important to these integrated subissues, we're
also able to speak to which subsystems are important, and I think
towards the back of Dick's presentation, I think on page 21, he has
talked about which key integrated subissues at 10,000 years would prove
to be important.
Again, that came from the sensitivity analysis and we looked
at a collection of parameters that were -- that came out of this
particular integrated subissue. So you can see that even by doing
parameter at a time analysis, we are able to point back to subsystems.
I think that's what Dick had intended to say. Now, at this
point, I'm going to try and go into that in a little more detail,
talking about one approach that has been brought forward here recently,
the parameter tree approach. We discussed this informally back in
January, and Budhi presented it to you, Dr. Garrick. We've done some
more work since then.
I'm going to try to update you on that work. If we look at
the second slide, on the objectives that we had, the first two bullets
here are really things that we're trying to do to address some of the
questions that you've brought forward. One is to try to make it more
transparent as to what factors are the ones that contribute most to
total system performance, and we're going to try to do this by
post-processing what we already have from the TPA code, not developing a
new code.
Likewise, this is an easy way to look at sets of parameters,
to look at the joint sensitivity, if you will, of performance to
collections of parameters.
Another important thing, in this third bullet, is that we get
a lot of output from this code. Here is an example. We get 4,000 model
realizations -- well, we get 4,000 model realizations if we run 4,000
realizations, but sometimes we don't want to do that. We have a lot of
parameters that we vary, 246 of those. So it's really kind of a
daunting task.
So we need something that can go through all this data, do
data mining, and I don't mean in a pejorative sense, but I mean really
find information in there and make it clear to the analysts.
Now, the method that we have looked at here is the tree
approach. This next slide, which probably doesn't show up too well on
the overhead, but which I think you can see clearly in your handouts,
looks at a single parameter.
I'm going to step through this kind of slowly and try and
explain each part of what we see here. Let's assume we do have a
collection of 4,000 realizations from the TPA code. We make a division
of these realizations, depending on whether or not a single parameter is
greater than or less than its median value. So we get two bins, each
one consisting of 2,000 realizations. Now we look at what difference
did it make when we looked at the high values versus the low values,
what effect did it have on the performance measure.
If we choose as our performance measure the peak dose -- and
I'm talking about something different than we talked about before: where
earlier we were talking about the peak of the mean dose, here I'm
comparing each realization to the mean peak dose. I don't mean to be
confusing, but this reflects work that was done before we had a change
in Part 63, so we're looking at a different performance measure.
We compare what the actual peak dose is of each realization
to what the mean peak dose is for all 4,000 realizations, and you can
see that for the case where this parameter, whatever it is, is greater
than the median value of the parameter, there are 1,700 of those 2,000
realizations whose peak dose is greater than the mean peak dose.
Likewise, we look at those where the parameter value is less
than the median value and there's only 200 realizations there, where the
peak dose is greater than the mean peak dose.
We can construct these ratios that we call P-one-plus and
P-one-minus, which essentially tell you the fraction of the realizations
where the peak dose exceeds the mean peak dose.
In the case where the parameter is high, we had 85 percent
of the realizations exceeding the mean peak dose. Where it's low, the
fraction is .10.
Now, one possible measure of the sensitivity of the
performance measure to that parameter is what I show on the next slide,
the very first bullet: the absolute value of the difference between
those two ratios. In this case, it's .85 minus .10, so the sensitivity
measure is .75.
Note that had the parameter not distinguished anything -- if
it had made no difference to performance -- we would expect this measure
to be essentially zero. The greater the difference in those two values,
or the larger this value of S-1, the greater the effect a single
parameter has on the estimated performance.
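[Editor's note: a minimal Python sketch of the one-deep parameter tree measure just described, using synthetic stand-in data in place of the actual TPA output.]
```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(size=4000)                     # stand-in parameter samples
peak_dose = rng.lognormal(0.0, 2.0, 4000) * theta  # stand-in peak doses

mean_peak = peak_dose.mean()
high_bin = theta >= np.median(theta)   # split: ~2,000 realizations per bin

# Fraction of each bin whose peak dose exceeds the mean peak dose.
p_plus = np.mean(peak_dose[high_bin] > mean_peak)
p_minus = np.mean(peak_dose[~high_bin] > mean_peak)

# Sensitivity measure: near zero if the split makes no difference.
s1 = abs(p_plus - p_minus)
print(f"P+={p_plus:.2f}  P-={p_minus:.2f}  S1={s1:.2f}")
```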
Now, what we do is apply this single-branch parameter
tree -- actually, it's a two-branch tree, I guess, but it's only one
branch deep -- to each of the 246 sampled parameters, and once we
have determined which of all those parameters is most important, we
follow a very similar procedure to look at the joint importance for a
pair of parameters.
So we use that first parameter with each of the
remaining 245 parameters, and this is done until we have gone four or
five or six parameters deep into the tree, to get an idea of what
collection of parameters is important.
On slide five, this is just an example where we go two
parameters deep. I'm not going to talk through all the detail, because
it's exactly the same as what I discussed two slides ago, but you see
here we have different measures now, P-one-plus-two-plus, compare that
value to P-one-minus-two-minus. If you took the difference between
those two, that tells you what the effect is of having both of those
parameters high versus having both of those parameters with a low value.
Now, we don't always expect that a parameter in the high
range will necessarily imply high doses. It could be something
where there is a negative correlation, such as pumping and borehole
dilution; the greater the pumping at a well, the greater the dilution,
so you would expect to have a lower peak dose from that realization.
The next slide, slide six, is an example of where we have
applied this approach to results from the TPA code with 4,000 actual
realizations and this parameter tree goes five deep on branches, I guess
-- I don't know if there is a term for it, but I'll just say it's five
deep into the parameter space here.
If we look along the top, there are five different parameter
names, and I'll explain what each one of those is. We have Io, which
is the initial infiltration rate, which was determined with the
one-parameter tree to be the most important. Then Fow, which is a
factor that describes the focusing of flow, of seepage. WPdef is the
fraction of juvenile failures -- initial defectives, that is. Fmult is
the fraction of water that is not diverted from the waste packages, and
SAwf is the sub-area wet fraction; that's the fraction of waste packages
that will get wet in a sub-area.
Now, let's look at the two columns that are on the
right-hand side of this. Under the column with the heading P-one-plus
or minus, et cetera, that ratio -- for example, in the first row,
128/129 -- 128 is the number of realizations that had a peak dose that
exceeded the mean peak dose, out of a total of 129 realizations in this
particular bin, the bin where each of those parameters was greater than
or equal to its median value.
Likewise, similar statistics apply for everything in that
column.
In the second column, the one that says fraction of mean
peak dose, we simply assign the fraction of the mean peak dose that
can be attributed to that collection of realizations. So if we look at
the top row again, the 129 realizations in that bin contribute 21
percent of the total mean peak dose.
I won't go through all the numbers in here, but you'll see a
few other large fractions that don't necessarily come from all of
the parameters being high. Look at the third row, where we
actually have the lower value for Fmult, but still explain
roughly 13 percent of the mean peak dose.
MR. FAIRHURST: I'm sorry. What was Fow?
MR. WITTMEYER: That's the flow focusing factor and I can --
maybe I should have an expert in the TPA code explain those in more
detail. We can have that done at a later time.
MR. FAIRHURST: Yes.
MR. WITTMEYER: Now, what we can look at here is we have a
collection of things then that are collectively important to
performance; that is, high values of these five parameters. In a way,
we can look at this as being a scenario.
If you have this collection of parameters, all high, then
you tend to have a huge effect on performance. Some of these things are
related types of parameters. Fow, Fmult, sub-area wet fraction, Io, all
have to do with the seepage, and then waste package defective is a
slightly different parameter.
Now, we can apply this not only to parameter values, but
also to subsystem outputs or intermediate outputs. On page seven, I've
given a very preliminary example of what we could do with some of the
intermediate outputs we get from the TPA code. TCR, at the top, is the
total cumulative release, and that's for all radionuclides. The
subscripts, ebs, uz, sz, are from the engineered barrier system,
unsaturated zone, the saturated zone.
So, again, we've taken a number of realizations -- in this
case, 1,440 realizations. We've divided them depending on whether
the total cumulative release from that subsystem is greater than or
less than its median value.
On the first branch, you see a little equation there: the
probability of TCRebs exceeding its median is equal to .5. That just
says that half of your realizations will be greater than the median and
half less than the median.
Now, it's a little more interesting when we look at the next
level, when we look at the unsaturated zone, and this is in the natural
progression of how radionuclides move in the base case scenario, from
the ebs into the unsaturated zone.
This conditional probability here simply tells you that if
the release from the engineered barrier system is greater than the
median value, the probability that the release from the uz will be
greater than the median value is .9. So really, the effect of the
uz on changing the release coming out of the ebs is fairly minimal.
If we look even a little deeper, we can see that the
probability of the release coming out of the sat zone exceeding its
median, given that both the uz and the ebs have high releases, is
actually a little bit smaller. The saturated zone tends to have
a little more effect on delaying or retarding the movement of the
radionuclides.
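[Editor's note: a Python sketch of the subsystem version of the tree, computing the conditional exceedance probabilities described above from synthetic stand-in releases; the correlation structure is illustrative only.]
```python
import numpy as np

rng = np.random.default_rng(2)
tcr_ebs = rng.lognormal(0.0, 1.0, 1440)            # ebs cumulative release
tcr_uz = tcr_ebs * rng.lognormal(0.0, 0.3, 1440)   # uz tracks the ebs closely
tcr_sz = tcr_uz * rng.lognormal(0.0, 0.8, 1440)    # sz modifies it more

ebs_hi = tcr_ebs > np.median(tcr_ebs)
uz_hi = tcr_uz > np.median(tcr_uz)
sz_hi = tcr_sz > np.median(tcr_sz)

# P(uz high | ebs high): near 1 means the uz barely changes the release.
print(f"P(uz hi | ebs hi) = {np.mean(uz_hi[ebs_hi]):.2f}")

# P(sz high | uz and ebs high): somewhat smaller if the saturated zone
# delays or retards the radionuclides.
print(f"P(sz hi | uz hi, ebs hi) = {np.mean(sz_hi[uz_hi & ebs_hi]):.2f}")
```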
Again, the two columns on the right-hand side tell you the
same information, although now we're looking at these subsystems or
these collections of subsystems, rather than parameter values.
The first column, the first row, shows that if all of the
releases are high, then, just as you might expect, you're likely to
have a peak dose that exceeds the mean peak dose. That's shown again
in the second column, where fully 96 percent of the mean peak dose
comes about when the releases from these three subsystems are greater
than their median values.
That's pretty much an expected result. I don't think that
comes as really any surprise.
Now, we're working on implementing this in a computer code,
to allow you to take any collection of parameters or any of the many
different intermediate outputs that we have from the TPA code, do
this analysis, and display it in this form, to see if it gives us
any additional insights into what collection of parameters or collection
of subsystem outputs dominates or is the most important factor in total
system performance.
In the summary and conclusions, there are three bullets here.
I think I've probably stated them already, but I'll go over them very
quickly. This is one method that we can use to clearly identify sets of
parameters or subsystems that have a large influence on the risk.
It is relatively straightforward to implement and, I think,
straightforward to interpret, as well. Hopefully, you have understood
it in a straightforward manner. I'll find out shortly, I think.
And I think the last thing about it is that we don't impose
any constraints on the analyst with the TPA code before he runs it. He
can take whatever runs have been done and then post-process them and
interpret them with this method.
I'd be happy to take any questions.
MR. GARRICK: Questions?
MR. FAIRHURST: I'll ask you a question which is, I think,
somewhat peripherally related to what you said. But the focusing
factor, did you just use the same spread as DOE?
MR. WITTMEYER: No. It's a different parameter. I don't
think it maps directly to any DOE parameter that I know of.
MR. FAIRHURST: But what you mean by flow focusing, you take
the infiltration rate and you multiply it by a certain amount to
represent some sort of focusing into the drift.
MR. WITTMEYER: Maybe I could ask Dick or someone else who
is a little more intimately familiar with the details to explain that.
MR. CODELL: I can address that. Not all the waste packages
are wetted all the time. So the fraction of the packages that are
wetted is a factor less than one, but each wetted waste package can get
a fraction of the average amount of infiltrating water that can be
greater than or less than one.
So that factor, Fow, expresses the latter: the amount of
water that the wetted waste packages get that is above or below the
average infiltration rate.
MR. FAIRHURST: I see. So that's sort of variation along
the drifts, in essence, right?
MR. CODELL: Well, the abstraction in the TPA code is much
simpler than that.
MR. FAIRHURST: I understand.
MR. CODELL: We really only have one representative waste
package per sub-area. So it applies to all the waste packages. It's an
ensemble idea and it comes about from an ensemble calculation of
statistical parameters.
MR. FAIRHURST: I understand. It's along the drift.
MR. HORNBERGER: Gordon, I think I did follow your
presentation. It was quite nice and quite clear. Both the presentation
that Dick gave and your presentation.
I guess my question is a bit more generic, and that is,
I'm not quite sure why I want to know a lot about sensitivity if my
doses are in the microrem range. That is, do I really care that the
flow focusing factor is really important if the calculated doses are
absolutely nowhere close to any standard?
MR. WITTMEYER: That's a nice big question, isn't it? It's
one we hear frequently. But I think, nonetheless, everything else
being equal, you need to focus your program on what is most important;
in this case, what I've described here is the base case analysis or our
base case scenario.
Your interpretation is more --
MR. CODELL: Gordon, could I add something?
MR. WITTMEYER: I think you will. Yes, go ahead.
MR. CODELL: The emphasis is once again on creating a
review tool, and a lot of the parameters, a lot of the assumptions, have
to be developed by DOE. Some of the numbers we have -- we're not
saying that that is the correct approach.
DOE is going to have to come in and defend a lot of those
numbers and the doses could change. There's a lot of assumptions in
there. The critical group assumption, the pumping rate, the dilution
factors. So I think we still want to look at what's driving our model,
trying to understand the system.
Yes, there's a dose there at the end that people want to
focus on, but I don't think we're ready to say that the doses aren't
going to be higher than a microrem at Yucca Mountain at this point, with
our numbers.
DOE has to come in and defend a lot of those parameter
values. Yes.
MR. WITTMEYER: One of the things, also, it's a way of
testing our code, just kind of verifying to ourselves that it makes
sense; that when these parameters or this collection of parameters are
all high, does it make sense that the realizations suggest high doses.
We hope that we're getting the physics however we capture it through
tables of values or simple abstractions, hopefully getting that right.
This is a very straightforward way to test what we have in
the code, frankly; to give us a good warm feeling about what we have.
MR. HORNBERGER: Did you carry your tree analysis out to
doses at 50,000 years like Dick did, or is yours strictly for 10,000?
MR. WITTMEYER: I think it's just been done for 10,000
years.
MR. HORNBERGER: I think it would be quite -- I understand
Tim's point and your point, and I certainly accept that. I was -- of
course, I posed the question in as argumentative a way as I could think
of. But I understand that you want to do the analyses.
I do think that it would be interesting to carry the
parameter tree approach out to 50,000 years, because there Dick's
analysis suggests that it might not be the same realizations that are
leading to doses of concern, doses of true concern, at 50,000 years as
in your analysis at 10,000 years.
So you might want to exercise your model to learn about it
at the 50,000 years, as well.
MR. WITTMEYER: I agree. I think that's probably in the
works or will be done as soon as we have this tool, computer tool that
makes it easier to do this.
MR. GARRICK: How do you know which parameters you wish to
evaluate? Is that strictly based on the sensitivity?
MR. WITTMEYER: No, it's not. You don't -- you have 246
parameters and you don't know which one you're going to evaluate to
begin with, which collection. So you start out one at a time. You
figure out which of the 246 by itself describes most of the high doses.
MR. GARRICK: I understand that process.
MR. FAIRHURST: It's the fraction of waste packages that
have failed.
MR. GARRICK: That's where it starts, yes.
MR. FAIRHURST: That's what is going to be the big one.
MR. HORNBERGER: But it isn't, because the tree that Gordon
showed us, it was the amount of infiltrating water that was the most
important.
MR. FAIRHURST: No, I'm talking about 50,000 years, because
he had no failures.
MR. HORNBERGER: We don't know, right.
MR. GARRICK: Yes.
MR. HORNBERGER: But, you see, I would have a question just
on the general approach and I understand your approach, but suppose you
forced it by having the fraction of juvenile failures be number one and
look five deep. Would you or would you not come up with a slightly
different conclusion?
I understand the logic, but I don't know that you could
prove to me mathematically that that would give you the five parameter
set that is absolutely most important.
MR. WITTMEYER: I could say that you could -- you're right.
Someone could come and tell me that this is the most important thing.
What you do with the tree after that is up to you. You follow your
method, and we might get a different result.
But I think we have done the exercises, and maybe Budhi
could comment on that, and I think we do get slightly different results.
But what I'm showing here is just trying to consistently
apply one way of divvying up everything that we have, and you're right,
it's not unique, but it's consistent within this framework.
MR. GARRICK: Any other questions?
MR. WYMER: I wouldn't want my silence to indicate I
completely understand what you said, however.
MR. WITTMEYER: Well, this changes everything. Where can I
help?
MR. FAIRHURST: Touché, touché. Wonderful.
MR. WITTMEYER: Now I don't feel comfortable.
MR. GARRICK: Obviously, this could get pretty
computationally complex if you extended this to a long string of
parameters. Also, if you, for some reason or another, wanted to
consider more than two states.
MR. WITTMEYER: You're right. It does become more complex,
but I think it's doable. I think -- and I see Budhi itching to answer
this.
MR. SAGAR: I was going to say that it's not that the method
would get computationally more complex. We have found that as you go
deeper with more and more, you need more and more realizations for your
results to make any statistical sense.
Actually, the calculation is just a repetitive calculation.
MR. GARRICK: All you need is memory.
MR. SAGAR: Well, not even memory. We already have that.
We are handling that many parameters already, and it's pretty quick in
computation. It doesn't take time.
What it does require is a lot of realization if you add
another parameter. You don't want to draw your results from ten
realizations attached to the limb of the tree. You want many more.
I think that's what limits it, much more than anything
else.
MR. HORNBERGER: Just like anything else, when you look at
correlations, they tend not to be as stable statistically as the means
or medians. Have you played with this to know how many realizations you
need even to go five deep?
MR. SAGAR: No, we haven't played with it, but if we take a
look at the tables that Gordon presented, you would see that some of the
branches of the tree are only associated with ten realizations. That's
not enough. At that point, it's not enough for that branch.
But I'm assuming that most of the important results, in the
sense of 21 percent and 12 percent contribution to mean, are associated
with branches which do have a significant number of realizations.
MR. HORNBERGER: Even there, my point is that I would be
more convinced if you showed me five different 4,000 realization
comparisons and that that 21 percent was stable.
MR. SAGAR: Oh, yes. Because on this issue that you are
talking about, yes.
MR. HORNBERGER: Right.
MR. SAGAR: Yes. If we had another set of 4,000
realizations, that would tell me something. We haven't done that.
MR. WITTMEYER: That's a more typical test. Actually, I
think the mean is the more difficult one, because it tends to be less
stable, particularly when you're getting very large variation in your
doses, and it reflects your large doses. The higher you go, the more
realizations you need -- square, cube, et cetera.
MR. GARRICK: All right.
MR. SAGAR: Norm is next.
MR. EISENBERG: Are you guys ready?
MR. GARRICK: We're ready for you.
MR. EISENBERG: Okay. I'm going to talk about importance
analysis, as the next slide shows. I'll talk about some of the concepts
of importance analysis and give an example for a repository.
The purpose of the importance analysis is to estimate the
impact of system components on the net risk. Previously today you've
heard other types of sensitivity analyses, looking at the effects of
parameters, looking at the effects of alternative conceptual models,
looking at the effects of various radionuclides, and specific issues;
for example, colloids.
So now we're talking about the effects of
components. As we've worked on this, we've come to believe, I guess,
that this is just another type of sensitivity analysis.
It's a pretty simple concept. You start out, you look at
your system performance and estimate the risk assuming that the
component performs its modeled function the way you think, and then,
for the selected component, you do the calculation again assuming the
component does not perform its modeled function, but all the other
components perform normally.
And so you have basically two results: the risk with the
component not performing its modeled functions and the risk with all the
components performing their modeled functions. You take the difference,
and then the ratio shown in the equation gives you a normalized
importance measure. Of course, the bigger this measure is, the more
important the component is in affecting the risk.
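[Editor's note: a minimal Python sketch of the normalized importance measure just described, with hypothetical risk numbers; the component and values are illustrative only.]
```python
def importance(risk_without_component: float, risk_base: float) -> float:
    """(risk with the component non-functional - base risk) / base risk."""
    return (risk_without_component - risk_base) / risk_base

risk_base = 0.02           # mrem/yr, all components performing as modeled (hypothetical)
risk_no_alluvium = 0.35    # mrem/yr, alluvium retardation turned off (hypothetical)

print(importance(risk_no_alluvium, risk_base))  # large value -> important component
# A negative value would mean removing the component actually reduced the risk.
```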
If, by chance, you happen to get a negative value, it means
that by assuming the component didn't perform its function as modeled,
the risk decreased, which means that that particular component has
negative effects on system performance.
MR. GARRICK: Not a good design. Sorry, Norm, go ahead.
MR. EISENBERG: We do a calculation like this. Did you guys
hear what I said before?
MR. GARRICK: No. You're cutting out.
MR. EISENBERG: Oh, there you guys are. A lot of designs
are done on a deterministic basis, so you may understand, in a
deterministic fashion, the effect of the particular component on
performance, but you may not understand fully what the risk implications
are.
Then, of course, we have a natural system. So some of the
components in the system are there whether we want them or not. So this
is a way of taking a look at everything on sort of an even playing
field.
Now, I will move on to the example. We used an earlier
version of the TPA code that had the old design, with the waste package
that was not as long-lived as the current design.
This graph shows the results from this kind of analysis, and
you see that about four things stand out as having large importance.
Now, in order to make this claim, I should say that, strictly speaking,
this variable I, the importance measure, should be treated as a random
variable, so you can use different statistics associated with the random
variable to determine what the importance is.
This looks at four different statistics, essentially, which
all give similar results, but I don't know that I need to get into those
details.
But what this shows is that the pumping well and the alluvium
in the saturated zone have a big effect on the risk according to this
measure, and that the Topopah Springs, below the repository, also has a
significant effect.
Everything else has a small effect, and you can kind of see
that the layers above the repository, in some cases, seem to have a
negative effect.
Another way of displaying the results is to look at the CCDF
of dose, assuming either that all the functions are performed as
modeled, the base case, or that groups of components -- in this case,
the natural system and key elements of the engineered system -- do not
perform their modeled functions. As you can see -- and remember, this
is the old version of the code and the old waste package -- the natural
components have a much larger effect than the engineered components.
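A CCDF like the ones described can be built from a set of Monte Carlo
dose realizations roughly as follows (a sketch only; the input array
stands in for whatever the TPA runs produce):

    import numpy as np

    def ccdf(doses):
        # Sort the realization doses, then compute for each dose level
        # the fraction of realizations that exceed it.
        d = np.sort(np.asarray(doses, dtype=float))
        exceedance = 1.0 - np.arange(1, d.size + 1) / d.size
        return d, exceedance

    # One curve per case -- base case, natural components off,
    # engineered components off -- typically plotted on log axes.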
Now, there are some potential conceptual difficulties in
using this kind of an approach. One is how you go about assuring a
systematic implementation when a component is assumed not to perform its
modeled function, because it really gets into explicit
representations in the model and implicit representations in the model,
and it's not always easy to have a unique solution as to what we mean by
having the component not perform its modeled function.
There is difficulty in conceptualizing that the natural
system doesn't perform its modeled function. Of course, as with a lot
of these methods, there is a potential problem with interpreting the
results.
So in conclusion, this is a simple method to rate the system
components based on their impact on the risk from the system. It's a
type of sensitivity analysis; it complements information obtained by
other sensitivity methods, and that property, I think, it shares with a
lot of the other sensitivity methods.
Currently, it's not a post-processor; that is, it requires
modification of the code. However, as I understand it, later versions
of the code are planned to have much more capability to look at
intermediate outputs, and that may facilitate doing importance analysis
as a post-processor, given the right kinds of intermediate output.
And that's all.
MR. GARRICK: In your model, when you take a component and
put it in the no function mode, does that no function mode reflect all
downstream activities from -- it seems as though, we were talking
earlier, in an earlier meeting, about that there were some geochemical
issues that still assumed that that --
MS. WASHINGTON: Excuse me. We lost you again.
MR. GARRICK: I think it's me.
MR. LEE: What we're taking is network hits, and that's
something that we're just having to deal with through the phone lines,
where we're getting connected up. The way that it's affecting all three
of us, I'd say it's at the bridge at the NRC, there is something going
on there.
MS. WASHINGTON: I hate to tell you that you just cut out
again. Give me your explanation, Pat.
MR. LEE: About the only thing I know to do is try to call
in again. It's happening to all three sites at once, which means it's
happening through the bridge. I don't know if there's network hits
getting taken there or what.
MS. WASHINGTON: It appears to be only your site, because we
drop back to Yucca Mountain. We're still on.
MR. LEE: Is that right? Well, I can try dialing into a
different port that you have on your bridge and see if that doesn't
clear it up. You want me to try to do that real quick? It will take
about two minutes.
MR. GARRICK: Why don't we -- we're very close to a break.
Maybe what we ought to do is take that break and let you see if we can
get some of these bugs worked out, come back and finish this topic.
Would that make sense?
Okay. Then why don't we take -- can you hang around, Norm?
MR. EISENBERG: Sure.
MR. GARRICK: Okay. Why don't we take a 15-minute break.
[Recess.]
MR. GARRICK: All right. We were asking Norm to wait over
for some additional questions. The question I had was adequately
answered during the break by Budhi, so I don't have any more
questions.
Charles, you got any more questions about the importance
ranking?
MR. EISENBERG: But how do I know that it was a good answer?
MR. SAGAR: He doesn't trust me.
MR. FAIRHURST: Norm, on I think it's page eight of the
handout, you have on the importance analysis, the one showing natural
barriers not performing their functions, the base case, and the other
one.
MR. EISENBERG: Yes.
MR. FAIRHURST: Now, there was a presentation by DOE in
which, at one time, they -- it wasn't exactly an importance analysis,
but it was one where they took out components and they showed something
there with a 99 percent dependence on the engineered barrier -- excuse
me -- on the waste package. Yours is very different here, right?
MR. EISENBERG: Right. But remember, I said that we're
using an earlier version of the TPA code and it's assuming the waste
package, not the current material, but the old material, C-25, which is
not as long-lived.
Therefore, you see the natural barrier taking a much larger
role in this model than it does in the DOE analysis.
But we -- Tim has looked at what they did and we have some
questions about exactly how they implemented it. So we're not sure that
what they say represents the effects of the waste package and the
natural system really does represent those effects.
MR. FAIRHURST: I agree. They themselves had that caveat
because they said that they took some extremely conservative positions
with regard to the natural barriers and over-emphasized the waste
package. But I was just interested that -- okay. All right. Thank
you.
MR. GARRICK: Ray, got any questions?
MR. WYMER: No, I don't.
MR. GARRICK: George?
MR. HORNBERGER: Just one question, Norm. You had indicated
that one of the problems or -- I can't find it. Not necessarily a
problem, but one of the issues was just philosophically what it means to
remove the functioning of a given barrier. Did you have any -- I mean,
is that just something you live with with this kind of analysis or do
you have any thoughts about ways to satisfy critics who would point to
this as some disadvantage?
MR. EISENBERG: Well, you know, we've done a lot of
thinking about this particular issue. First of all, I think you have to
think about why you would be interested in doing this kind of analysis
for a component and it's not necessarily because you think either that
the component would absolutely stop functioning as you think or fly
away. That's not what is intended at all.
But it's a way of getting at, for a particular component, if
it turns out to be highly relevant in determining what the overall risk
is, then it's that component that you have to have a great deal of
substantiation for in terms of the modeling and the data that we're
using to support the models and the parameters that you're plugging into
the models.
So it's a way of getting at something that I'm not sure some
of the other methods of analysis enable us to get to.
So that's one way to answer it. Another way to answer it,
from the point of view of somebody who has got a background in physics,
is that this is, pardon the expression, the Gedanken
experiment. This doesn't mean that this could actually ever happen.
It's a thought experiment and we're trying to see something about the
behavior of the model, because after all, that's what all the
sensitivity methods do. They're looking at the models. They're not
looking at the real system necessarily.
So it's telling us something about the model and where the
performance is coming from in terms of the model that was chosen.
That's why we're very careful to say, and the approach we adopted, was
that we're saying that the modeled function ceases to occur, not that
something else happens or that the function just disappears.
Let me say one other thing, however. Many years ago, I took
a course in ground water pollution up at Princeton, the famous course,
and one of the examples they discussed there was the two potential
polluters of a municipal well and the argument was about which one was
actually polluting the well, and one of the litigants said, well, we
can't be polluting the well because there is an aquifer that's
separating where our pollution is going from the municipal well and
we're innocent.
And I believe it was later determined that, yes, there was
an aquifer, but it was a leaky aquifer. So that's an example of a
natural system where you model a function and although the function
might not disappear completely, it would greatly diminish, to the extent
that it turned the tide in this particular law case.
So once again, I'm trying to underline the idea that this is
a way to explore, in a kind of aggregate fashion, the impact of both the
data that we think we understand describing the particular component and
the model that we believe is most apt for the particular component.
And if you assume that that function doesn't occur anymore,
it gives you an idea of what the impact is on the total system
performance.
The bigger that is, the more sure you need to be of those
two elements. I think that provides some useful insight.
MR. GARRICK: Norm, are there any constraints on how finely
you define a component?
MR. EISENBERG: Well, I'm of the old school of system
analysis. I think a system or a subsystem is any piece of the universe
that you can draw a material boundary around, so that everything on the
inside is the system or subsystem, and everything on the outside is not.
And other than that constraint, I don't believe there is.
MR. GARRICK: For example, what --
MR. EISENBERG: I mean, obviously -- I'm sorry.
MR. GARRICK: For example --
MR. EISENBERG: What I was going to say is you don't want to
discretize your system too finely, because it just makes it much more
difficult to do any kind of sensible analysis. I'm sorry, go ahead, Dr.
Garrick.
MR. GARRICK: What about the cladding issue, with and
without? Norm, can you hear me now?
MR. SAGAR: Yes, but he doesn't want to answer.
MR. GARRICK: He's lost us again. Norm, can you hear us?
MR. SAGAR: Certain responses it refuses to transmit.
MR. GARRICK: He's frozen.
MR. SAGAR: If I might try to answer this, Dr. Garrick. We
did indeed consider cladding to be a component, even though in the base
case model that we worked with when we did this example, cladding was
not included at all in the model. So we did not include it.
But conceptually we said, yes, that's one component and we
could turn off its function, in the sense that the lifetime of the
cladding would be turned off in one case and considered in the normal
case. But that's very similar to Dick's analysis, really.
MR. GARRICK: Right. Okay. Any other questions from the
committee for Norm? I'm not going to ask any. Staff?
All right. I guess we're ready to hear about investigating
the risk contribution of igneous activity.
MR. HILL: If somebody could just give me a holler if we
lose the NRC. I'm Brittain Hill, and I will be talking this afternoon
on igneous activity. I'd like to focus on four main points this
afternoon.
First, sort of put the bottom line in front of the
presentation and talk about risk insights from performance assessment.
We haven't had a chance to talk since Part 63 has been drafted. And how
we calculate an expected annual dose from volcanism is a little bit
different than how we calculate the expected annual dose for other
issues.
I'd like to review the technical basis and uncertainties in
probability. That was a specific request from the committee. And then
talk about some of the conservatisms and non-conservatisms in the
volcanism risk calculations that we're presenting this afternoon.
Finally, I'd like to talk about the post-VA interactions
that we've had with DOE and some pretty significant progress forward,
after viability assessment.
First, just to put us in the overall integrated systems
context, igneous activity has two key subissues, volcanic disruption of
the waste package and airborne transport of radionuclides. We also
contribute significantly to the biosphere issue of dilution of
radionuclides in the soil.
And since this is all sort of a new paradigm on integrated
subissues, I'd just like to take a moment and show where the old KTI
subissues of probability and consequence fit in.
Before I do that, I just want to make sure everybody
remembers that we have two kinds of igneous events that we talk about
during performance assessment. The most important of these is a
volcanic event, where a volcano actually penetrates the repository and
directly transports high level waste into the accessible environment.
But also we have these things called intrusive events, where
magma intersects the repository but doesn't vent to the surface. So
all that happens is we have a failed waste package, and the
high level waste radionuclides are mobilized by ground water flow
and transport.
In terms of the risk contribution, it's dominated by
volcanic events. We have done very little to evaluate intrusive events,
except some scoping calculations.
We've divided, in the past, the igneous activity KTI into
two subissues, the probability of the event and the consequences of the
event. That doesn't translate very straightforwardly to the integrated
subissues, but we've broken them out this way. Probability is focused
on the igneous disruption of the waste package subissue. There is also
a component of consequence in that disruption subissue.
Then the other two subissues of airborne transport and
dilution in the soil were previously covered under consequences. The
whole goal here, and the real challenge, is how we are going to compare
low probability, high consequence events, such as igneous activity,
with the other subissues, which involve high probability but
potentially low consequence events.
So how do we calculate risk? Which is the only way that we
can make this comparison. For igneous activity, what we first have to
do is look at how the dose can change through time following an
eruption. Here we make the assumption that the volcanic event does
occur and penetrates the repository at some specific time following
closure of the repository.
Here we have run the TPA code for about 400 realizations for
each time, come up with a mean peak conditional dose; that means this is
the average dose that you would get if an eruption occurred, say, for
example, at 5,000 years.
The error bars represent a standard error of the mean, just
very simple variance statistics. You can see that we have a very nice
regular function of how peak dose through time would behave for
different years of volcanic events and that roughly double exponential
function is controlled by inventory decay and also how different
radionuclides affect dose, depending on which radionuclides are dominant
in the inventory.
So the first thing in calculating the expected annual dose
is figuring out for each year what would the dose be if an eruption
occurred within that year. That's only part of the story.
The second part is that the dose isn't just received during
the eruption; the volcanic fall deposit from the
eruption will remain on the surface for many thousands of years
following the eruption.
We don't have any deposits preserved in the Yucca Mountain
region. The youngest volcano out there is 80,000 years old. So we've
had to make an assumption based on analogs, analog deposits, about how
long would a volcanic fall deposit last if we had an eruption in the
area of Yucca Mountain region. We've gone to some other areas and found
that they exist for at least thousands of years, but they're probably
gone by about 10,000 years.
So here at Yucca Mountain, we assume that the deposit has a
10,000 year lifetime and follows a roughly exponential decay through
time. That decay is controlled by radionuclide decay, deposit erosion,
and radionuclide leaching.
You can see here in this example, we assume the eruption
occurs a thousand years post-closure and allow these roughly exponential
functions to behave and you can see how the dose decays through time.
So I'll be referring to a deposit half-life, and that
doesn't really have anything to do with radionuclides; essentially
it's an exponential decay function, an effective half-life for deposit
removal and radionuclide leaching, that we're using to do these dose
calculations.
We're also making another critical assumption that I will go
into some details a little bit later on, that the dosimetry remains
constant. The particle concentration above the deposit remains constant
through time. That's a conservative assumption, but it's one that we
have some data at least to back up.
So what we need to do to calculate our expected annual dose
is, for any given year, calculate the dose from an eruption, if it would
occur in that year, multiplied by the eruption probability within that
year, which we believe is ten-to-the-minus-seven, and for every
preceding year, we have to calculate what the dose would be from a
preexisting fall deposit, again, weighted by its ten-to-the-minus-seventh
annual probability of occurrence.
So here for example, in roughly year 1,000, we have a
ten-to-the-minus-seventh probability of an eruption occurring within
that year, but we also have a ten-to-the-minus-seventh probability of an
eruption occurring in year 999, with a one-year-old fall deposit giving
us a dose, and back up to a ten-to-the-minus-seventh probability of an
eruption at 100 years post-closure, with a 900-year-old deposit giving
us a dose, and summing for all prior years.
And when we sum that up and assume that our deposit
half-life remains constant at 1,000 years, we get an expected annual
dose curve, that's the upper one shown by the inverted triangles, and
that gives us an expected annual dose of around a millirem per year at
roughly 1,000 years post-closure.
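That summation can be sketched in a few lines (a simplified
illustration using the ten-to-the-minus-seventh annual probability, the
1,000-year effective deposit half-life, and the 10,000-year deposit
lifetime quoted above; conditional_dose stands in for the mean peak
conditional dose curve from the TPA runs):

    import math

    P_ANNUAL = 1.0e-7      # annual eruption probability, per year
    HALF_LIFE = 1000.0     # effective deposit half-life, years
    LIFETIME = 10000.0     # assumed deposit lifetime, years
    LAM = math.log(2.0) / HALF_LIFE

    def expected_annual_dose(t, conditional_dose):
        # Term 1: an eruption occurring in year t itself.
        dose = P_ANNUAL * conditional_dose(t)
        # Term 2: deposits from hypothetical eruptions in every prior
        # year, each weighted by the same annual probability and
        # decayed exponentially with the age of the deposit.
        for tau in range(1, t):
            age = t - tau
            if age <= LIFETIME:
                dose += P_ANNUAL * conditional_dose(tau) * math.exp(-LAM * age)
        return dose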
We just did a brief scoping calculation to think, well, the
radionuclide decay -- or there are many more short-lived radionuclides
early on post-closure. So if we allow our effective half-life to be
much shorter early on, say, 100 years rather than 1,000 years, and then
allow that 100-year half-life to gradually increase with time, would it
make a difference.
And at the limits of uncertainty in the resolution that we
have here, I'd say no. You can see the curve is slightly lower, but
we're still looking at roughly a millirem per year expected -- peak
expected annual dose that occurs roughly about 1,000 or 5,000 years
post-closure.
So the bottom line here is that our best understanding is
that the peak expected annual dose from volcanism, and that's the metric
that's being used in Part 63, is around one millirem per year and the
timing of that peak dose is around 1,000 years post-closure.
MR. GARRICK: But that's purely arbitrary.
MR. HILL: What's arbitrary?
MR. GARRICK: The 1,000 year post-closure.
MR. HILL: It's not arbitrary. It's a result of the
calculations.
MR. GARRICK: But it's an assumption.
MR. HILL: Which assumption?
MR. GARRICK: Well, on slide eight --
MR. HILL: No. This is just an example at 1,000 years.
We'd be calculating this for every year, for 100 years post-closure to
10,000 years post-closure. It just was coincidentally that the timing
of around 1,000 years corresponds to this example. But you can do this
curve for every year post-closure from the TPA code.
MR. GARRICK: But it's still very conditional. The
ten-to-the-minus-seven is a fundamental condition.
MR. HILL: Right. It's a fundamental probability of the
event. I will address some potential concerns with that number in a
minute; why we believe that's not just reasonably conservative, but is
realistic as we can make it for the Yucca Mountain region.
MR. GARRICK: And one millirem is a no-never mind, so why
are we fussing around with it.
MR. HILL: Well, it would be, except we haven't addressed
the uncertainty associated with that number. On the next slide, we say
that these mean values do not reflect our current understanding of
uncertainty. There are two fundamental processes that I will go into in
a bit more detail in a few minutes. But, first, we're assuming that the
number of waste packages entrained corresponds to a volcano in an
undisturbed geologic setting.
We have a technical basis that's under development that
leads us to believe that number is under-estimated significantly in our
performance calculation.
Conversely, we also have been assuming that our mass loading
parameters through time remain constant. We're assuming that the
particle concentration over that fall deposit remains the same for
thousands of years into the future.
We know that that's over-estimating the dose consequences of
the event, but we do not have a technical basis to say how low those
concentrations should be through time. That's another area that we're
working on.
So I think the point is that we have order-of-magnitude
uncertainty about that one millirem per year risk number and that a
continued level of effort during the next two years can reduce this
level of uncertainty quite significantly. And we're concluding, not
just from that one millirem per year, but considering that there is
significant uncertainty on that number -- uncertainty that could cause
one millirem to go up or down, not just in the third decimal place, but
truly in an order-of-magnitude sense -- that this one millirem per year
should be viewed as significant to total system performance assessment.
So from our insights we can conclude that volcanism presents
a quantifiable level of total system risk. This is a doable and
defendable calculation.
Second, as with every other issue, our staff analysis shows
that the Yucca Mountain site does not exceed the proposed total system
performance standard.
Finally, because this is a significant issue, the DOE
license application will need a clear and credible treatment of igneous
activity.
MR. HORNBERGER: Can I ask you a question? Actually, on the
previous slide. Your expected annual dose, as you show it, is the sum
of -- it's a convolution of the deposit and so you basically have two
parts. It's the eruption dose and the convolution of past eruptions.
MR. HILL: Right.
MR. HORNBERGER: Which of those is most important?
MR. HILL: The past eruptions.
MR. HORNBERGER: It is.
MR. HILL: By orders of magnitude. By the way this is
formulated now for Part 63, the eruption dose becomes relatively
insignificant. You can get a sense of that importance by looking at the
dose through time, multiplied then by the probability of occurrence. So
12 rems times a ten-to-the-minus-seventh probability of getting that
dose comes out to a very small number.
MR. HORNBERGER: That's what I would have guessed.
MR. HILL: And that's the real challenge of how we're going
to demonstrate compliance -- excuse me -- how DOE is going to
demonstrate and we're going to evaluate compliance based on a deposit
that no longer exists in the Yucca Mountain region, for which the key
parameters have no basis in the literature.
MR. HORNBERGER: The other presumption here is, of course,
that your ten-to-the-minus-seventh, that it's an IID, an independent,
identically distributed process.
MR. HILL: Yes.
MR. HORNBERGER: And is that a reasonable assumption for
volcanism or do you --
MR. HILL: I believe it's a good temporally homogeneous
process and that, at a scale of 10,000 years relative to the recurrence
rate, there is no significant difference between the probability at 100
years post-closure versus 10,000 years post-closure; even at a million
years, the differences are very insignificant.
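That insensitivity to timing is easy to check for a homogeneous Poisson
process at the assumed rate (a back-of-the-envelope sketch, not from the
record):

    import math

    RATE = 1.0e-7    # assumed annual recurrence rate, per year

    # Probability of at least one event in T years is 1 - exp(-RATE*T);
    # at these rates that is essentially RATE*T, so the hazard in year
    # 100 is indistinguishable from the hazard in year 10,000.
    for T in (1.0e4, 1.0e6):
        print(T, 1.0 - math.exp(-RATE * T))   # ~1.0e-3 and ~9.5e-2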
MR. GARRICK: So how do you compare the uncertainty in the
ten-to-the-minus-seven number with the uncertainty in the parameters
having to do with the number of waste packages entrained and the mass
loading parameters?
MR. HILL: I'm not sure, because there are very different
uncertainties. The probability model doesn't represent a median or some
sort of statistical measure of a population of probability models. For
example, it's not the mean of what we can glean out of the literature
for probability of disruption at Yucca Mountain.
That's why I was putting down that we have uncertainty
really in the consequences. It's a little easier to quantify in terms
of the order of magnitude.
Let me defer the probability part for just a couple of
slides, because I can give you a sense of the conservatism on that.
MR. GARRICK: All right.
MR. HILL: Again, we don't have the technical basis to truly
quantify the uncertainties in this first bullet. But I would say that
they are on the order of an order of magnitude and they could well be
offsetting uncertainties.
The committee had specifically asked to talk about
probability model uncertainties. So I'd like to give a very quick
overview of a topic that's been presented in great detail both in the
literature and in previous ACNW meetings.
The models that we're using for the NRC issue resolution
process are based on the clustering and age of past igneous events in
the Yucca Mountain region.
Now, there is no accepted methodology for how you do a
volcanism probability model calculation. We don't know, for example,
what the standard is for what constitutes an event. Is it a single
volcano? Is it a group of similar aged volcanoes? Do you include the
subsurface structures? Each of those three event definitions has a very
different area term, but also a different recurrence rate. So what
we've done is we've examined different event definitions to see what the
significance is, for the probability of an igneous event, of calling it
a single event or a chain of events.
We also have a number of geologic features that could
influence where a volcano is going to erupt. Some of the ones that
we're integrating into probability models include variations in crustal
density, as a measure of past crustal extension that focuses where
volcanism would occur, and also the orientation of existing faults
relative to crustal strain.
If a fault is in an easy-to-dilate orientation, it's more
likely ascending magma can come up along those easily dilated structures
than form new dilational structures. So the models that we have
been developing, which are well documented in the IRSR, give us a range
depending on how you use the event definitions and the geologic
features.
They can go from on the order of ten-to-the-minus-eight to
ten-to-the-minus-seventh per year at the repository, but also maybe
ten-to-the-minus-seventh to ten-to-the-minus-sixth per year in Central
Crater Flat, and less than or significantly less than
ten-to-the-minus-eight by the time you get east of the repository in
Jackass Flats.
Now, I need to emphasize that we really don't have a
technical basis to distinguish these different kinds of models. Is it
more correct to say that a volcano is an individual vent or vent
alignment? Nobody knows. We don't, DOE doesn't, nobody in the
literature can tell us.
Does the degree of past extension or orientation of current
structures dominate how magma ascends? Or in the mathematical models,
which kernel do we use -- some sort of a Gaussian kernel? Which kernel
best describes the variation in the Yucca Mountain region? We don't
know.
So we can't really say that we have a population with a
central tendency about it, but rather these probability models bound --
and we'd say reasonably conservatively bound, at ten-to-the-minus-seventh
-- a range of probability for this site.
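As an illustration of the kind of kernel model at issue, a
Gaussian-kernel estimate of spatial recurrence might look like the
following sketch; the bandwidth, the kernel choice, and the event
definition are exactly the unresolved judgments being discussed, and
the names here are hypothetical:

    import numpy as np

    def vent_intensity(x, y, vents, regional_rate, bandwidth):
        # vents: (N, 2) array of past event locations, in km.
        # regional_rate: regional recurrence rate, events per year.
        # bandwidth: kernel smoothing distance, in km.
        dx = vents[:, 0] - x
        dy = vents[:, 1] - y
        kernel = np.exp(-(dx**2 + dy**2) / (2.0 * bandwidth**2))
        kernel /= 2.0 * np.pi * bandwidth**2
        # Annual probability density (per year per km^2) at (x, y).
        return regional_rate * kernel.mean()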
We have to also remember that we have very few igneous
events in the Yucca Mountain region.
MR. GARRICK: You used the words reasonably conservative,
which suggests to me that you have some knowledge about what kind of a
representation you would put this in, in probability distribution form,
if you were put to the test.
Would you not feel more comfortable if you characterized
that particular probability as some distribution rather than a number
about which you say is reasonably conservative? Why not go that one
step further and say, okay, this is what I mean by reasonably
conservative? This is the distribution and I'm choosing a value here
and now I know exactly what you mean by reasonably conservative.
MR. HILL: I think we'd be mixing a lot of different models
together under the class of just "this is a probability model." How
would we come up, for example, with a licensing review position given
the existing literature? How would we factor that in to a probability
distribution?
MR. GARRICK: Well, you're ignoring it the way we're doing
it now.
MR. HILL: Right, because --
MR. GARRICK: You're ignoring the uncertainty and that's not
very satisfactory when you're there telling me that this number
represents a number that's reasonably conservative.
So you have some more information at your disposal that
you're not giving me, and I'd much rather see you give me that
information, because I can handle a distribution curve, and now you can
relate to that curve exactly what the evidence is that supports it.
MR. HILL: Let me show this slide here that gives the basis
not from the statistics or model approach, but rather the features of
the Yucca Mountain site and why we would say that this
ten-to-the-minus-seventh number is more than just an artificial
construct of elegant mathematical models.
This is a map showing the distribution of past volcanic
events at the Yucca Mountain region. North is up. There is our
repository site. The red represents volcanoes that have erupted within
the last million years. Green is in the range of three to six million
years, and blue are volcanoes that were active anywhere from eight to 11
million years ago.
An important feature here is an interpretive feature
called the Crater Flat structural basin, shown in dashed purple lines;
all of the younger volcanoes, those younger than five million
years, have erupted within this basin. It's also well expressed in the
subsurface geophysics and we think this serves as the main locus of
volcanic activity within the Yucca Mountain region.
You have to go quite a distance, 50-60 kilometers away from
this basin to find volcanoes that are younger than two million years.
So it's not like there's a bunch of young volcanoes up here or right out
here that we aren't showing.
Now, within the past million years, we have had two volcanic
events in the Yucca Mountain region within any conceivable distance of
the Yucca Mountain site: right here in Crater Flat, a million years ago,
and down here at Lathrop Wells, about 80,000 years ago.
So in a very simple way, at the highest probability point of
the Crater Flat basin, which would be right here -- and you can just
eyeball it and see that's sort of the locus of activity -- we have had
two events in the past million years.
So if you say the annual probability is around
one-times-ten-to-the-minus-six, in the next million years you'd expect
to have one volcano. Well, given that the previous million years gave
you two volcanic events, a probability model that says
ten-to-the-minus-six per year at the locus of activity seems pretty
reasonable based on the past pattern of activity.
MR. FAIRHURST: Isn't there some suggestion, at least by
DOE, that there is an alignment, that these volcanoes are following a
structural trend?
MR. HILL: At a very large regional scale, that's correct.
That's been recognized for over 20 years -- the Greenwater alignment
going from east central California all the way up into central
Nevada.
MR. FAIRHURST: But not more locally? I thought I heard
some arguments that the alignment would bypass the repository area.
MR. HILL: That's a whole other issue on source zones. They
would say, based on this distribution of sparse events -- and that's
how I would characterize it, as a distribution of sparse events --
that the volcanism is localized to the left of the arrow in many of
their probability models. But that has been nothing more than
defining a zone based on a sparse pattern of past activity. There is
no geologic feature that's been presented in the DOE models or
accompanying literature suggesting that there is any geologic
structure or feature in that region that would localize magmatism away
from the repository.
In fact, the past pattern of events shows that a volcano did
form within one kilometer of the repository site about 11 million years
ago.
MR. FAIRHURST: Well, the DOE has a value, they've assumed a
value, I think it's 1.25 or something times ten-to-the-minus-eight.
MR. HILL: That was a --
MR. FAIRHURST: That was in the VA.
MR. HILL: -- an event probability for igneous activity of
1.5-times-ten-to-the-minus-eight as a mean value. Yes.
MR. FAIRHURST: That's what you're suggesting is
ten-to-the-minus-seven.
MR. HILL: Yes. I'm not saying it's a mean. We're just
saying the best we can do is bound that in order of magnitude.
Now, what I want to get to on that kind of a number is we're
seeing an annual probability of ten-to-the-minus-six in this locus of
activity that seems reasonable from the past pattern of events.
Now, let's look at what's happened over the past 12 million
years. Depending on how you want to define an event, we've had anywhere
from 13 to 15 igneous events within this Crater Flat structural basin.
Now, one of those events, up here at the head walls of Solitario Canyon,
came within one kilometer of the proposed repository site. So we're
seeing one event out of 13 to 15 that's right there, right next to where
the repository is being located.
So in an order of magnitude sense, we're seeing an order of
magnitude decrease from the locus of activity out to the repository
site, in contrast to an equivalent probability or a two orders of
magnitude decrease in activity over the geologic record at Yucca
Mountain.
So we would say these order of magnitude relationships
support an order of magnitude decrease in probability from the locus of
activity in Crater Flat, which is reasonably well constrained up through
the Quaternary, out to the proposed repository site, based on a longer
record, but the only available record we have to make a determination at
this site.
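The order-of-magnitude arithmetic behind that argument is simple to
write down (back-of-the-envelope only, using the counts quoted above):

    # Locus of activity: two events in the past million years.
    locus_rate = 2 / 1.0e6             # ~2e-6 per year, order 1e-6

    # Over 12 million years, roughly one of 13 to 15 basin events
    # occurred within about a kilometer of the repository site.
    site_fraction = 1 / 14.0           # about one order of magnitude down

    # One order-of-magnitude step down from the locus rate lands near
    # the ten-to-the-minus-seventh per year used in the analysis.
    print(locus_rate * site_fraction)  # ~1.4e-7 per year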
That's one of the reasons we believe that a
ten-to-the-minus-seventh number is the most defendable for this site
relative to something that would be ten-to-the-minus-sixth or higher as
proposed in some of the peer reviewed geologic literature or below
ten-to-the-minus-eight at Yucca Mountain.
We just don't see a two order of magnitude decrease in past
activity between here and here. So I know that's not a statistical
determination, but it does introduce site data.
MR. HORNBERGER: I have a question, though. This has always
intrigued me. So I follow your argument -- as I would understand it,
you're saying that there is a ten-to-the-minus-seventh annual
probability of a volcano somewhere in this region.
MR. HILL: No. That is for the site specific.
MR. HORNBERGER: How did you make it site specific? That's
the part that I miss.
MR. HILL: That's the probability.
MR. HORNBERGER: I understand that you have all of these red
-- you say, well, two in the last million years, so
one-times-ten-to-the-minus-sixth.
MR. HILL: In the center of activity. In the locus, the
cluster, the center of activity, the highest probability is there at
ten-to-the-minus-six per year.
MR. HORNBERGER: Right. But if we care about Yucca
Mountain, it's -- so it recasts the argument. I hear the weather
forecast on the Weather Channel and they say there is a ten percent
chance of rain tomorrow and it always -- I always am curious, and I've
asked meteorologists, does this -- should I interpret this that it's a
ten percent chance of rain somewhere within a 50-mile radius of San
Antonio or should I interpret it that it is a ten percent chance that
it's going to rain everywhere tomorrow.
So there is a temporal and spatial aspect, and that's what
I'm asking you to disentangle here.
MR. HILL: Right. And that's what I'm trying to untangle,
given that we're dealing with an event distribution that has some sort
of a clustering to it. What is the boundary of that cluster? That's
the fundamental question. We would all agree -- DOE and us and everyone
else agrees -- that right here in Central Crater Flat is the locus of
activity; the most likely spot for the next eruption is right out in
here.
But is there a geologic feature that says where the boundary
of this cluster is well defined? There really isn't. Even when you get
up to here, the pattern becomes very diffuse. You lose the structural
basin, the good manifestation of it, right about here. So where within
this is the eastern boundary? Well, it's easy to construct a model that
says the edge of the alluvium is the edge of the basin.
Now, that's one of the bases used to construct a model. But
we're trying to make a little more robust determination, what is the
feature that will affect the process. So that's why I'm bringing in the
order of magnitude decrease. It's the best you can do.
I'm not saying that 13 to 15 to one is exactly the number.
MR. HORNBERGER: I know, but I'm being dense, okay? And
it's clear that I'm being obtuse and I'm not understanding.
Let me change it from Yucca Mountain to -- what's the most
recent one there?
MR. HILL: Lathrop Wells.
MR. HORNBERGER: Lathrop Wells. Okay. Does the probability
-- is the annual probability of volcanic eruption right there, right at
Lathrop Wells, ten-to-the-minus-sixth?
MR. HILL: No. Actually, I think the probability at Lathrop
Wells is below ten-to-the-minus-ninth, because we've never seen these
volcanoes come up in the same place again. They always come up in a new
location. Something about the pathway.
But let me just give --
MR. HORNBERGER: Okay. Okay. One kilometer --
MR. HILL: Can we get rid of Lathrop Wells for a minute and
go back 100,000 years?
MR. CONNER: Based on the model --
MR. GARRICK: For the benefit of the court reporter, would
you give your name?
MR. CONNER: My name is Chuck Conner, at the CNWRA. I just
wanted to clarify that, based on the models that we're using -- by the
way, Lathrop Wells is a volcano with a one-time eruption -- the
probability of eruptions at Lathrop Wells is about
ten-to-the-minus-seven per year. It's a little bit higher than the
probability of eruptions at the site itself. In fact, if you do the
cluster analysis, you discover that Lathrop Wells is, in fact, at the
edge of the Crater Flat cluster, much in the same geographic position
as the repository itself.
So based on the model, the probability bull's eye is in
southern Crater Flat. The probability decays away from Central Crater
Flat, based on a weighting function that tries to incorporate the
geology, and Lathrop Wells gives you about the same probability as the
site itself.
MR. HORNBERGER: What confused me was your statistical
argument for ten-to-the-minus-six versus a model generated probability
which does take into account this --
MR. CONNER: Yes. Our models take into account potential
recurrence rates, spatial weighting factors and that whole thing, to
use technical terms.
MR. HILL: Anything more on probability?
MR. GARRICK: A lot. I had forgotten what the -- was it a
NUREG or a DOE document that developed the equivalent of a volcanic
hazard curve which was the frequency of occurrence of volcanoes as a
function of different severity or magnitude or what have you.
Is there such a -- are you developing something like that?
MR. HILL: No, because we're really dealing with a single
type of volcano. Many of the volcanic hazard curves that you've seen
are for very different classes of volcanic impacts, such as debris flows
and ash fall, things of that nature. We're worried really about a new
vent forming, not indirect impacts from distant vents.
So there is no minimal volcanic event that does not affect
the repository performance given the volume and severity of past igneous
events in the region.
MR. GARRICK: That's a pretty bold assumption, isn't it?
MR. HILL: I think it's well constrained by 12 million years
of data.
MR. GARRICK: But I'm thinking also about the interaction of
the magma with the repository.
MR. HILL: That's why we're using stochastic processes to
sample a range of consequences to come up with our consequence
calculations. We're not taking a deterministic approach. There is
uncertainty in how magma interacts, how magma entrains; we're accounting
for a great range within a basaltic eruption class of event duration,
event power, wind speeds -- all these other parameters are sampled
stochastically.
MR. GARRICK: And how do you deal, again, with the
location-specific issue? We're talking about a very pinpoint location
here and we don't have a problem unless there's some interaction and
there, depending on circumstances, we may still not have a problem.
For example, with backfill, the problem is much different
than without backfill and so forth.
MR. HILL: Conceptually, that's correct. But for the
results that we're presenting and in previous calculations, we've made
no assumption on backfill. We've really only considered the source
term of the canisters that are directly entrained within the volcanic
conduit. If you put in a volcano whose conduit is 50 meters in
diameter, and you're erupting material at 100 meters a second, we feel
that when you put a waste package into that hostile environment, there
is a failure and the material is entrained.
I will go into some of those critical assumptions, but we
have done scoping calculations on indirect effects, on how many waste
packages would be affected indirectly by an igneous event; in a
risk-informed setting, though, we just don't assign a high priority to
those tasks, because we're then dealing with hydrologic flow and
transport issues.
You still don't have an essentially instantaneous transport
of waste to the accessible environment. You fail a waste package, it
still takes 4,000, 5,000 years to mobilize that waste out into 20
kilometers to the critical group, and then you start weighing that by a
ten-to-the-minus-seventh annual probability of occurrence to come up
with an expected annual dose and you can see relative to a one millirem
per year expected annual dose, those indirect effects are many orders of
magnitude less.
So in prioritizing the work, we focused on volcanism and
devoted very little technical effort to the indirect effects or the
enhanced hydrologic flow effects.
MR. GARRICK: You said earlier that you thought some of
these uncertainty effects might be possibly offsetting.
MR. HILL: Yes.
MR. GARRICK: Given the uncertainty with the
ten-to-the-minus-seven and the uncertainty with the parameters, what is
your present sense of how the uncertainty band would exist around the
one millirem dose?
MR. HILL: My sense is it's fair to put an order of
magnitude on that uncertainty, depending on how you want to view that
uncertainty -- is it solely on our models or does it account for other
available information?
MR. GARRICK: So you think -- unless he's taking the
ten-to-the-minus-seven as something other than an approximate mean or
conservative mean or conservative median.
MR. HILL: There are values in the literature by recognized
experts in the field that say that probability should be
ten-to-the-minus-sixth using the same methodology that dominates the DOE
position of ten-to-the-minus-eight.
There's other work in the literature that
suggests recurrence rates have been under-estimated based on present
patterns of crustal strain. Now, while we do not feel, based on our
analysis, that those hypotheses tell us risk has been under-estimated,
they nonetheless exist in the literature and have not been addressed by
the Department of Energy. There are also undetected or newly
interpreted features in the Yucca Mountain region that the Department
of Energy needs to address; their own work says that there is still a
significant number of undetected igneous events out there.
So if we were to say right now what our uncertainty is in a
licensing sense on how we evaluate all available information, it's very
fair to say that there is an order of magnitude uncertainty above and
below ten-to-the-minus-seventh per year.
MR. GARRICK: And you think that translates directly to the
dose.
MR. HILL: Yes, it does. Any more questions at this point?
Because we're going to move to consequences.
MR. FAIRHURST: That's what I'm interested in.
MR. HILL: Okay. So there are really about seven main points
I'd like to touch on for consequences and how we're evaluating the
degree of conservatism, or at times the degree of realism, in the one
millirem per year that I've been showing.
We've got a number of key processes, and they don't
correspond to integrated issues or even parts of the TPA modules;
they're just sort of an abstraction of where we believe the fundamental
assumptions lie.
The first of these is that the volcanic conduits are the same
dimension as observed in undisturbed geologic settings. Right now we're
assuming the conduit is about one to 50 meters in diameter and that it
entrains anywhere from one to ten waste packages. We sample that under
a uniform distribution.
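One realization of that sampling might look like this sketch
(illustrative names, not the TPA implementation):

    import random

    # Disruption geometry as described: conduit diameter and the
    # number of entrained packages drawn from uniform distributions.
    conduit_diameter_m = random.uniform(1.0, 50.0)
    packages_entrained = random.randint(1, 10)
    print(conduit_diameter_m, packages_entrained)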
We can constrain that number very well by observations of
intrusions, like here in Utah, where we see a shallow sub-volcanic dyke.
This is about a kilometer beneath the surface. This conduit is 45
meters in diameter, fed by a little one meter in diameter dyke here.
You can also go to active volcanoes and use the amount of wall rock,
subsurface rock to constrain that diameter very well.
But that observation is only for undisturbed settings. We
really haven't accounted for how a two-phase flow, fragmented magma with
dissolved gas in it, is going to interact when it encounters a
backfilled or non-backfilled drift.
The disturbed stress state of the surrounding rock also has
not been investigated by us or by the DOE.
All we can say is that the current value appears to be
reasonably constrained by analog data. We have just done some scoping
calculations, which you've heard about, that indicate the
magma-repository interactions may be much greater than we have assumed
using strictly analog data. So we are likely under-estimating the
number of waste packages entrained by a volcanic event.
We have ongoing investigations this year to conduct
numerical and analog experiments to try to scope out the magnitude of
that potential under-estimate.
MR. FAIRHURST: Is there field evidence that these eruptions
are indeed cylindrical?
MR. HILL: Yes. Reasonably so.
MR. FAIRHURST: How does that jibe with the in situ stress
field?
MR. HILL: Well, you end up with a planar
feature, a dyke, that eventually localizes flow within this conduit and
begins to erode the wall rock outward.
It really doesn't have anything to do with the stress field,
because the pressure within the dyke and the flowing magma conduit is
greater than the lithostatic stress. So it begins to pluck out around
the conduit and --
MR. FAIRHURST: Yes, but it would tend to have an
orientation.
MR. HILL: There is a slight elongation to it at times, but
at other times, it has nothing to do with stress as deduced by the
orientation of the feeder dykes. Sometimes the elongation is in a
different direction.
MR. FAIRHURST: No, I can understand that, as a secondary
consequence of the --
MR. HILL: Yes, it's secondary. It's not truly governed, as
a penny-shaped or very elongated ellipse, by crustal stress. These,
where we've seen them, tend to be very cylindrical. They're not perfect
cylinders -- they're geologic -- but they do have that nice rounded
outline when we view them at one level, in two dimensions.
MR. FAIRHURST: And that's the surface.
MR. HILL: Yes.
MR. FAIRHURST: Have you any indication of what it is as a
function of depth?
MR. HILL: No.
MR. FAIRHURST: Down to 300 meters or 400 meters?
MR. HILL: No. Depth -- those kinds of depths are just
wretched to constrain in geologic settings. Whether we're at 500
meters or kilometers in an eroded system, there are no mineralogical
relationships, no phase relationships, that we can use to say we
know what our depth was within even 100 meters.
We're constraining this as about a kilometer, plus or minus
half a kilometer, based on stratigraphic relations out in the Escalante
Highlands, but even there, somebody can come in and tell me this is 200
meters below the paleosurface and there is no analysis to tell.
MR. WYMER: So I guess your assumption then is that all the
waste packages that are affected by this volcanic action are totally
disintegrated and the entire contents are the source term.
MR. HILL: Yes.
MR. WYMER: Subject to all of the other assumptions in 3.2
that talk about the movement.
MR. HILL: Right. Compared to a fall factor, for example.
And here's our next slide: we're concluding that the waste packages
are breached during these events. We look at the physical, thermal and
chemical loads that are imposed upon a waste package when you put it
into a volcanic conduit. We say that clearly exceeds the design basis
for a canister.
In addition to corrosion and ambient effects and gravity,
we've got a magmatic temperature of around 1,100 degrees
Centigrade. The chemistry is fairly hostile. There is water, sulfur
dioxide, iron, silica -- all available to react with the alloy
metals.
Also, there is a significant physical force. People
come up and say, well, what is the density of magma? I tell them it's
anywhere from 1,200 kilograms per cubic meter fragmented to 2,600
kilograms per cubic meter non-fragmented.
Well, what is that? You take a Volkswagen new Beetle and
compress it down into a cubic meter -- you've all seen the jaw
crushers come in -- put that into a cubic meter, and that's about 1,250
kilograms per cubic meter.
So that's up to two Volkswagen new Beetles compressed into a
cubic meter, impacting your waste package at anywhere from one to 150
meters a second. That's two to 300-and-some-odd miles an hour, for days
to weeks.
MR. WYMER: Except it's liquid.
MR. HILL: What?
MR. WYMER: Except it's liquid.
MR. FAIRHURST: It doesn't move, it's pretty viscous.
MR. HILL: It's fairly viscous and it's a continuum. So
what I'm getting at is, if somebody has a detailed analysis
examining the stress imposed upon a canister in these conditions, at
appropriate temperatures and mass loads, we will review it and modify
our assumptions accordingly.
But there are no data on how the candidate alloys behave
at 1,100 degrees Centigrade under extreme dynamic loads. So we
feel that while we cannot prove waste packages breach, given these
physical conditions, it's a reasonably conservative assumption that the
waste package is breached when it's entrained in an erupting conduit.
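As a rough order-of-magnitude check on the physical load being
described, standard dynamic-pressure arithmetic (not an analysis from
the record) gives:

    # Dynamic pressure q = 1/2 * rho * v^2 of flowing magma on a package.
    rho = 1250.0              # kg/m^3, fragmented magma
    for v in (1.0, 150.0):    # m/s, the velocity range quoted above
        q = 0.5 * rho * v**2
        print(v, q)           # ~6.3e2 Pa up to ~1.4e7 Pa (14 MPa)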
MR. FAIRHURST: So you assume that anything that's within
this 50 meter -- is it 50 meter radius or diameter?
MR. HILL: Fifty meter diameter.
MR. FAIRHURST: Fifty meter diameter, everything is plucked
up and thrown out.
MR. HILL: Right.
MR. FAIRHURST: And the numbers that you -- you said it's
from this to this, is that depending on the diameter alone or is it on
the repository design or what?
MR. HILL: It turns out it does depend on design.
MR. FAIRHURST: Because the EDA-2 is much less dense.
MR. HILL: Yes, but they've gone to line loading, so the
end result is almost identical -- I think it's maybe one waste package
difference. Before, you had a 21 meter inter-drift spacing, so a
50 meter conduit could impact at least two emplacement drifts. Now
you're only impacting one emplacement drift, because they've gone to
about an 80 meter inter-drift spacing.
MR. FAIRHURST: Right.
MR. HILL: But instead of having 15 meters between each
waste package, we're now down to ten centimeters with line loading. So
that 50 meters, given the proposed EDA-2 design, still corresponds to
ten waste packages, and all they've done is flipped the corrosion
allowance and corrosion resistance materials on the canister and made
it a little bit thinner.
So going to the EDA-2 hasn't changed our risk understanding
in any significant way. Again, backfill has no effect on these
calculations.
MR. HORNBERGER: A completely hypothetical question. If you
went to a three-layer repository, would that treble the risk from
volcanic eruptions?
MR. HILL: I'm not sure, because there is a limit to how
much material you can transport. I'm not sure it's exactly one-to-one
between hitting ten waste packages and hitting a hundred waste packages.
There's your source term and the transport, and also the
dosimetry limits -- there's a limit to how much people can inhale per
year. So I wouldn't want to make that an a priori assumption.
A critical parameter is how high level waste behaves
when you put it into a volcanic eruption. Again, we've seen these high
physical and thermal loads.
We can go to analog volcanoes -- and I apologize for this
being a little dark -- ones that are as identical as we can get to
Yucca Mountain volcanoes, and see that there are periods of activity at
those volcanoes where wall rock has been pulverized to grain sizes less
than a micrometer in diameter.
Here is a scanning electron photomicrograph from the 1975
Tolbachik eruption, showing the white ash; that is a ten micron scale
bar right here, and you can see these particles are significantly less
than ten microns.
We also know that in situ spent fuel has an average grain
size -- again, average grain size -- on the order of hundreds of
microns. There have been a couple of crush impact studies, where
ceiling panels have fallen on high level waste, and those yield an
average grain size of around 100 microns.
But we know that the physical load and the thermal load and
chemical load from an igneous event exceeds an ambient condition crush
impact. So some high level waste grain size reduction is also likely
during an igneous event.
The best we can do is say the average grain size would
likely decrease an order of magnitude down to a mean of ten microns.
Again, if there are direct data and analyses or models that
could support that more robustly, we will incorporate those into our
performance assessment. But we're dealing with a process that has had
very little investigation in the engineering sciences, and we have to
do something.
Fourth, the high level waste is incorporated into the
erupting tephra. This one has some uncertainties, but I feel it's a
very easily defended assumption. We can obviously see that rock
fragments are commonly incorporated into erupting volcanic ejecta;
here we've got an example from Cerro Negro in Nicaragua. These
fragments are about a millimeter in diameter, surrounded by the tephra
itself.
In the TPA code, we say that in order to entrain a piece of
high level waste, the tephra itself has to be three to ten times greater
in diameter than the high level waste fragment. So unless that tephra
particle is three to ten times greater than the high level waste
particle, it will not be entrained.
We're also assuming that the high level waste is entrained
uniformly throughout the eruption -- it's not some sort of
everything-comes-in-at-once right at the end -- which is a reasonably
defended scenario.
We have to also remember the scale of the process that we're
dealing with here. The volume of tephra is anywhere from
ten-to-the-sixth to ten-to-the-seventh cubic meters and the volume of
high level waste that we're erupting is anywhere from two to 20 cubic
meters.
So you can see there is a great mass of basaltic magma
that's available to entrain a very small mass of very dense high level
waste.
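The size criterion and the dilution implied by those volumes can be
sketched together (illustrative only; the three-to-ten ratio would be
sampled in the abstraction as described):

    import random

    def entrained(waste_diam, tephra_diam, ratio=None):
        # The host tephra particle must be three to ten times the
        # diameter of the waste fragment for it to be carried.
        if ratio is None:
            ratio = random.uniform(3.0, 10.0)
        return tephra_diam >= ratio * waste_diam

    # Scale of the dilution: 2 to 20 cubic meters of waste in 1e6 to
    # 1e7 cubic meters of tephra is a volume fraction of at most ~2e-5.
    print(20.0 / 1.0e6)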
So we believe that the current approach of using these size
limitations is constrained by interpretation of limited data from
geologic settings and appears reasonable, given the observed entrainment
of rock fragments.
Getting away from the EBS and back into the realm of
geology, what are the eruption characteristics of Yucca Mountain's
basaltic volcanoes? You may have heard the terms Hawaiian or low energy
Strombolian, which carry the implication that there are really no mass
transport processes operating here.
We've gone to a number of active basaltic volcanoes and
recently active basaltic volcanoes, documented in the literature, and
found that these kinds of deposits are characteristic of volcanoes that
put eruption columns anywhere from two to ten kilometers up into the
atmosphere and transport material tens of kilometers downrange.
Now, the deposits that you use to demonstrate that these
volcanoes have that kind of dispersivity to their eruptions have all
been removed away from Yucca Mountain volcanoes. So we have nothing but
indirect evidence about how dispersive Yucca Mountain volcanoes are.
But we do have some observations that at Yucca Mountain, the
tephra on the cone -- the particles that make up the cone -- are highly
broken up, showing that they were ejected to a high altitude and came
down cold and fractured in a brittle fashion, rather than making a big
old goober pile on top of the volcano.
The volume of the cone is greater than the volume of lava,
which is also characteristic of the active volcanoes. We also have an
unusual abundance of rock fragments, anywhere from a tenth to one volume
percent, showing that there is significant subsurface disruption; that
is also characteristic of these dispersive volcanoes.
So what we're doing is using the volumes of Yucca Mountain
volcanoes and comparing the volumes we have with the volumes of fall
deposits from active volcanoes, the dispersive fall deposits, and saying
that the tephra dispersal is controlled by the eruption rate and
duration that we can constrain using volumetric relationships from our
analogs.
Of course, the wind speed and particle size distribution are
also going to affect how much of a deposit you have at 20 kilometers.
So, again, the assumptions that we're using in PA about
eruption dispersivity are constrained by interpretation of the available
data and appear pretty realistic, given the observed characteristics of
basaltic volcanoes.
We're also saying that the contaminant plume is directed
towards the critical group. This gets us around a number of problems.
For example, we don't have any data on two to eight kilometer altitude
winds for the Yucca Mountain region and you can't use the near surface
data because it's controlled by topography.
We're using two to four kilometer wind speeds from the
Desert Rock Airport, about 50 kilometers to the east-southeast as the
analog for Yucca Mountain wind velocities.
But more importantly, if you start to say the plume does not
blow toward the critical group location, we have to account for how that
deposit gets redistributed at the surface through time. We've all gone
out to the Yucca Mountain region and seen the sand dunes out in the
central Amargosa Desert, where material has been blown for tens of
kilometers around and collects at different points at different times
during the last 10,000 years.
We're also saying that, by directing the airborne contaminant
plume towards the critical group, we're taking an approach that parallels
how the ground water contaminant plume is directed toward the critical
group, where we have the critical group sitting at the center of the
ground water contaminant plume.
So we believe that while you can have different modeling
approaches for how the contaminant plume is directed towards the
critical group, this current approach is reasonably conservative and is
not going to under-estimate risk to the critical group.
And finally, perhaps the most important one is that we're
assuming our airborne particle concentrations remain constant through
time. We have to note that up until Part 63, we really didn't pay much
attention to the ash deposit, but now the whole expected annual dose is
dependent on how the ash deposit evolves through time.
There are no data on airborne particle concentrations over
fresh or weathered basaltic tephra fall deposits.
We're beginning to collect data on some fairly young ones, but we need
to continue that investigation.
By looking at analog deposits, including data from the Yucca
Mountain region, we've been using a concentration of
ten-to-the-minus-fourth to ten-to-the-minus-two grams per cubic meter in
performance calculations.
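A generic mass-loading sketch of how an airborne concentration like that feeds an inhalation dose follows; every parameter value here is an illustrative assumption, not an actual TPA input.
    def annual_inhalation_dose_rem(air_conc_g_per_m3: float,
                                   waste_fraction: float,
                                   specific_activity_ci_per_g: float,
                                   dcf_rem_per_ci: float,
                                   breathing_m3_per_hr: float = 1.2,
                                   exposure_hr_per_yr: float = 2000.0) -> float:
        # Mass-loading chain: airborne mass -> inhaled mass ->
        # inhaled waste mass -> activity -> committed dose.
        inhaled_g = air_conc_g_per_m3 * breathing_m3_per_hr * exposure_hr_per_yr
        return (inhaled_g * waste_fraction * specific_activity_ci_per_g
                * dcf_rem_per_ci)

    # Example with the lower concentration bound quoted above
    # (1e-4 g/m3); the other factors are placeholders.
    print(annual_inhalation_dose_rem(1e-4, waste_fraction=1e-3,
                                     specific_activity_ci_per_g=1e-2,
                                     dcf_rem_per_ci=1e5))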
Again, the deposits are eroded from Yucca Mountain
volcanoes, but they probably lasted about 10,000 years as opposed to
1,000 years or 100,000 years. But more importantly, we believe, as
geologists, that these deposits change in character through time. It's
just that we don't have any technical basis to say how they've changed
through time.
But it is one area, by looking at analog deposits, that we
can constrain that uncertainty in future PA calculations.
So we've made a conservative assumption that the airborne
particle concentrations remain the same in a weathered deposit as they
are in an unweathered deposit, and we use the mass removal processes
solely to govern how the dose decays through time. But we recognize that
that's probably a reducible conservatism in our performance calculations.
We just need a technical basis in order to reduce it.
MR. HORNBERGER: So there is no evidence from the field that
these deposits get armored through time?
MR. HILL: It depends on where you are. Look at the alluvial
setting toward the critical group location: we're going from incising to
pretty much a grading nick point at about 15 to 20 kilometers.
There is a sense that, if you have an undisturbed deposit, it
would probably end up getting infiltration of aeolian fines and a little
bit of hardening within about three or four thousand years, mainly from
carbonate cementation. But we are using a farming scenario for the
critical group.
So this is another important point for particle
concentrations. It's not enough to go out to an undisturbed deposit and
say this is what a worker is exposed to. In our 40 hour per week
exposure scenario in PA, we're assuming the person is disturbing this
ground surface. So when you're churning up the deposit continuously,
you have to account for that.
Anything on consequences that I haven't addressed?
MR. FAIRHURST: Just in your model at the moment, there is
no major dose at any one time. Every process that you've talked about,
it's averaging this one millirem per year, right?
MR. HILL: The expected annual dose. When I showed that on
a logarithmic scale, we're really seeing about an order of magnitude of
variation.
MR. FAIRHURST: Sure, well, within that amount. But there
is not this sudden explosion of a package of waste that is derived
somewhere and hits the ground.
MR. HILL: No, because we are assuming a continuous
calculation.
MR. FAIRHURST: Sure. That's what I'm saying.
MR. HILL: It's not like some scenarios where you would have
failure of a thousand waste packages because of a corrosive process and
all of a sudden you get this big slug coming through the system.
MR. FAIRHURST: In essence, the waste package material is
intimately mixed at the time it erupts.
MR. HILL: Yes.
MR. FAIRHURST: All right. Okay.
MR. HILL: And it's not a matter that I fundamentally -- or
we fundamentally believe that that is how it would happen. Probably not
everything will be entrained. Probably not everything will fragment.
But we come down to how do we get a technical basis to say how much is
not entrained, and that's the challenge.
If somebody would come forward with a defensible technical
basis, we can modify the conservatisms accordingly. But given the
priority of other issues, the limited amount of time that we have, and
the bottom line number that we're showing of a millirem per year, do you
need to have a detailed investigation to prove waste package resiliency
if your ultimate understanding turns out to be a millirem per year?
MR. FAIRHURST: The thing that I'm -- and I think I'm
beginning to understand it. What I have a problem with is if you had a
fairly slowly arriving erupting magma, it would probably follow the
fracture trend, right?
MR. HILL: Pretty much.
MR. FAIRHURST: What you've got is a very explosive arrival,
at very high speed.
MR. HILL: Not quite. The conceptual model is a gradual
ascent, as has been observed at other basaltic volcanoes, that
corresponds to forming a dyke pretty much coincident with the existing
fracture regime, which is in optimal dilation tendency given the current
state of stress.
It's coming up, and the flow localizes. Let's just say you've
got a repository and you have a five kilometer long dyke that cuts
through the repository. So you've got maybe two kilometers on either
end.
But alternatively, because you've got these drifts sitting
there at 300 meters, let's just say without hitting the surface
elsewhere, it's a flat surface and you've got 300 meters below the
surface, non-confined or loosely backfilled repository drift, and this
magma is coming up. Where is the flow going to go?
MR. FAIRHURST: But that's different from what you've said
so far.
MR. HILL: I'm explaining that the flow would localize
toward the drifts rather than randomly away from the drifts. So
conceptually --
MR. FAIRHURST: No. All right. Okay. Then I see a point
where I might disagree. But okay.
MR. HILL: But the magma has to propagate upward, and the most
dispersive part of the eruption is not the first part. Once
you've established flow and localized flow within a central conduit,
even if that conduit is only a couple of meters in diameter, that's when
the eruption begins to take off.
MR. FAIRHURST: That's when you --
MR. HILL: And the conduit starts to widen. And the real
process isn't that the thing just reams itself out right away, but it
begins to have --
MR. FAIRHURST: Erosion.
MR. HILL: -- a little bit of erosion. But the erosion is
more related to the fact that you've established a conduit and you've
stressed the rock, because the fluid pressure within the conduit is
greater than in the surrounding rock. You have to have that to keep the
conduit open.
But there are, for whatever reason, some transients in the
flow that allow the pressure in the conduit to drop below the static
pressure in a transitory way, and that allows wall rock spallation.
You've relaxed the conduit pressure, and so the rock can cave in a
little bit.
It's that -- that's the best observation that we can make
from how real eruptions have occurred. So you're gradually widening
this thing out in spurts. So you can imagine that your --
MR. FAIRHURST: That's the part where the gas comes in,
being released and --
MR. HILL: Part of it is degassing, part of it is two-phase
flow effects, where you have degassed --
MR. FAIRHURST: Sure.
MR. HILL: -- around the conduit itself and some molten
rocks, magma wall collapse is part of that.
It's barely understood for undisturbed geologic settings, and
then we're trying to extrapolate from the undisturbed geologic process
into the disturbed setting of the repository.
So there are uncertainties, but in the abstraction, we're
saying we ultimately end up with a 50 meter hole in the ground or a one
meter hole in the ground, in which case we're looking at one waste
package breach; 50 meters would give us ten waste packages within that
conduit space.
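A hypothetical sketch of that abstraction follows, interpolating between the two anchor points just quoted (one waste package for a one meter conduit, ten for a 50 meter conduit); the interpolation itself is an assumption, not the TPA code's actual rule.
    def packages_breached(conduit_diameter_m: float) -> int:
        # Anchor points from the testimony: ~1 package at 1 m,
        # ~10 packages at 50 m. Linear interpolation is assumed.
        d0, n0, d1, n1 = 1.0, 1.0, 50.0, 10.0
        if conduit_diameter_m <= d0:
            return int(n0)
        frac = (conduit_diameter_m - d0) / (d1 - d0)
        return round(min(n0 + frac * (n1 - n0), n1))

    print(packages_breached(1.0), packages_breached(50.0))   # 1 10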
MR. FAIRHURST: Have you got the details written somewhere?
MR. HILL: On some of it, but it's something maybe we could
talk about after this session.
MR. FAIRHURST: Sure. It would be very interesting.
MR. HILL: Because I do want to tell everybody --
MR. CODELL: Britt, could I just add one thing?
MR. HILL: Sure.
MR. CODELL: I don't know if some things are being confused,
but in terms of the dose calculation, immediately after the event, we're
assuming there is someone farming there. So there isn't any delay or
any -- I don't know if people were talking about delays of sorts, but we
are assuming the dose is incurred immediately after the release of
material, which occurs in a short period of time.
MR. HILL: Right. The eruption lasts anywhere from days to
tens of days and since we're really only resolving dose on a yearly
basis, we assume that in the year of the eruption, there is site
occupancy.
There were a number of issues raised in our review of the
viability assessment and I'm really glad to be able to report that we've
made a lot of progress toward resolving those issues with DOE staff
based on informal interactions.
In contrast with perhaps the previous experience of the
committee members, the informal, collegial post-VA interactions have
greatly facilitated the issue resolution process with the DOE.
We had a number of concerns in VA about the source zone
models that were presented, which said the probability of volcanic
disruption at the site was thought to be below ten-to-the-minus-eight
per year; they were saying the mean probability was
six-times-ten-to-the-minus-ninth, leaving the way open to screen that
scenario from further consideration.
Now, without going into the gory details of the source zone
models, we can just say we had an Appendix 7 in January and got an
agreement informally that the mean probability from their probabilistic
volcanic hazards expert elicitation, PVHA, of
1.5-times-ten-to-the-minus-eight for all classes of igneous events is,
for performance purposes, the probability of volcanic disruption of the
proposed repository site, the mean value.
So the DOE does not believe that the mean probability is
below ten-to-the-minus-eight for volcanic disruption.
Also, a recognition that that PVHA range has a mean of
ten-to-the-minus-eight, but the upper bound of that is
ten-to-the-minus-seven. So if they continue to show the range of their
elicitation that includes the value that we feel best resolves the
issue, then there is no substantive disagreement with the DOE over
probability of volcanic disruption.
We believed we had issue resolution prior to VA, and it was
only the appearance of a new class of models in the viability assessment
that raised this issue again; it wasn't the PVHA itself that was the
issue. They constructed a new scenario for source zones that got the
probability below ten-to-the-minus-eight, and that's why we had to
bring this issue up again.
But I think for the second time, it's resolved.
We had a concern that the eruption characteristics
under-estimated the dispersive capability of Yucca Mountain volcanoes.
In a February workshop, the DOE agreed to place greater reliance on
active violent strombolian analogs of Yucca Mountain volcanoes, the ones
that have dispersivities of material tens of kilometers downwind.
The DOE -- the VA calculations had a critical dependency on
waste package resilience during volcanic events and our analysis showed
that that resiliency was not supported by models or data with a
sufficient technical basis.
In both February and April workshops, DOE agreed that
additional models and data needed to be developed to support conclusions
of waste package resiliency, including coupled thermal, mechanical and
chemical effects for igneous events.
Also, the effects of igneous events on high level waste
forms are poorly constrained; again, that February workshop says that
additional models and data are needed to give a defensible technical
basis for that assumption.
Finally, the airborne contaminant plume bypassed the
critical group for most of their simulations, and in the February
workshop, they agreed that the parallel approach to contaminant plume
modeling, directing it towards the critical group, is a conservatism
that avoids the large uncertainties in remobilization of the deposit.
So for the issues that we have, or the primary concerns with
the VA analysis, informally we seem to be making some real progress in
bringing the DOE to address these concerns by developing models and
data that will address them specifically.
We'll see if the TSPA site suitability implements these
changes in the time allotted.
So in conclusion, the staff believes a
ten-to-the-minus-seventh annual probability of volcanic disruption best
explains observed patterns in the Yucca Mountain region and provides us
a technically defensible value for use in risk assessment.
Our current risk assessments of about a millirem per year
from volcanic disruption are supported by direct data, realistic
interpretations, and also conservative evaluations of complex processes.
A continued level of effort can reduce the large
uncertainties on the number of waste packages disrupted and airborne
particle concentrations through time. By continued level of effort, I
mean a sustained level of effort that we've received in past
investigations. We're not calling for an increase in budget, but rather
a sustained level of support.
Our concerns with VA analyses have been addressed informally
by DOE staff. We have a solid expectation that future DOE TSPAs will
evaluate these areas of concern further.
Finally, the DOE license application will need a clear and
credible treatment of igneous activity.
MR. FAIRHURST: So this essentially is, as you say, it's one
millirem per year, with your model.
MR. HILL: Yes.
MR. FAIRHURST: And with a ten-to-the-minus-seventh
probability. And what -- so why do you need a continued level of effort
to reduce large uncertainties?
MR. HILL: As I was explaining, I'm not comfortable
quantifying that level of uncertainty except very qualitatively. That
qualitative level of uncertainty could be above or below an order of
magnitude around that one millirem per year.
MR. FAIRHURST: So it could be up to ten millirem, you
think.
MR. HILL: Yes.
MR. HORNBERGER: But when you said -- to pursue that just a
little bit, though. When you say up to ten millirem, and granted, it's
a gut level feeling, but as you went through your presentation, it
seemed to me -- qualitatively, now, to me, my gut level feeling was that
you introduced a lot more conservatisms than you did areas where you
say, well, this might be a little higher.
Now, if you multiply conservatisms out, this would argue
that it's not one millirem plus or minus an order of magnitude, but it's
--
MR. FAIRHURST: One millirem minus an order --
MR. HORNBERGER: Yes. I mean, it's much more likely to be
less than --
MR. HILL: The area of real conservatism is how does the EBS
respond. I am willing to listen to all arguments on that and evaluate
models and data. The problem is there are no models and data. So what
do we advise the NRC to do with a technical basis? How do you reduce
that conservatism in a robust and defendable manner?
I can't appeal to anything. I'm open to suggestions on what
we should do about waste package resilience. I know we can't propose
that we do an analysis of these under laboratory conditions. We have
made the point to the DOE that this is important to do. If they want to
put their safety case on this resilience, they will need to support it
with models and data.
But what are we to do in the interim? I think we need to
make a distinction between conservatisms and reducible conservatisms.
What can we realistically achieve and what does the DOE need to achieve
here?
MR. HORNBERGER: I guess I worry that the kind of things
that always worry me is if you say, well, this is uncertain, so we'll
sort of bound it by choosing this value, and then we go on and this is
uncertain, so we'll bound it by choosing this value. And all of these
things tend to get multiplied out and pretty soon that's not a very
realistic analysis, because it's not just that you have to have a 747
crash into the Empire State Building, but it has to be on a full moon and
--
MR. HILL: Yes, but we're not talking about a 747 going 300
meters underground on a full moon.
MR. HORNBERGER: I know, I know, I know.
MR. HILL: We're talking about a volcano, of a class that
has existed for 12 million years at the site, that imparts known
physical, thermal, chemical, mechanical loads on systems that were
designed not with that in mind.
So while it does seem at times overly conservative, I'd
challenge the audience to come up with how we can reduce that
conservatism in a manner that's going to sustain us through the
licensing.
MR. FAIRHURST: Let me just ask this, a more peripheral
question. Some time ago, we saw a paper given to us from two
consultants from Bristol, I think.
MR. HILL: Yes.
MR. FAIRHURST: Sparks and --
MR. HILL: Steve Sparks and Andrew Woods.
MR. FAIRHURST: Right. And that was, as I understand,
looking at a magma running -- going down an empty tunnel.
MR. HILL: That's correct.
MR. FAIRHURST: Now, what we've heard so far, what you've
mentioned, that's not involved here.
MR. HILL: That has not been. This is the initial stage of
evaluating magma-repository interaction. There have been no dose
consequences assigned to that scenario.
MR. FAIRHURST: Okay. The question of backfill or
non-backfill was relevant in that context.
MR. HILL: Yes.
MR. FAIRHURST: And probably not relevant in what you are
talking about now, because you're throwing -- you're taking everything
out that's in its path, whether it's full or empty, over that diameter.
MR. HILL: But now what we're trying to do is, how is that
path going to translate, when we put it into the disturbed geologic
setting, and --
MR. FAIRHURST: What do you mean by disturbed geologic
setting?
MR. HILL: A drift. So that's what we mean by disturbed.
MR. FAIRHURST: Understand, all right.
MR. HILL: To make sure everybody is clear, right now, we
have not made any assumptions about that. It's solely here is the hole
and this is --
MR. FAIRHURST: It's one to 50 meters in diameter and it's
going to pick everything up that's inside that.
MR. HILL: If it falls in the hole, it's a goner.
MR. FAIRHURST: It doesn't have to fall in it.
MR. HILL: Using it loosely. If it's entrained in the hole.
MR. FAIRHURST: Right. The hole is coming up, come hell or
high water, you're going to get it. But you did say that these
eruptions do start following essentially the dykes. So a vertical
magma, a line, length of magma. Now, do you understand the mechanics of
how that switches to this - you know, what depth below the surface at
which it suddenly literally erupts, accelerates, et cetera?
MR. HILL: The short answer is no. The whole process of
fragmentation, where we see this segregation into truly two-phase flow
with fragmented melt, is an area of intense controversy among people who
care about fragmented melts, and the depth is within a range: some
people would have it down at about a kilometer, others at less than a
couple of hundred meters.
It depends a lot on the volatile contents, water, carbon
dioxide, the gaseous phases; also on the melt viscosity and the
kinetics of how that gas is evolving.
So we're dealing here with a melt that we believe has two
weight percent water dissolved in it. We have no constraint on CO2
right now. But that is very typical for historical basaltic
eruptions.
What would be atypical is what you see in Hawaii, which has
about, say, half a weight percent of water and a fairly non-fragmented,
effusive, low dispersivity kind of eruption.
There are competing factors, of course, but to first order,
it's the water content that governs where you get that fragmentation
and the transition from a fairly low, ascent-driven velocity to
something that's a gas-driven velocity, on the order of 100 meters per
second.
MR. FAIRHURST: The depth of that repository is about 300
meters.
MR. HILL: Three hundred meters, yes.
MR. FAIRHURST: It's in that "iffy" zone, right?
MR. HILL: It's 300 to 200, depending on exactly where
you're looking.
It's very difficult to quantify these processes at 100 meter
levels.
MR. FAIRHURST: Sure, I understand.
MR. HILL: We're doing extraordinarily well sometimes at
saying it's shallower than a kilometer.
MR. GARRICK: Let me ask a question about the dose pathway,
which I gather is a combination of airborne and ground deposition.
MR. HILL: It's 90 percent inhalation.
MR. GARRICK: It's 90 percent inhalation.
MR. HILL: Yes.
MR. GARRICK: If I were to ask you for a dose distribution,
an uptake curve, what would that look like as a function of time during
the course of the event? Over what period of time is there a dose,
since 90 percent of it is airborne?
MR. HILL: You mean following the eruption or --
MR. GARRICK: Yes.
MR. HORNBERGER: Given an eruption occurs today.
MR. GARRICK: Given an eruption occurs today --
MR. HORNBERGER: How long would the dose of one --
MR. GARRICK: How is that one MR distributed in time?
MR. HILL: Well, it's not one millirem, because that's an
expected value -- are we talking risk or dose? The example I showed in
the presentation was the eruption occurring at 1,000 years. The dose in
that first year of the eruption, primarily from inhalation but with
some sort of a ground shine component, was about 12 rem.
Now, if you look at that deposit, it would decay down to
about 12 millirems over the next 9,000 years, at which point we stop
the simulation at 10,000 years. The magnitude of decay depends also on
your starting condition, because the decay rate will be much quicker
earlier than 1,000 years, when the shorter-lived radionuclides are
decaying out.
You can go to that first curve of dose through time and you
can see up to about 1,000 years, the dose through time curve is very
steep and then shallows out when we get the short-lived nuclides out of
there.
So the decay rate isn't a constant. We approximate a
constant at times in the simple calculations I'm showing, but, of course
--
MR. HORNBERGER: But after 500 years, it looks pretty
constant, right?
MR. HILL: Right.
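An illustrative single-exponential fit to the numbers quoted here (12 rem in the eruption year, about 12 millirem 9,000 years later); as Mr. Hill notes, the real decay rate is not constant, so this is only a crude stand-in.
    import math

    d0_rem, d1_rem, span_yr = 12.0, 0.012, 9000.0
    k = math.log(d0_rem / d1_rem) / span_yr   # effective decay constant, 1/yr

    def deposit_dose_rem(t_years: float) -> float:
        # Stand-in for mixed radionuclide decay plus mass removal.
        return d0_rem * math.exp(-k * t_years)

    print(deposit_dose_rem(500.0))   # dose roughly 500 years after eruption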
MR. GARRICK: Is the re-suspension driver the farmer at the
critical group plowing the field basically and dust being resuspended in
that process?
MR. HILL: We're assuming a 40-hour per week exposure
scenario, where there is some surface-disturbing activity. We're not
assuming a plowing concentration, but a particle concentration that
would be consistent with just walking around.
MR. GARRICK: So you've got ground shine and airborne.
MR. HILL: For all intents and purposes, it's inhalation.
MR. GARRICK: Yes, it's inhalation.
MR. HILL: When we talk about exposures averaged over the
10,000 year scenario. Of course, early on, there is a ground shine
component that's more significant than it is later on, and the
different radionuclides come in at different times. But it is
dominantly governed by the inhalation.
MR. CAMPBELL: And you're assuming that all the waste is
pulverized into ten micron particles.
MR. HILL: The average waste grain size is ten microns. It
has a log-triangular distribution, plus or minus one log unit.
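A minimal sketch of sampling that log-triangular grain size distribution (mode at ten microns, plus or minus one log unit, i.e. 1 to 100 microns):
    import random

    def sample_grain_size_microns() -> float:
        # Triangular in log10 space: 1 to 100 microns, mode at 10.
        log10_d = random.triangular(0.0, 2.0, 1.0)
        return 10.0 ** log10_d

    print(sample_grain_size_microns())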
MR. HORNBERGER: It's interesting, if we carried the dose to
a health effect, it would actually be quite different for this than for
the ground water pathway, because the ground water pathway would have
concentrations essentially constant for very long periods of time,
whereas this decays over 9,000 years. The health effects would actually
be different.
MR. HILL: Right, because we're not talking about a person's
lifetime. Many people, for thousands of years, can have this
contaminated deposit, and the risk to an individual lifetime in those
later years is, of course, different.
MR. GARRICK: Of course, this is more of an episodic event.
So you have a --
MR. FAIRHURST: Not the way he's calculating it.
MR. HILL: At ten-to-the-minus-seven, the probability of two
events is vanishingly small. But, again, we're assuming the event in PA
is a single volcanic conduit and the event does not constitute multiple
conduits within a repository, which is a possibility, but one we have
not explored.
MR. GARRICK: But anytime you talk about a number like
ten-to-the-minus-seven, in theory, you're talking about a recurrence.
MR. HORNBERGER: It is.
MR. HILL: Right.
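For the recurrence point, a quick Poisson illustration of why two events are vanishingly unlikely at this rate over a 10,000 year period (an arithmetic illustration, not a TPA calculation):
    import math

    rate, period = 1e-7, 1e4          # events per year, years
    mu = rate * period                # expected events = 1e-3
    p_one_plus = 1.0 - math.exp(-mu)               # ~1.0e-3
    p_two_plus = 1.0 - math.exp(-mu) * (1.0 + mu)  # ~5.0e-7
    print(p_one_plus, p_two_plus)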
MR. CAMPBELL: You show a number of buried deposits in the
map of Crater Flats and the surrounding areas. How are those
characterized in terms of whether or not they are volcanic? And then
how are they characterized in terms of their age, so that you can fit
them into what you come up with in terms of probability?
You had several that were colored green in there.
MR. HILL: These are all constrained by aeromagnetic and
ground magnetic surveys. They have been modeled, with reasonable
assurance, as basalt within alluvium; we're very confident they do not
represent pieces of bedrock. Let me just say, the largest one, in the
lower right part, has been drilled two times by, I believe it was
Felderhoff Federal, for Shell Oil, as exploration. Both holes
encountered 50 meters of basalt.
That basalt has been dated at 4.1, plus or minus 0.1, million
years. That is the only one of these buried anomalies that's been
drilled directly.
By analogy, we're assuming in the probability models that
these other anomalies down here in southeastern Amargosa Desert are
contemporaneous with this feature, even though we have some evidence to
show that a couple of these have different magnetic orientations. They
had to form at a slightly different time than this one.
But, of course, the only way to date these directly is by
drilling through them.
In terms of up here, this, this, and this are all defined on
the basis of ground magnetic anomalies. We would make the assumption
that they are volcanic, rather than an intrusion that stagnated at
shallow levels, just because we have not seen evidence for that in the
recent past in this region; and based on the depth and the
sedimentation rate out there, these are probably around the Pliocene,
say about five million years or so.
They may be contemporaneous with this event, they may be
older than that. But, again, there is no way to know for sure unless
they were drilled.
MR. FAIRHURST: But there were definitely eruptions.
MR. HILL: We believe they're eruptions. Some of the
magnetic signals give us a hint of a flow outline rather than a uniform
sort of sill. We don't think that they're simple magmas that came up to
within a couple hundred meters of the surface and stagnated within the
alluvium.
We can't eliminate that possibility with 100 percent
confidence, but I think it's highly, highly unlikely that you would
stagnate magmas with these volatile contents within 100 meters or so
of alluvium. It's just very difficult to defend.
MR. CAMPBELL: Why does your probability of a future volcanic
event, including that yellow feature that's at the repository site, in
essence, have so much narrower a range than DOE's PVHA? They were
somewhere anywhere from ten-to-the-minus-seven down to
ten-to-the-minus-ten. Of course, that's an expert elicitation. And
then how do you extrapolate probabilities from what looks like a
trend, kind of almost to the northwest, with some sort of gradient to
the repository?
MR. HILL: First, the reason that there is a greater
variance in the DOE data set is because they have many classes of models
that are called source zone models that say that there is some sort of a
step function isolating the location of volcanism in the past from
locations of future volcanism at the repository site.
So therefore, this recurrence rate only applies within this
zone and the recurrence rate out here is some sort of a random basin
range volcanic recurrence rate on the order of ten-to-the-minus-ten
through ten-to-the-minus-ninth. It depends on whose model you're
talking about.
They say that this region is decoupled from this region and
that is based solely upon expert opinion. There is no geologic feature
out there that can be appealed to, except the past pattern of sparse
events.
That's why they have a larger range. But within that range,
it's important to note that they did have models that came up into the
ten-to-the-minus-seventh range as well.
MR. FAIRHURST: From the expert opinions.
MR. HILL: Yes. From the elicitation. We've done our own
scoping calculation of what the random occurrence of volcanism would be
in the basin and range. If we go ahead and say that things aren't
clustered, but rather that it's sort of a uniform random distribution
and the whole western United States is the source zone, if you will,
that probability comes in around ten-to-the-minus-ninth, which makes
sense when you look at the probability models for up here.
When they have decayed out to Crater -- to Jackass Flat,
we're around ten-to-the-minus-ninth, and there really isn't much change
in that away from the repository.
So I'd say it's very hard to say that the probability here
is ten-to-the-minus-tenth, given your proximity to something that gives
you a ten-to-the-minus-six recurrence rate. But yet that's the nature
of expert elicitation.
By the same token, you need to recognize that that same
methodology can be used by other experts to say the probability is
ten-to-the-minus-sixth at this site, by also defining a source zone
based on expert opinion, and that's been in the peer-reviewed
literature.
MS. DEERING: Did you say the ten millirem -- you said it
could go as high as ten millirem, plus or minus an order of magnitude.
Would you call that worst case?
MR. HILL: No, not --
MS. DEERING: What would you call worst case?
MR. HILL: I would not.
MS. DEERING: Could you put a likelihood on the ten
millirem, in your best judgment?
MR. HILL: No. It would be a guess and I don't think a
guess is appropriate in this venue. The reason I've been kind of cagey
about this is that we need a technical basis to quantify that. We do
not have a technical basis. I'm very uncomfortable with even saying an
order of magnitude at this stage without supporting work, because of the
sensitivity of that number to proposed standards.
MS. DEERING: But the work that you're going to do, you
believe you could have a technical basis.
MR. HILL: Yes. If we are funded for that work, which would
be a level of funding consistent with this year's level of funding.
MR. FAIRHURST: Does the one millirem come from the larger
diameter, the 50?
MR. HILL: The mean dose for an eruption is a very skewed
distribution. The larger events, the worse events, are the ones that
give you a large column with a short-lived eruption and a high wind
speed. So you've got a high concentration that blows out fairly far
into your critical group. Those are your worst volcanic events.
So the mean isn't the 50th percentile in these calculations.
The mean is governed by these small eruptions with higher wind speeds,
and I believe it's in the upper 80th percentile. And, of course, a
larger source term, toward ten waste packages rather than one, is going
to give you a higher dose.
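A small Monte Carlo illustration of that skewness point, using an arbitrary lognormal as a stand-in for the dose distribution (the parameters are not derived from the TPA code):
    import random, statistics

    random.seed(0)
    doses = sorted(random.lognormvariate(0.0, 2.0) for _ in range(100_000))
    mean = statistics.fmean(doses)
    rank = sum(d <= mean for d in doses) / len(doses)
    # For this heavy-tailed example the mean lands up around the
    # mid-80th percentile, well above the median.
    print(f"mean is at the {100 * rank:.0f}th percentile")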
MR. TRAPP: Just a comment from this end. I'd like to
really push something that was stated earlier today. We're not the ones
that are being licensed. What we're trying to do is point out areas of
vulnerability that DOE has got to make sure that they come through and
answer on their licensing case.
The areas that you talked about with the conservatisms are
areas where we've had some problems with uncertainties that need to be
worked on.
We also --
We also --
MR. HILL: He's gone? Can you hear us, John? We've lost
you. We can hear you now. Are you still there?
MR. TRAPP: I'm still here.
MR. HILL: You dropped off.
MR. TRAPP: Anyway, the other point that we're trying to
make here is that we're trying to recognize areas that we've got
to make sure DOE addresses in licensing, but also recognizing areas
where intervenors, et cetera, can come in and really toss in different
numbers.
We need to constrain those numbers. So, again, it's
preparing us for reviewing the DOE licensing case, not us going into
licensing, which some of the comments seemed to be based on.
MR. HILL: To emphasize that: in the viability assessment,
the risk from igneous activity during the first 10,000 years after
closure was zero millirems per year. There was no risk. I think that
would be a difficult position to support in licensing. So to emphasize,
we're not saying that the analyses show that we're above a dose limit.
We clearly have a difference of opinion with the DOE on what the
relative risk from igneous activity is in this performance assessment,
and to come in with an analysis such as was presented in the viability
assessment would be a great impediment for us in reviewing and
licensing the site.
MR. GARRICK: John Trapp is right. The committee keeps
wanting to design this thing, but the staff are going to deal with the
review. We apologize for that, but sometimes that's the best way for us
to get into some of the technical issues, and it's appropriate for us
to be reminded from time to time that DOE is the one that's trying to
get a license here.
Any other questions?
MR. TRAPP: I would like to add one thing. If I could get
that "I was right" --
[Laughter.]
MR. TRAPP: www.acnw.com.
MR. GARRICK: Any other questions?
[No response.]
MR. GARRICK: Thanks a lot. Very good. This concludes the
presentation and discussion phase of today's agenda.
The committee will now go into a discussion of primarily administrative
matters having to do with agendas and future reports and meetings that
we've attended and what have you. For that phase of our meeting, we do
not need the court reporter. I think that what we'll do is take just a
very short break, so that we can make whatever adjustments we need to
make.
[Whereupon, at 5:20 p.m., the meeting was recessed, to
reconvene at 8:30 a.m., Tuesday, June 29, 1999.]