479th ACRS Meeting - February 1, 2001
Official Transcript of Proceedings
NUCLEAR REGULATORY COMMISSION
Title: Advisory Committee on Reactor Safeguards
479th Meeting
Docket Number: (not applicable)
Location: Rockville, Maryland
Date: Thursday, February 1, 2001
Work Order No.: NRC-005 Pages 1-240
NEAL R. GROSS AND CO., INC.
Court Reporters and Transcribers
1323 Rhode Island Avenue, N.W.
Washington, D.C. 20005
(202) 234-4433
UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
+ + + + +
479TH MEETING
ADVISORY COMMITTEE ON REACTOR SAFEGUARDS
(ACRS)
+ + + + +
THURSDAY
FEBRUARY 1, 2001
+ + + + +
ROCKVILLE, MARYLAND
+ + + + +
The Advisory Committee met at the Nuclear
Regulatory Commission, Two White Flint North, Room
T2B3, 11545 Rockville Pike, at 8:30 a.m., Dr. George
Apostolakis, Chairman, presiding.
COMMITTEE MEMBERS:
GEORGE APOSTOLAKIS, Chairman
MARIO V. BONACA, Vice Chairman
DR. THOMAS S. KRESS, Member
GRAHAM S. LEITCH, Member
DR. DANA A. POWERS, Member
DR. ROBERT L. SEARLE, Member
DR. WILLIAM J. SHACK, Member
JOHN D. SIEBER, Member
COMMITTEE MEMBERS: (CONT.)
ROBERT E. UHRIG, Member
GRAHAM B. WALLIS, Member
ACRS STAFF PRESENT:
JOHN T. LARKINS, Executive Director
ALSO PRESENT:
RALPH CARUSO
F. CHERRY
N. CHOKSHI
F. ELTAWILA
JOHN FLACH
WILLIAM JONES
MARK KIRK
RALPH LANDRY
SHAH MALIK
JOCELYN MITCHELL
GARETH PARRY
NATHAN SIU
MOHAMMED SHUCIRI
ERIC THORNSBURY
EDWARD THRON
JARED WERMIEL
HUGH WOODS
A-G-E-N-D-A
AGENDA ITEM PAGE
Opening Remarks by Chairman Apostolakis. . . . . . 4
NRC-RES Presentation: Status of PTS Rule . . . . . 8
Screening Criterion Re-Evaluation
Siemens S-RELAP5 Appendix K Small. . . . . . . . .91
Break LOCA Code
Proposed ANS Standard on Internal Events PRA . . 142
Reprioritization of Generic Safety
Issue 152. . . . . . . . . . . . . . . . . 222
Adjournment
P-R-O-C-E-E-D-I-N-G-S
(8:30 a.m.)
CHAIRMAN APOSTOLAKIS: The meeting will
now come to order. This is the first day of the 479th
meeting of the Advisory Committee on Reactor
Safeguards. At today's meeting the committee will
consider the following.
Treatment of uncertainties in the elements
of the PTS technical basis reevaluation project;
Siemens S-RELAP5, Appendix K Small Break LOCA code;
proposed ANS standard on external events PRA;
reprioritization and proposed resolution of generic safety
issue 152; design basis for valves that might be
subjected to significant blowdown loads; and proposed
ACRS reports.
The portion of the session associated with
the Siemens code may be closed to discuss Siemens
Power Corporation's proprietary information. This
meeting is being conducted in accordance with the
provisions of the Federal Advisory Committee Act.
Dr. John T. Larkins is the designated
Federal Official for the initial portion of the
meeting.
We have received no written comments or
requests for time to make oral statements from members
of the public regarding today's sessions. A
transcript of portions of the meeting is being kept,
and it is requested that the speakers use one of the
microphones, identify themselves, and speak with
sufficient clarity and volume so that it can be
readily heard.
This is my first day as the
Committee's Chairman, and I think the first thing we
should do is thank Dr. Powers, who is just joining
us --
(Laughter.)
CHAIRMAN APOSTOLAKIS: -- for the superb
job that he did the last two years leading this
committee. Thank you very much, Dana.
(Applause.)
CHAIRMAN APOSTOLAKIS: I would also like
to thank my colleagues for electing me chairman of
this committee. I have been a member for about 5-1/2
years now, and I have served under three chairmen --
Professor Searle, Dr. Kress, and Dr. Powers.
And although their managerial styles were
somewhat different, they all had one common objective,
namely to make sure that this committee provided sound
technical advice to the commission in a timely manner,
and I can only promise to try to do the same.
Sadly, today, and this week, happens to be
the last meeting of Professor Searle. He is
completing his second term on the committee. Bob, I'm
sure I am speaking on behalf of all of the members of
the committee when I say that we will really miss you
and your wise advice.
And finally also during the 5-1/2 years
that I have been a member, I must say that I have been
very impressed by the professionalism of the ACRS
staff under the able leadership of Dr. Larkins. I
believe it should be on the record that this committee
could not function without the support that we are
getting from the ACRS staff.
DR. SEARLE: I may have some remarks to
make, but I imagine that there will be a more
appropriate time a little later to do that.
CHAIRMAN APOSTOLAKIS: Whenever you want,
Bob. Are there any comments or statements that other
members would like to make?
DR. SEARLE: I would like to make a brief
statement. For the benefit of the members of the
commission staff that are so intimately involved with
the ACRS, but whom we have interacted with from time
to time, I would like to express my own personal
appreciation and admiration for the way in which
pretty much across the board they have conducted
themselves and interacted with the committee.
I have always received the utmost
cooperation when I was in a position where I had to
work with them, and I guess the thing is that I
believe we have a climate of mutual respect, and so
that serves the process well.
We talk about issues and we don't talk
about personalities, and I am so very pleased to have
had the opportunity to work with all of them. And
since some of them are here, in here anyway right now,
I just would like to say thank you very much.
CHAIRMAN APOSTOLAKIS: Very good. Thank
you, Bob.
DR. KRESS: I think it is a shame that Bob
is leaving, just when I am learning how to speak Texan.
(Laughter.)
DR. POWERS: And need to learn how to
speak Texas.
(Laughter.)
DR. SEARLE: Well, I am still working on
Tennessee.
(Laughter.)
CHAIRMAN APOSTOLAKIS: Our first topic
today is treatment of uncertainties in the elements of
the PTS technical basis reevaluation project. Dr.
Shack, I believe you will lead us through this.
DR. SHACK: Okay. We had a subcommittee
meeting on January 18th, where we had a fairly
detailed discussion of the treatment of uncertainties
in the PTS project, and as Tom Kress has pointed out
before, not only is PTS important in its own right,
but this we think is sort of a prototype example of
the kind of detailed treatment of uncertainties one
may need in other situations.
What is unique about this is the attempt
to integrate the treatment of uncertainties in all
aspects of the problem in the framework that
uncertainties have been treated typically in PRAs.
And I think we saw substantial progress at the
subcommittee meeting in the treatment of how they were
handling the aleatory and epistemic uncertainties in
the fracture toughness.
We saw some work towards the treatment of
uncertainties in the thermal hydraulics, and I
believe we are going to get an overview of the
approach to the treatment of uncertainties from
pressurized thermal shock in the presentation before
the full committee today.
But again this is a work in progress. We
are not really expecting to write a letter at the
moment. This is a chance to sort of see how they are
working their way through it, because they are
breaking new ground here and it is not a conventional
treatment of uncertainties that we are looking at.
With that, I will turn it over to Mike Mayfield, I
guess, to start, and Nathan Siu.
MR. MAYFIELD: Good morning. We
appreciate the opportunity to come to the main
committee again, or the full committee, and talk about
the progress that we are making on this project.
As Dr. Shack pointed out, this is -- we
are starting to move back some of the frontiers, at
least in our traditional treatment of probabilistic
fracture mechanics as it relates to the structural
integrity of major components.
I just wanted to set the stage a little
bit, and then turn it over to Nathan Siu to work
through the rest of the presentation. We just wanted
to remind you first of all that the objective of the
program is to develop a technical basis for the
potential revision to the PTS rule.
We are not telling you or the public that
we necessarily will revise the PTS rule. Our activity
at this point is to look at the technical basis to see
if there is a justification for revising the rule, and
what that might look like, and then to make
recommendations to the Office of Nuclear Reactor
Regulation, since the rule making function is their
responsibility.
DR. POWERS: Could you remind me what
prompted you to undertake this daunting task?
MR. MAYFIELD: Sure. We have seen some
improvements -- well, the PTS rule itself is based on
technologies from the late '70s and early '80s. We
have made some major improvements in a number of
areas.
For instance, in my business in
particular, the probabilistic fracture mechanics
area, our understanding of embrittlement and fracture
toughness, and the flaw distributions that were a
major source, and in fact the major source of
uncertainty in the original analyses.
So, based on some very limited
scoping analyses, we felt like there was a strong
basis to undertake a more rigorous treatment or
reevaluation of PTS.
The notion at that point was that -- and
I think going back to the Yankee Rowe evaluation, that
there was significant conservatism embedded in the PTS
rule, and the analyses that are in Reg Guide 1.154.
So we thought that the technology had
improved to the point or matured to the point that we
should take that up and revisit those technical
underpinnings.
This is the first major application of
risk informed methodology to what has been
characterized as an adequate protection rule, and
while I don't want to get into a debate on what we
mean by adequate protection, the rule has been
characterized as such, and it has to do with
backfitting -- the regulatory piece of it has to do
with backfitting requirements.
It was originally promulgated as an
adequate protection rule when we revised it in the
mid-1980s and early '90s, and it was again treated as
an adequate protection rule.
But the fact that we are now taking a look
at it in a risk informed approach is causing us to
examine what we really mean and how we go about
dealing with that.
So that will create for us some additional
dialogue with the committee, I expect, as we go along.
We are evaluating four plants in an effort to develop
a generic approach that will cover the fleet of PWRs.
We recognize going in that evaluating only
four plants and trying to use that as a surrogate for
some 80ish PWRs brings with it certain stretches of
faith, and that is something that we are taking on as
a specific activity somewhat later in the program.
But it is an uncertainty in what we are
doing. The four plants we are looking at are Oconee,
which is a B&W design, Calvert Cliffs, and Palisades
are CE designs, and Beaver Valley, Unit 1, is a
Westinghouse design.
We do not intend to do plant specific
evaluations for the entire PWR fleet. That is well
beyond our resource capability, and it is not
something that is credible for us to undertake.
We do feel that by looking at this small
sample that we can at least improve on the basis for
the rule that is out there today, which is based
totally on stylized transients, and no plant specific
features.
We are looking to use the best available
tools for the analysis, the tools that exist today.
We are making some advances in some of the tools, but
we are not looking to make major improvements in some
of the underlying technologies.
And we felt that the improvements that
have been made since the time the PTS rule was
originally promulgated -- the improvements in the
technologies up to this point -- provide a sufficient
justification, and it is not a practical thing for us
to undertake major revisions to thermal hydraulics
neutron transport calculations, and that sort of
thing.
So we are using the state of the
technology by and large as it exists today. As Dr.
Shack pointed out, this is one of the continuing
series of briefings that we feel have been very useful
in bringing the committee and keeping the committee up
to date on what we are doing.
And we are soliciting your feedback as we
go along. We didn't want to get into this project,
which is a major resource investment for us, and spend
a couple of years working on it, and get to the end
only to have the committee say, well, you have missed
these key issues.
So we wanted to try and solicit your input
along the way, not so much in an effort to get your
preendorsement of the program, but to help solicit
your input, and if you identify something that we are
missing, then we can get that fixed as we go along, so
that at the end of the day we have a complete package
that the committee can review.
DR. SHACK: Just to come back to Dana's
question a little bit. If you were convinced that the
analysis was conservative, what would be the impetus
for revising the rule? Only Palisades is going to hit
the screening criterion, at least under current
projections, right, for 40 years?
MR. MAYFIELD: Well, that's true. For
Palisades, it was a bit over 40, and there are a small
number of plants that would be approaching the
screening criterion out to 60. There is, however, a
fair bit of uncertainty within the licensees, or at
least it has been expressed to us, about what else
would the staff do with new embrittlement
correlations.
As the committee probably knows, very
small changes in chemical composition for the welds,
and our understanding of the chemical composition for
the welds can make very large changes in the estimates
of embrittlement of the vessel.
So that is something that the -- I think
the licensees would like to see a little more
stability in what we are doing, and to remove the
unnecessary conservatism.
So we originally took this on with the
notion that there was a fair bit of unnecessary
conservatism embedded in the rule, and to try and
bring it back, and base it on credible technology, as
opposed to conservative estimates made out of
ignorance. So that was motivating it initially, okay?
DR. SHACK: Okay.
DR. SEARLE: Mike, you made the comment
earlier that this is based on existing technology.
MR. MAYFIELD: Yes.
DR. SEARLE: At the same time, I think it
is worthwhile to recall that there were places where
specifically focused activities took place and
addressed what were at the time considered to be
concerns. Namely, as I understand it, the ENDF/B-VI.
MR. MAYFIELD: Yes, sir.
DR. SEARLE: And the cross-section
rendering grew out of a concern for the way in which
iron was -- and some other things in that
neighborhood, were being treated in the attenuation
calculation.
MR. MAYFIELD: That's correct.
DR. SEARLE: So there have been some
rather focused efforts in various areas to address and
identify issues of that sort?
MR. MAYFIELD: That's exactly right. The
cross-section libraries was one effort. We have
published -- and in fact the committee reviewed in
December a regulatory guide on neutron transport
calculations and improvements in the way that we go at
that, and the way that uncertainties are handled in
those calculations.
We have made improvements in the way we do
the fracture mechanics analyses, and some of the
underlying models there. We have got a much better
handle on embrittlement trends today, and the flaw
distribution work.
So there have been over the last 10 years
a number of major undertakings to improve the
state of the technology.
DR. SEARLE: If there are -- I have become
aware of a problem. Some people who are involved with
the ASTM code group, apparently had questions
concerning the attenuation calculation and submitted
a large number of questions, which were apparently not
addressed, at least not to their satisfaction, in the
draft or in what is -- well, 1065, or whatever that
number is.
Anyway, the Reg Guide that was published
just recently, and I believe that George received a
communication on this concern, and I thought it had
been passed along to other people in the commission.
MR. MAYFIELD: I'm sorry, but you are
catching me cold today here.
CHAIRMAN APOSTOLAKIS: I'm not sure I
follow.
DR. SEARLE: The thing that we talked
about here last week. I'm sorry, but --
MR. MAYFIELD: Historically, there has
been some disagreement over how attenuation is
handled, and it is an issue that we agree that the
technical basis needs to be revisited.
I think it would be fair to say that the
basis for arguing for a change is not a lot stronger
than the basis arguing against the change.
DR. SEARLE: I'm sorry for blindsiding
you.
MR. MAYFIELD: Well, I will be happy to
talk to you about it, and see where we can go. The
key issue that we are here to talk about today, and
that we met with the subcommittee on a couple of weeks
ago, is the treatment of uncertainties in the major
areas of the analysis.
We are also going to, in addition to
giving you an overview, we are going to try and deal
with some of the questions and comments that were
raised during the January 18th meeting.
And with that, Mr. Chairman, I would like
to turn it over to Nathan Siu to do the bulk of the
briefing.
MR. SIU: Good morning. I will first give
you an outline of what I am going to talk about. You
have quite a few slides in your packet, and a number
of those are backup slides. So don't worry about the
length of it. We can obviously tailor the
presentation to the time that we have got.
But what I would like to talk about first
of all are the objectives and the conceptual approach
regarding the treatment of uncertainties. Sometimes
we are going to get a little -- well, not muddled, but
we can't avoid the general issue of how is the
integrated analysis proceeding, because the
uncertainty analysis is an integral part of the
overall analysis.
And there will be times when we are
talking in general about how the overall
computation will proceed. But I would
really like to emphasize that the treatment of uncertainty
is within that computational flow.
I will give you an overview of how we are
proceeding with the analysis, and we will try and
provide some of the details or the
information that was perhaps lacking in the
subcommittee presentation, where we provided a high
level framework, and then some of the bits and pieces,
and didn't speak too well to how those two link
together.
We will talk about the status of the major
discipline activities in PRA, thermal hydraulics, and
probabilistic fracture mechanics, and time permitting,
we will get into each of these areas in a little bit
more detail.
And in particular we have developed some
draft results from the Oconee study. We discussed
these results with Duke Energy a week or so ago and
received comments on that.
These are highly draft results, but we
wanted to give you a sense of how things are
proceeding. In the case of the TH analysis, again,
obviously we have TH results. Runs have been
performed, and we have time temperature traces,
pressure time traces.
The approach refers again to the treatment
of uncertainties, and how we are going to deal with
uncertainties in those computations, and similarly we
are going to talk about the approach we are using for
fracture mechanics.
And some of these things we are getting
draft results, but that we are not ready to present at
this point in time. At the end, I will then have a
discussion of key issues and summarize where I think
we are.
Okay. Again, this part of the analysis,
we are assessing uncertainties in the estimates of PTS
risks. That means that we are going to quantify the
uncertainties, and also of course try to identify the
driving sources of those uncertainties.
And the reason that we are doing this is
to support the technical basis for a potential rule
change. So, for example, we would be looking at
potentially new screening criteria, and potentially
new guidance for how you do a plant specific analysis
if the screening criteria are not met.
This just illustrates a conceptual diagram
of how the screening criterion might be developed, and
shows the roles of uncertainties here, where this is
the RT-PTS, the RT-NDT at the end of license, and this
is the through wall crack frequency.
You might have estimates of the through
wall crack frequency for a given plant, and the
uncertainty bands about that estimate, and somehow we
need to develop a line that relates the two, and
develop a screening value for RT-PTS based on some
notion of what is an acceptable through wall crack
frequency.
That is conceptually how we might approach
it. Again, please don't read too much into that
diagram, because we have not put a lot of work into
figuring out how we are really going to proceed. But this
shows how the uncertainties will play into that kind
of process.
DR. POWERS: Could you explain to me
better what the significance of the lines on either
side of the square are?
MR. SIU: The dashed lines?
DR. POWERS: No, the --
MR. SIU: Oh, this would be the through
wall crack frequency, or let's say the mean estimate,
and then maybe you would have a 95th percentile, and
a 5th percentile. So it indicates the range.
DR. POWERS: How do you decide to use 95
rather than 99, or 80, or 2, or --
MR. SIU: Well, that's part of my problem.
We have not really gone through the work of figuring
out exactly how we are going to use these estimates.
DR. POWERS: How do people in other
contexts decide what to use?
MR. SIU: In other contexts?
DR. POWERS: Yes. I mean, you haven't
done it here, but do we know -- I mean, some people
put like standard errors are the length of those bars,
and they calculate a variance, and they put the square
root of the variance on either side for a bunch of
measurements.
It escapes me exactly what the probability
is on that, but maybe it is like 82 percent or
something like that, and other people would use 95.
I mean, how do you decide? Is that a subjective
decision entirely, or --
MR. SIU: I imagine that it would be
because one of the things that we will get to is that
these error bars are going to include the computed
uncertainty.
There will be uncertainties that we don't
think that we can calculate very well given the
current state of technology, and in particular model
uncertainties associated with some of the codes that
we are using.
So I think what you are going to get
realistically is an estimate of the computed
uncertainty, plus a description of uncertainties
associated with other issues -- maybe sensitivity
calculations, but some indication of what the full
range of uncertainty might be.
So to -- I don't see necessarily -- and
again we have not worked this out, but I don't see
just coming up with a simple rule that says pick a 99
and you are done.
DR. POWERS: But I think you have really
answered my question.
CHAIRMAN APOSTOLAKIS: Have we used
percentiles in any other situation? My impression is
that we are using mean values -- is that true -- when we
allocate or when we decide that the contribution from
this particular accident is such and such, why --
MR. SIU: Excuse me, George, but this is
a screening criterion. This is the first step.
CHAIRMAN APOSTOLAKIS: So it is a
screening guide then?
MR. SIU: Yes. They are trying to just
say that if you meet a certain embrittlement level --
and that is the current rule right now, but if you
meet a certain level, and if you don't come up to that
level of embrittlement, you don't have to do anything
more.
CHAIRMAN APOSTOLAKIS: I see. Okay.
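A purely illustrative sketch of the kind of screening-criterion construction described in this exchange; the plant values, uncertainty distributions, acceptance frequency, and fitted trend below are all hypothetical, not project results.

import numpy as np

# Hypothetical per-plant results: RT_PTS (deg F) and Monte Carlo samples of
# through-wall crack frequency (TWCF, per reactor-year). All values made up.
rng = np.random.default_rng(0)
plants = {
    "plant_A": (245.0, rng.lognormal(mean=np.log(2e-7), sigma=1.0, size=5000)),
    "plant_B": (270.0, rng.lognormal(mean=np.log(1e-6), sigma=1.0, size=5000)),
    "plant_C": (290.0, rng.lognormal(mean=np.log(5e-6), sigma=1.0, size=5000)),
}

# Summarize each plant's epistemic uncertainty: mean and 5th/95th percentiles.
rt, mean_twcf = [], []
for name, (rt_pts, samples) in plants.items():
    p5, p95 = np.percentile(samples, [5, 95])
    rt.append(rt_pts)
    mean_twcf.append(samples.mean())
    print(f"{name}: RT_PTS={rt_pts:.0f} F  mean={samples.mean():.2e}  "
          f"5th={p5:.2e}  95th={p95:.2e}")

# Fit a simple log-linear trend of mean TWCF versus RT_PTS and solve for the
# RT_PTS at which the trend crosses a hypothetical acceptance frequency.
slope, intercept = np.polyfit(rt, np.log(mean_twcf), 1)
acceptance = 5e-6          # hypothetical acceptable TWCF, per reactor-year
screening_rt = (np.log(acceptance) - intercept) / slope
print(f"illustrative screening value: RT_PTS ~ {screening_rt:.0f} F")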
MR. LEITCH: The three data points there
represent three different vessels, or is that at a
different time and --
MR. SIU: Yes, as the embrittlement
increases, RT-PTS would increase. Again, how we are
going to match up with the four plants, which can have
very different results, and how we are going to
generalize to the larger population, these are big
questions.
The analysis -- to do the uncertainty
analysis now, we have to categorize the sources of
uncertainty, because that is built into notions of
which kind of matrix we will be using.
We have to construct an aleatory model,
and I believe we briefed the committee about the basic
notion of aleatory and epistemic uncertainties in the
PTS analysis.
And we then have to propagate epistemic
uncertainties through the aleatory model, and I will
try to walk you through that in a fairly high level
manner.
Conceptually, how we might approach this
is that we would develop event sequences, using a PRA
event sequence model to identify what are the
potential challenges to the vessel, or scenarios that
could challenge the vessel.
And each has a certain frequency, and let's
call that lambda, and there is uncertainty about that
frequency. That is the epistemic uncertainty about
the parameter lambda, and lambda is the measure of the
aleatory uncertainty.
That result -- and again this is
conceptual -- gets fed into a thermal hydraulics
analysis, where for each PRA scenario we identify a
number of thermal hydraulic subscenarios -- different
variants of that PRA scenario -- and perhaps differences
in timing of actions.
We would have to develop distributions for
the probabilities of each of these variants, as well
as distributions for the thermal hydraulic
characteristic variables that we care about. For
example, the pressure and temperature over time, the
temperature in the downcomer.
Using that information, we would feed into
a stress strength analysis, where you look at the
stress on the vessel, which is a function of these
parameters, the temperature and the pressure.
Therefore, that of course would be uncertain as well.
And you compare that against the strength,
which has its own uncertainties, and develop a
distribution for the conditional probability of vessel
failure given this scenario, and subscenario, and
integrate the results together, and get a through wall
crack frequency with probability distribution.
And without getting into the details too
much, there is something obvious here. There could be
a combinatorial explosion here as you develop more
thermal hydraulic subscenarios to append onto the PRA
scenarios.
And that would require a lot of thermal
hydraulic analyses, and then you would have to feed
those into the stress strength analysis, which also
has its own variance.
We are not doing it that way for obvious
reasons. It's just that we can't do the computations
to this level. So let me talk a bit about some of the
simplifications we are employing.
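A toy sketch of the conceptual structure just outlined -- an outer epistemic loop over the scenario frequency, the subscenario split, and the conditional failure probabilities, with the aleatory model summed inside. All numbers are made up, and only a handful of subscenarios are carried; the full-scale version of this is exactly what the simplifications being discussed are meant to avoid.

import numpy as np

rng = np.random.default_rng(1)
N_EPISTEMIC = 2000  # outer-loop (epistemic) samples, hypothetical

twcf_samples = []
for _ in range(N_EPISTEMIC):
    # Epistemic sample of the PRA scenario frequency lambda (per reactor-year).
    lam = rng.lognormal(mean=np.log(1e-3), sigma=0.8)

    # Epistemic sample of the split among thermal hydraulic subscenarios
    # (e.g., different operator action timings); Dirichlet keeps the sum at 1.
    p_sub = rng.dirichlet([4.0, 2.0, 1.0])

    # Epistemic sample of the conditional probability of vessel failure
    # given each subscenario (stand-in for a PFM result, purely illustrative).
    p_fail = rng.lognormal(mean=np.log([1e-4, 5e-4, 2e-3]), sigma=0.6)

    # Aleatory model: TWCF = lambda * sum_j P(subscenario j) * P(failure | j).
    twcf_samples.append(lam * np.sum(p_sub * p_fail))

twcf_samples = np.array(twcf_samples)
print("mean TWCF  :", f"{twcf_samples.mean():.2e}")
print("5th / 95th :", [f"{v:.2e}" for v in np.percentile(twcf_samples, [5, 95])])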
CHAIRMAN APOSTOLAKIS: How much of this
was done in the original analysis?
MR. SIU: Not formally. My understanding
is that there were sensitivity analyses, but there
were no -- for example, even PRA uncertainties in the
event sequence frequencies were not computed. This
was back in the early '80s.
MR. MAYFIELD: The original rule had none
of the things that Nathan just talked about. They
took some stylized transients and did what amounted to
deterministic calculations.
Then there were some on the side
probabilistic fracture mechanics calculations done,
where they tried to include some of the Monte Carlo
scheme, including variations or distributions on
flaws, on chemical compositions, some of the more
obvious variables to include.
But it was nothing as elegant as what is
being talked about here. There were subsequent
analyses, called the integrated pressurized thermal
shock analyses that Oak Ridge performed, and looked at
three plants.
And those analyses looked more like what
we are doing today. But the treatment of
uncertainties was not as rigorous as what we are
trying to do today.
MR. SIU: And that is an important point.
Again, just because there were resource constraints,
the time that it actually takes to run a thermal
hydraulic calculation, maybe on the scale of hours,
but also the pre-and-post processing requirements --
you get a result and you have to look at it and make
sure it makes sense before you go forward with it.
DR. POWERS: Why is RELAP5, and not the
consolidated NRC thermal hydraulic code, being used for
these analyses?
MR. ELTAWILA: This is Farouk Eltawila
from research. We actually are doing the analysis
using both codes, but we have not finished the
consolidation completely right now.
So once we complete the consolidation --
so we are doing it, but we are relying on RELAP
because it has gone through a lot of assessments, and
the consolidated code has not gone through this
rigorous assessment at this time.
So eventually once we finish all the
calculations, we are going to run with the final
version of the consolidated codes.
DR. POWERS: So at some point in time, we
will get a comparison between the two?
MR. ELTAWILA: Absolutely.
DR. POWERS: It may not be part of the GS
effort, but at some time we will get to see how well
the --
MR. ELTAWILA: Well, actually the analysis
is done also at this time with the consolidated code,
but we are focusing for the purpose of the rule making
change, we are going to rely on the RELAP5
calculations.
DR. POWERS: I mean, that's fine, but we
will get to see it sometime?
MR. ELTAWILA: Yes.
DR. POWERS: That's good. That's good.
MR. SIU: Correct me if I am wrong,
Farouk, but even with the consolidated code, I imagine
there are significant resource requirements for a
particular run?
MR. ELTAWILA: There is no doubt about it.
MR. SIU: So for that reason, we need to
obviously use the standard strategy of binning similar
sequences to represent the results of the PRA analysis
with a very limited set, a relatively limited set of
thermal hydraulic runs.
DR. POWERS: Can you tell me how you
decide a sequence is similar?
MR. SIU: We have rules for doing that.
I wasn't prepared to get into the details of the
rules, and we can chat about that as we -- a little
bit later perhaps, or if I haven't answered that by
the end of the presentation, I will make sure that we
come back to it.
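As a purely hypothetical illustration of what attribute-based binning of sequences onto a limited set of thermal hydraulic runs can look like: the attributes, thresholds, and sequences below are invented, and the project's actual binning rules are not reproduced here.

from collections import defaultdict

# Purely hypothetical attributes for grouping PRA sequences onto a limited set
# of thermal hydraulic runs; not the project's actual similarity rules.
sequences = [
    # (sequence id, break category, min downcomer temp (F), repressurizes?)
    ("seq-001", "small LOCA", 320, False),
    ("seq-002", "small LOCA", 315, False),
    ("seq-003", "small LOCA", 250, True),
    ("seq-004", "MSLB",       280, False),
]

def bin_key(break_cat, min_temp, repress):
    """Coarse similarity key: same break category, same 50 F temperature band,
    same repressurization behavior (all illustrative choices)."""
    return (break_cat, int(min_temp // 50), repress)

bins = defaultdict(list)
for seq_id, cat, tmin, rep in sequences:
    bins[bin_key(cat, tmin, rep)].append(seq_id)

for key, members in bins.items():
    print(key, "->", members)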
We did present that at the subcommittee,
and we had provided samples of the rules that we were
using. Another issue with how we are approaching
this, in terms of that conceptual model -- remember I
showed you bands for the uncertainties about
temperature and pressure over time.
Part of the uncertainties in those bands of
course comes from model uncertainties in principle.
We don't yet have well established techniques for
dealing with model uncertainties. There are a number of
proposed approaches.
We have done some initial work in that
area, and so I think at this point it is fair to say
that the formal methods are under development, and we
can chat about that a lot.
But again for the purpose of what we are
trying to do -- and that gets back to Mike's point
about using available technology. This is one case
where we are not trying to push the envelope very
hard. We are trying to use what we have got.
One of the reasons, of course, is that we
have limited data now to really apply the methods that
we have got if you want to use, for example, a Bayesian
approach to estimate the -- to quantify the model
uncertainties, we would like to have some data to use
as part of that quantification process. And the
amount of data relevant for these sequences is highly
limited.
CHAIRMAN APOSTOLAKIS: So you will come
back to this?
MR. SIU: I wasn't planning to, but we can
talk about it now if you would like. This is just a
limitation, and so because of this limitation, this is
how we are approaching the problem.
We are certainly going to quantify
parameter uncertainties. We are dealing with boundary
conditions. Again, things like -- you can call them
parameters -- the time at which an action occurs,
and variations in that.
Submodels to some extent -- for example,
if you are talking about flow through an opening, we
can deal with that. But talking about -- let's say
RELAP5 is an assemblage of submodels and of course
uncertainty is associated with that assemblage.
There are uncertainties with the nodalization,
and uncertainties with the application. These are
things that we are not addressing in the quantitative
analysis at this point.
We are not planning to, and we are of
course going to supplement whatever information we
have with the results of experiments to address issues
that were raised in the subcommittee, for example,
about the possibility of a thermal plume.
And we are also going to perform selective
sensitivity studies. So we are not going to just
accept things directly as is, and in fact a comparison
with the consolidated code probably would be another
case of providing some benchmarking.
But again I think this is where we are
going to have the qualitative discussion of
uncertainties, as well as the quantitative discussion.
CHAIRMAN APOSTOLAKIS: But when you say
submodel, I remember in one of the earlier
presentations there was a diagram that said here we
are using the correlation and we are not so sure. You
are going to have an uncertainty about the correlation
itself?
MR. SIU: Yes, at that level, because we
can translate that relatively simply into a boundary
condition kind of representation, and I think that
there is enough information on that particular
submodel that this issue is perhaps of less interest,
the issue of limited data.
CHAIRMAN APOSTOLAKIS: But we have never
really heard how you are going to do that, right? I
mean, you never presented that, right?
MR. SIU: This is still frankly under
discussion.
CHAIRMAN APOSTOLAKIS: Okay.
MR. SIU: Given those simplifications,
this is a variant of a diagram that you have seen
before. I have tried to put -- there is a lot of
information here that we don't have to get into at
this point, but again it shows the PRA event sequence
analysis, the thermal hydraulic analysis, and the
probablistic fracture mechanics analysis.
The key point here is just simply the
binning idea, that we are taking sequences, and we are
binning them into a small number of thermal hydraulic
bins, and then possibly reexpanding those bins to
account for variations in the -- let's say the
boundary conditions just as a simple example.
There are uncertainties in all of the
parameters, and that's why I have the little pi there
to represent the epistemic uncertainties. These are
being propagated through the analysis, and that gets
fed into a stress strength analysis, where the stress
now is a function of the deterministic temperature and
pressure traces here.
CHAIRMAN APOSTOLAKIS: And this is a
generic pi, right?
MR. SIU: This is a generic pi, yes. It
is not the same pi.
CHAIRMAN APOSTOLAKIS: It is not the same
pi? Okay.
MR. SIU: But again the point is that for
each of these thermal hydraulic subscenarios, we have
a defined trace here. We don't have the bands
anymore, and we try to accommodate the bands through
the definition of these subscenarios, but this is a
limitation in the approach that we have taken.
CHAIRMAN APOSTOLAKIS: I'm sorry, but I
didn't follow that. What is it --
MR. SIU: In the conceptual model, we have
the uncertainty bands, and let's say about
temperature.
CHAIRMAN APOSTOLAKIS: Right.
MR. SIU: What you have here instead is a
single trace that is dependent on your definition of
the scenario. Let's say that instead of 10 minutes
for the operator to throttle HPI, it is 9 minutes. It
won't be to that fine level of detail, but that is the
kind of idea.
Conceptually, you could have, of course,
different bands, and we are trying to accommodate
those variations through a discrete number of
subscenarios. And a consequence of that is that we
get basically a stress calculation for the
deterministic pressure, temperature, and of course the
heat transfer coefficient, and --
CHAIRMAN APOSTOLAKIS: And so for the same
thing --
MR. SIU: Let's say P-1.
CHAIRMAN APOSTOLAKIS: -- where you take
scenarios 1 and 3, and that's one bin?
MR. SIU: This is one bin, that's right.
CHAIRMAN APOSTOLAKIS: According to the
previous conceptual model, you would run the thermal
hydraulic analysis, right?
MR. SIU: In the conceptual model --
CHAIRMAN APOSTOLAKIS: And you would have
an uncertainty around P and T. Now, instead of doing
that, you are running three cases, right; is that what
this means?
MR. SIU: Yes. Don't take the three
literally, but it is a small number.
CHAIRMAN APOSTOLAKIS: And what is
different from the first to the second?
MR. SIU: Well, the first one just simply
said in general I could run separate -- I could do
this expansion if you will for one, two, three, four,
however many. So we have to bin down, and the binning
is a major modeling step.
CHAIRMAN APOSTOLAKIS: Right. But in the
thermal hydraulic analysis, how do you decide to have
a number of -- what is different between these three
runs in the same bin?
MR. SIU: Consider perhaps that this was
the action at 8 minutes, and this is 10 minutes, and
this is 12 minutes. It could be.
CHAIRMAN APOSTOLAKIS: Okay.
MR. SIU: Now, we have a method for
identifying what are the important variables to look
at, and we are trying out methods to identify the
subscenario.
CHAIRMAN APOSTOLAKIS: So what you said
earlier was that, yes, this uncertainty and the
boundary conditions would be handled, but the
uncertainty in the T/H analysis itself, at this point
at least you are not handling it?
MR. SIU: That's right.
CHAIRMAN APOSTOLAKIS: Okay.
DR. POWERS: So in other words, if I'm
agitated over the quality of some heat transfer
correlation that is embedded in RELAP?
CHAIRMAN APOSTOLAKIS: You will remain
agitated.
DR. POWERS: Does that mean that the
security of the free world remains under threat?
MR. SIU: We are, of course, not doing
this entirely arbitrarily. We have reasons, and that
is explained in a fairly lengthy report why we are
concentrating on certain issues and not on others.
And in the case of the PTS analysis, part
of the point is that the time constant associated
with the reactor pressure vessel, the wall, the
thermal response to a transient, is relatively long.
And that means that some of the details
that you might worry about for other situations may or
may not have a great effect on the through wall crack
frequency.
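A rough back-of-the-envelope check of that "relatively long" wall time constant, using assumed representative numbers -- roughly 1.2e-5 m^2/s for the thermal diffusivity of vessel steel and a wall on the order of 0.2 m thick, neither taken from the project.

# Rough conduction time scale for the RPV wall (assumed, representative values).
alpha = 1.2e-5   # thermal diffusivity of vessel steel, m^2/s (approximate)
L = 0.2          # wall thickness, m (approximate)

tau = L**2 / alpha           # characteristic diffusion time, seconds
print(f"tau ~ {tau:.0f} s ~ {tau/60:.0f} minutes")

On those assumed numbers the conduction time scale comes out to roughly an hour, long compared with many fine-scale fluid fluctuations.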
CHAIRMAN APOSTOLAKIS: And I don't think
it is part of your charge to protect the free world is
it?
MR. SIU: That wasn't my stated objective.
DR. POWERS: I guess maybe you need to
point me to the part in this lengthy document where
that is stated, because it seems to me that taking a
thermal response time of the wall to decide whether
I work with heat transfer correlations or not is
precisely the wrong thing to do.
MR. SIU: Okay.
DR. KRESS: If you are concluding that
this heat transfer coefficient that Dana might be
agitated about was very important to your final
answer, you would include it in these variations, in
that middle box, perhaps? That might be the thing
that you are changing?
MR. SIU: You certainly could. You could.
You know, at this point, I guess we -- and this is
part of where we are getting feedback from the
committee, of course.
We have identified certain things that we
think are important and that we do need to address,
and if the committee gives us a feedback that we have
not considered some important things, that would be
important for us to know.
CHAIRMAN APOSTOLAKIS: Yes, because the
number of calculations will multiply tremendously if
you are not careful here. So instead of the three
subscenarios, you compute an extra 10 to describe
these uncertainties.
DR. POWERS: George, let me ask you this
question. If I set out and do some sort of a Monte
Carlo approach on this thing, which -- and in which
some respects they may be doing here, how many samples
do I have to take in order to get an understanding of
what the uncertainty is?
CHAIRMAN APOSTOLAKIS: If you do a
traditional -- you know, a straight sampling, I think
it would be into the thousands. But they would
probably do some Latin hypercube sampling to determine
the number of runs.
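A minimal sketch of Latin hypercube sampling for a few uncertain inputs, assuming hypothetical distributions for an operator throttling time, a break area, and an injection temperature; each input's range is split into n equal-probability strata and sampled once per stratum, which is why run counts like the 70 or 80 mentioned below can be workable.

import numpy as np
from scipy import stats

def latin_hypercube(n_samples, n_vars, rng):
    """Return an (n_samples x n_vars) array of stratified uniform(0,1) samples."""
    u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_vars):
        u[:, j] = rng.permutation(u[:, j])   # decouple the strata across variables
    return u

rng = np.random.default_rng(2)
n = 70                                       # on the order cited for the waste studies
u = latin_hypercube(n, 3, rng)

# Map the stratified uniforms through hypothetical input distributions.
throttle_time = stats.norm.ppf(u[:, 0], loc=10.0, scale=2.0)      # minutes
break_area    = stats.lognorm.ppf(u[:, 1], s=0.5, scale=2.0e-3)   # m^2
hpi_temp      = stats.uniform.ppf(u[:, 2], loc=40.0, scale=20.0)  # deg F

print(throttle_time[:5], break_area[:5], hpi_temp[:5], sep="\n")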
DR. POWERS: Well, even if I go to such a
stretch of the imagination as using limited Latin
hypercube sampling --
CHAIRMAN APOSTOLAKIS: I think in the
waste business where they have monster codes, the
number of runs as I recall is not very high, maybe 70
or 80.
DR. KRESS: Yes, that is what I recall.
CHAIRMAN APOSTOLAKIS: Which is not really
too large when considering the goals that you are
using.
DR. KRESS: I think you can get by with
that few. I think Dana's point is going to be how can
we trust this particular uncertainty, which looks like
maybe 5 or 6 cases, when you really need about 70 to
do it right.
MR. SIU: Well, actually, again, we were
expanding on one particular thermal hydraulic bin.
There are many thermal hydraulic bins. And the
actual number of runs -- take a wild guess -- in the
end they might be on the order of a hundred.
DR. KRESS: So you may be covering enough
there to --
DR. POWERS: Excuse me, but if I just drew
a circle around this and said that everything that
goes in here is basically a Monte Carlo analysis --
and it's not, but let's say that it is. And I say I
would like to know this uncertainty.
I would like to know that I sampled 95
percent of the possible range of outcomes with a 95
percent confidence. That is not an implausible kind
of expectation, and I think you are up around 90
calculations.
And the fact is that I could do that
calculation. Can I come back after you are all over
and answer that question? Let's see. At what
confidence level did you sample what fraction of the
possible response space here.
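One standard way to answer that kind of coverage-and-confidence question is Wilks' nonparametric tolerance-limit formula; a minimal sketch, not necessarily what the project would adopt, which recovers the familiar 59-run one-sided and 93-run two-sided counts for 95 percent coverage at 95 percent confidence -- roughly the "around 90" figure mentioned here.

def wilks_one_sided(coverage, confidence):
    """Smallest n such that the sample maximum bounds `coverage` of the
    population with the given confidence (first-order Wilks)."""
    n = 1
    while 1.0 - coverage**n < confidence:
        n += 1
    return n

def wilks_two_sided(coverage, confidence):
    """Smallest n such that the sample minimum and maximum bracket `coverage`
    of the population with the given confidence."""
    n = 2
    while 1.0 - coverage**n - n * (1.0 - coverage) * coverage**(n - 1) < confidence:
        n += 1
    return n

print(wilks_one_sided(0.95, 0.95))   # 59
print(wilks_two_sided(0.95, 0.95))   # 93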
MR. SIU: I guess we haven't been thinking
along those lines, partly because we weren't sure how
to deal with again this issue of the integrated model
uncertainty.
And to work the -- to overwork perhaps,
and maybe that is an unfair term, but to work the
parameters side too hard given that you have got this
other part that you haven't quantified -- I guess we
just simply weren't thinking in those terms.
DR. POWERS: Well, I think that is a
question that I would expect this committee to come
back and ask you, okay, you have a response space so
big.
How much of it did you sample, and at what
confidence level? Tom will ask you about the
confidence level, and Bill will ask you about the
fraction of the space issues.
MR. MAYFIELD: If I could, because we have
had -- as we were first getting into this project,
actually several years ago, some discussions about
what is the level of rigor, and is it practical to put
RELAP, or the consolidated code into a Monte Carlo
scheme.
What level of resource are we going to
invest in it. That question started being outweighed
by plant to plant variability. We are doing four. So
I think the qualitative opinion of those others that
we are talking about is at some point -- that at some
point the level of rigor in any individual transient
analysis, or any individual plant analysis, is going
to be swamped, or that the level of uncertainty in
those analyses is going to be swamped by the plant-to-
plant variability.
And we were starting to struggle with
counting angels on heads of pins for one plant, and
then losing that sense --
DR. POWERS: You came to that conclusion
for some reason, and I guess it really surprised me,
because it is not the intuition that I would come to.
Can you explain?
Maybe not here, but at some point can you
explain why you would think that the plant-to-plant
variability would be so large compared to the
phenomenological uncertainties?
MR. MAYFIELD: We can, and today is
probably not the best time, but in general, if you
just look back at the old IPTS studies, Oconee is
probably not a good example, because that is the first
one that they did, and there were a lot of assumptions
made.
But if you just look at the difference in
the calculated probability of failure between Robinson
and Calvert, and that's a CE versus a Westinghouse
design. They are about two orders of magnitude apart
if I remember my numbers correctly, and yet for
similar levels of embrittlement.
So we were struggling with why, what is
the big deal between them, and it got down to specific
sequences and what drives it. In the B&W plants, you
find that steam generator tube failures is the
dominating sequence -- or, I'm sorry, the main steam line
break is the dominating sequence.
Westinghouse tends to be small break LOCA.
I think that is the dominating sequence for CE also.
So it is that kind of stuff we felt like was going to
swamp uncertainties in the specific calculations. But
this was a judgment, as opposed to based on hard
calculation.
CHAIRMAN APOSTOLAKIS: But I am not sure
that you should be trying to develop a methodology for
plant-to-plant uncertainty. I mean, you are
developing it for a particular type.
MR. SIU: That's correct.
CHAIRMAN APOSTOLAKIS: And then if you do
it for several plants. So the uncertainty from plant-
to-plant really shouldn't play much of a role here.
MR. MAYFIELD: Well, until we go back to
Nathan's first chart, where we were trying to
establish --
CHAIRMAN APOSTOLAKIS: Yes, for the
criteria, but not here.
MR. MAYFIELD: So the notion was -- yes,
not here. The notion was what is the level of cut-off
in rigor for a specific plant analysis.
CHAIRMAN APOSTOLAKIS: That's correct. I
understand that.
MR. MAYFIELD: And so it was a judgment
call as to what level we had to go to.
CHAIRMAN APOSTOLAKIS: But what I am
getting out of all of this discussion -- and I realize
that this is still a work in progress, but eventually
it would be useful to try to see whether you can use
a limited Latin hypercube sampling scheme to
demonstrate that you have picked the whole range of
values.
That's essentially what it does. And also
it limits significantly the number of runs that you
have to make.
DR. POWERS: I will argue that the way
that George did this -- the number of runs with a
straightforward Monte Carlo is not larger.
CHAIRMAN APOSTOLAKIS: Well, all of the
studies seem to show that there is an orders of
magnitude --
DR. POWERS: Well, having lived right down
the hall from those who developed the
limited Latin hypercube sampling for a lot of the
reactor accident codes, I am fairly confident in my
position.
MR. MAYFIELD: I think Ali Mosleh might
have a few comments.
MR. MOSLEH: We started with that as an
approach to take, where it would remove some of the
uncertainties that would inevitably be encountered in
the process of reduction, and in the process that
Nathan showed earlier, we have to go through binning,
discretizing the continuous universe, and that
introduces uncertainty.
We looked at it as a potential problem with
reducing the problem into smaller pieces, but at the
same time the complexity of running a full Monte
Carlo, even with latin hypercube, in a fully
integrated model, going from the PRA oriented model
and all the way to the PFM, was just in terms of size
and resources, and capabilities, was just too much to
handle in the scope of the analysis that we were
doing.
CHAIRMAN APOSTOLAKIS: Well, I would bring
again the work that has been done in the performance
assessment of high level waste repositories, which
cannot be simpler than what you guys are doing now.
It is really huge.
So maybe what you can do is pick up some
of their reports and see how they handle that, because
they certainly have had the same problem.
MR. SIU: We will take a look at that, I
think.
CHAIRMAN APOSTOLAKIS: That's all.
MR. SIU: What I am showing here on this
diagram, just drilling down one lower level of detail
than that three box diagram that you had on the
previous figure, just to show you again the different
analysis tracks; basically the PRA analysis track, and
the thermal hydraulics analysis track, and
probabilistic fracture mechanics analysis track.
Part of the point of this diagram is to
point out that as the way the project is really being
done, as opposed to how you might conceptualize it,
these are indeed being done in parallel.
Some of the thermal hydraulics analysis
is done before we really had significant interactions
with the event sequence analysis. So there are some
runs, for example, that we are using in our analysis,
and some others are just indications of what might be
interesting, but aren't really folded into the final
analysis.
And similarly there are currently in the
results that you are going to be seeing, there isn't
full feedback yet from thermal hydraulics into, for
example, the PRA success criteria that we have used.
We have made some assumptions based on our
understanding of the progression of the accident, and
that understanding will be improved after we explore
the detailed results of the thermal hydraulic
calculations.
So there is a lot of interactions here
that are taking place. Of course, the results of the
PRA analysis will be eventually the frequencies of the
various bins identified, and that gets fed into the
probabilistic fracture mechanics analysis when we
quantify through wall crack frequency.
Similarly, the thermal hydraulics analysis
develops the subscenario histories that get fed
into the through wall crack frequency. One of the other
things that I wanted to point out, some of the
discussion that we had at the subcommittee meeting was
really on this issue here, what are the potentially
uncertainty important scenarios.
How do we justify narrowing down the
problem to a limited set of issues, and so that was
the point of that discussion. Okay. Where are we
now.
DR. SHACK: And just coming back to that,
I mean, that is where you sort of addressed Dana's
question of how important for example a heat transfer
coefficient might be.
MR. SIU: We really did look at that
particular one. Now, that was the specific issue of
the heat transfer coefficient in the downcomer, and
explored, if you will, in a sensitivity fashion; the
variance in the results is not very great compared
to the variance that you would get from other sorts of
issues.
Now, whether there are other concerns that
were not addressed, we obviously did not do an
exhaustive list, and it was based on the high level
model of what is important and what isn't important,
and again we would welcome feedback on that.
Where are we now. We have developed an
aleatory model and that's what you saw. That is the
event sequence model, the T/H subscenarios for
different bins; and then there is an aleatory treatment
of the KIc term in this probabilistic fracture
mechanics analysis for fracture toughness.
So at least conceptually we have the
pieces, and we know how they are going to fit
together. We have categorized the different model
parameters, both in the white paper that the committee
saw several months ago, where we categorized -- at least in
a preliminary fashion -- the probabilistic fracture
mechanics analysis parameters.
And that has been revised a little bit,
and Mark Kirk talked about that at the subcommittee,
but we have also categorized the thermal hydraulic
parameters, and as was pointed out at the subcommittee
meeting, the PRA analysis is conventional.
We are treating the uncertainties in the
parameters as being epistemic, and that is no big
surprise.
In the PRA event sequence analysis, we do
have draft distributions for Oconee. Again, we have
received a lot of comments on them. We have our own
comments as we reviewed the results in detail, but we
will -- and we expect to revise those distributions as
part of the iteration process.
Nevertheless, we thought it would be
useful to bring it in front of the committee to give
you an indication of what are the things that seemed
to be important, and what sorts of uncertainties do we
have in the results of the calculations to date.
We have in the thermal hydraulic
analysis, as I indicated in a previous slide, we have
identified classes of scenarios where the boundary
condition uncertainties appear to dominate the model
structure uncertainties.
And I am talking about the model as an
assemblage, rather than individual submodels, because
the submodels, where they affect the boundary
conditions, we are treating through the boundary
conditions.
CHAIRMAN APOSTOLAKIS: A boundary
condition means the time of operator action?
MR. SIU: For example, the size of a hole,
discharged through the hole, and that sort of thing.
Things are basically --
CHAIRMAN APOSTOLAKIS: And these would be
handled as epistemic variables?
MR. SIU: Actually, these are aleatory.
Again, you think of the variation in the operator
actions. This is a level below which we are modeling.
So you are saying that -- you see, the PRA defines
success and failure in very global terms.
Let's say that success is throttling
before 10 minutes. Well, there are variants of
success, but there are also variants of failure. If
I don't throttle in 10 minutes, or if I throttle in 15
minutes, what is the difference.
CHAIRMAN APOSTOLAKIS: And how about the
size of the hole?
MR. SIU: The size of the hole also is --
I mean, we have got a big category that is called
small LOCA, and that accommodates a wide variety of
break sizes and locations. So again there is a
variation there that is all lumped into that category.
CHAIRMAN APOSTOLAKIS: So you think that
is an aleatory issue?
MR. SIU: That is an aleatory issue. It
is different than saying if I have a particular sized
hole, would I know about it. Then we have identified
the potentially important parameters, and that was a
table which we will clean up, and which the
subcommittee has seen in the report.
And we need to clarify a few things there,
but again we feel comfortable, and at least as a first
shot, we know which parameters we need to focus on,
and we are developing a process for quantifying those
subscenario probabilities.
And that question came up in the
subcommittee as well. Clearly, we are not taking 5th
percentiles of variables and combining them and saying
that is a 5th percentile of the outcome.
So basically we are looking at a DPD or
discrete probability distribution kind of approach to
identifying subscenarios. So it would be a
discretized approach.
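A minimal sketch of the discrete probability distribution idea for subscenario probabilities, using a hypothetical lognormal distribution for the time to throttle HPI: the continuous distribution is collapsed to a few representative thermal hydraulic subscenarios, each carrying the probability mass of its interval. All numbers below are illustrative.

import numpy as np
from scipy import stats

# Hypothetical continuous model: time to throttle HPI, lognormal, median 10 min.
throttle_time = stats.lognorm(s=0.4, scale=10.0)

# Collapse to a discrete probability distribution (DPD): one representative
# thermal hydraulic subscenario per interval, carrying that interval's mass.
edges = [0.0, 9.0, 11.0, 15.0, np.inf]             # minutes (illustrative cut points)
representatives = [8.0, 10.0, 12.0, 20.0]          # minutes (one TH run per bin)

for lo, hi, rep in zip(edges[:-1], edges[1:], representatives):
    mass = throttle_time.cdf(hi) - throttle_time.cdf(lo)
    print(f"subscenario at {rep:>4.0f} min  P = {mass:.3f}")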
CHAIRMAN APOSTOLAKIS: And that is what
those guys did on the performance assessment and it
may be useful to you.
MR. SIU: Yes.
CHAIRMAN APOSTOLAKIS: And to see what
they did.
MR. SIU: Okay.
DR. KRESS: The important parameters
identification, was that a PIRT process?
MR. SIU: Marilyn, who did the work,
started with PIRT, and looked at the approach, but
basically had to extend it. And frankly through the
use of modeling arguments, physical modeling
arguments, concluded that a very limited set of issues
was important.
Again, a review of the committee would be
helpful to say whether those arguments are convincing.
We will demonstrate this process as part of the Oconee
analysis, and obviously we intend to use this for the
other plants as well.
The probabilistic fracture mechanics. We
do have distributions for most of the model
parameters. For example, we have distributions for
the flaw characteristics, and I think the committee
was presented with that material, or was it the
subcommittee. I don't remember.
We have distributions for fluence, for
chemistry, the copper content and nickel content, and
phosphorous. The current work is focusing on treating
uncertainties in fracture toughness, and that is the
KIc, and then the RT-NDT, or actually the radiation --
DR. POWERS: I have examined a document
that I cannot recall exactly, discussing the need for
continued research in the embrittlement of reactor
vessels that has the phrase in it that the
correlations that have been developed are only -- I
say I believe, but it goes something like this.
Semi-empirical in nature and only include
the effects of copper, nickel, product form, and
fluence. It does not go on and tell me what else
ought to be in there. But it looks like a lot to me.
I mean, nickel, copper, product form, and
fluence, and it was a little hard for me to come up
with what else there ought to be. But I am not an
expert in that fashion.
My point is that that seemed to suggest
that this was an inadequate understanding here, that
there was something missing, something better ought to
be available.
Does that mean that we have something here,
some imponderable unknown, that just bars
progress here or something?
MR. MAYFIELD: The model that is going to
be used in these calculations is the latest thing that
we have put together, and that I think we have briefed
the committee on, but I am not sure.
It is based on a statistical analysis of
the existing embrittlement data, and coupled with a
fair bit of work from Professor Odette, and some of
the other radiation damage mechanists that have been
looking at this.
The work does go beyond just sort of the
traditional product form, copper, nickel, composition.
It has looked at factors that pop up, such as long
term thermal embrittlement. So there is a time at
temperature factor that gets rolled in.
We have been looking at what factors show
up in the statistical analysis, and do they have a
physical basis. Conversely, is there something from
the physical metallurgy that should be in the data,
and we have gone looking for that.
And in some cases there has been some
extensive dialogue between the mechanists and the
statisticians. We think that the model that we have
today embraces the physical understanding of
embrittlement, down at a fairly basic level.
And it embraces that, as well as
statistical trends in the data, and so that's as good
as I get, I guess is the point.
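[Illustrative sketch: the new model is not spelled out in this discussion, so for orientation only, a minimal Python version of the older Regulatory Guide 1.99, Revision 2 style of shift correlation -- a chemistry factor (a tabulated function of copper, nickel, and product form) multiplied by a fluence factor. The chemistry factor and fluence below are placeholders, not tabulated values.]

import math

def delta_rt_ndt(chemistry_factor_degF, fluence_n_per_cm2):
    # Mean RT-NDT shift (deg F), Reg. Guide 1.99 Rev. 2 style:
    # shift = CF * f**(0.28 - 0.10*log10(f)), with f in units of 1e19 n/cm^2.
    # CF is tabulated against copper and nickel content, separately for
    # welds and base metal (product form).
    f = fluence_n_per_cm2 / 1.0e19
    fluence_factor = f ** (0.28 - 0.10 * math.log10(f))
    return chemistry_factor_degF * fluence_factor

# Illustrative numbers only: CF of 150 deg F, inner-surface fluence of
# 3e19 n/cm^2 (E > 1 MeV).
print(round(delta_rt_ndt(150.0, 3.0e19), 1), "deg F shift")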
DR. POWERS: As good as you can get now,
or as good as can ever be gotten?
MR. MAYFIELD: Well, that's why we
continue to work on this. We are not convinced that
we have the answer. However, today, and at the level
of fluence that the vessels are expected to see
through 60 years, we think we have a model that
captures those trends.
DR. POWERS: The phrase that I am
imperfectly reproducing here has this 'only' term in
there, as though there was some key factor, a very
important factor, missing. It didn't say what it was,
unfortunately. It just said we only have this stuff.
Now, you have suggested as one the time
and temperature factor there, but is there some great
imponderable that just constitutes a barrier that we
have to put in some fudge factor here to say, well, it
can be no bigger effect than this?
MR. MAYFIELD: I don't think so, but Mark
Kirk has come up and perhaps he has --
MR. KIRK: I think, of course, that future
knowledge is never perfect, but I think the answer is
that we have beaten that one pretty well. We have
looked at a lot of model -- at radiation experiments
on model materials that are designed to bring out
certain forms of radiation damage.
And those data have been considered in the
development of the model, and I think that helps to
screen out some of the imponderables that one might
otherwise be worried about.
But as Mike said, the form of the
correlation that we are now using, much of it has a
very firm physical basis, and we feel that it is
important to combine both the physical and the
statistical understandings.
And not so much for fitting the data,
because of course you can do that without any physical
understanding whatsoever, but to provide -- and I
don't think this is a word in Websters. Well, I won't
use it then. But the ability to extrapolate, which is
of course what we are always doing here.
But we could certainly go into this in
more detail like Mike said. I think or I know that we
have briefed at least the materials subcommittee on
the embrittlement correlation.
That is something that we could do in the
future. Certainly this is an area, along with what
was brought up earlier about through-wall attenuation,
in which there has been a lot of interest, both within
the NRC, and the industry, and the international
nuclear community, and continues to be -- and in fact
at the ASTM E-1002 meeting last week on radiation
damage mechanisms, there was discussion of this issue
yet again.
And I spoke earlier this week with Stan
Rosinski, who is a program manager at EPRI, and he
indicated that he was going to initiate a small
project under their materials reliability project,
using funding from their materials reliability project,
to do in the short term a review of what technical
basis exists for through-wall attenuation functions,
to provide the NRC some assistance in that regard.
So that information will be coming in, and
if it comes in during an appropriate time frame, and
I think it will, it would be considered. And just to
also mention so that there is not the perception that
the NRC is working in a vacuum on this.
We are currently in the process of
developing a technical basis document for the
embrittlement correlation. We have got a deadline on
that later this year to have a draft NUREG.
Equally again, EPRI is also working on a
tech basis document concerning embrittlement
correlations, and that is due out in February, and
EPRI has agreed to provide that to the NRC so we can
have the value of that information as well.
DR. POWERS: So I get the impression that
what you are telling me is that I should not worry
about this only. That you have put in here enough
description of this embrittlement process for the
regulatory decisions that you are looking to make
here?
MR. MAYFIELD: I believe that is a true
statement.
MR. SIU: And from the standpoint of the
uncertainty analysis again, those things that are not
specifically in the models are treated as contributing
towards aleatory uncertainties. This was basically
the reason why we decided that the KIc term needed
be treated as an aleatory issue.
DR. SEARLE: Don't worry any more. Just
get nervous.
DR. POWERS: Well, I will quit the
subterfuge here. It seems to me that further research
programs on vessel embrittlement are no longer needed
for making regulatory decisions.
MR. SIU: I will show you -- these are
pretty hot off the press -- draft PRA results overview
for Oconee-1, and what is new about this is not only
the scenario frequencies, which we came up with a few
weeks ago, and again had some review with Duke Energy
to talk about specifics relative to how we
characterize the plant and operations.
But also the characterization of
importance with respect to probabilistic fracture
mechanics. We have been conducting a scoping study
for Oconee just to get an idea of where the numbers
are coming out.
So we have a current version of the FAVOR
code that is being used to propagate the thermal
hydraulic traces, the PRA event frequencies,
through to the end to develop some notion of small
crack frequency.
We are not yet confident enough about the
probabilistic fracture mechanics material to give you
a conditional probability of through-wall crack given
a scenario, because again this is really new stuff.
But at least I think we can indicate that
these were the kinds of scenarios that were turning
out to contribute to the results, and I am going to
give you a caveat about these descriptions here in a
second.
So, please, again don't take these down
literally as they are described. But we have
basically -- if you focus in on these numbers here,
these refer to specific thermal hydraulic runs, and
certain assumptions are made, and the analysis is
done.
Frequencies are assigned to these runs,
and they are run through the probabilistic fracture
mechanics. These are the kinds of scenarios that
turned out to be relatively important, and the one
that I put in gray right now appears to be the most
important one.
Again, all of these things are subject to
change as we dig into these results and identify what
is really driving them, and whether we have got it
right or not.
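[Illustrative sketch: a minimal Python example of the arithmetic by which binned scenario frequencies combine with conditional failure probabilities from the fracture mechanics code. The run labels, frequencies, and conditional probabilities below are made-up placeholders, not results of the Oconee analysis.]

# Each entry: (TH run id, mean frequency per reactor-year from the PRA,
# conditional probability of a through-wall crack given that run from the
# probabilistic fracture mechanics).  All numbers are illustrative.
runs = [
    ("run 25", 2.0e-4, 3.0e-5),
    ("run 27", 5.0e-5, 1.0e-6),
    ("run 3",  1.0e-3, 2.0e-7),
]

total = 0.0
for name, freq, p_twc_given_run in runs:
    contribution = freq * p_twc_given_run
    total += contribution
    print(f"{name}: contribution {contribution:.2e} per reactor-year")

print(f"through-wall crack frequency (point estimate): {total:.2e} per reactor-year")
# A full treatment propagates the distributions on the frequencies and on
# the fracture mechanics inputs, not just these point values.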
CHAIRMAN APOSTOLAKIS: If you go to the
conceptual model, can you tell us at which point these
frequencies are calculated?
MR. SIU: Sure. This is the output of the
event tree analysis basically. I'm sorry. Let me go
to the framework, as I think that would be better.
CHAIRMAN APOSTOLAKIS: Yes, whichever.
MR. SIU: What you are seeing is that we
have binned the scenarios into about 40 thermal
hydraulic runs. We didn't use all 40 as it turned
out, but that was the universe which we were
considering.
CHAIRMAN APOSTOLAKIS: For one scenario?
MR. SIU: No, no. All the possible PTS
thermal hydraulic scenarios. We ran 40 cases of
RELAP, and we are binning the thousands of sequences
that we got into one of those 40 cases.
CHAIRMAN APOSTOLAKIS: Okay.
MR. SIU: So when you see a specific
number, like Run Number 3, that is a specific one run
out of the set of, let's say, 40 roughly. And we have
got the probability distributions about those
frequencies.
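[Illustrative sketch: a minimal Python example of the bookkeeping step just described -- thousands of PRA sequences, each with a frequency, are mapped by binning rules onto a small set of thermal hydraulic runs, and each run's frequency is the sum over its bin. The sequences and the binning rule are stand-ins, not the project's actual rules.]

from collections import defaultdict

# Illustrative PRA sequences: (sequence id, frequency per reactor-year,
# attributes the binning rules would examine).
sequences = [
    ("seq-0001", 1.2e-4, {"initiator": "MSLB",  "hpi_throttled": False}),
    ("seq-0002", 3.4e-5, {"initiator": "MSLB",  "hpi_throttled": True}),
    ("seq-0003", 8.0e-6, {"initiator": "SLOCA", "hpi_throttled": False}),
]

def assign_run(attrs):
    # Stand-in for the rule-based binning scheme that maps a sequence to one
    # of the roughly 40 RELAP runs; the real rules look at the expected
    # pressure and temperature behavior, not just these two attributes.
    if attrs["initiator"] == "MSLB" and not attrs["hpi_throttled"]:
        return 25
    if attrs["initiator"] == "MSLB":
        return 27
    return 3

bin_frequency = defaultdict(float)
for _, freq, attrs in sequences:
    bin_frequency[assign_run(attrs)] += freq    # aggregate frequency per TH run

for run, freq in sorted(bin_frequency.items()):
    print(f"TH run {run}: binned frequency {freq:.2e} per reactor-year")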
We have not done this part here. The
fractionation into subscenarios, which we talked
about -- we have an approach, but we have not applied
it yet to Oconee.
So we are taking a particular bin, if you will,
and all of these are collapsed into one. There is
only one trace associated with that bin, and that
trace gets fed into the fracture mechanics analysis
with this distribution, obviously the convolution of
the --
CHAIRMAN APOSTOLAKIS: So the frequencies
on slide 11 are between the first two boxes?
MR. SIU: That's correct. After you have
binned --
CHAIRMAN APOSTOLAKIS: Over there?
MR. SIU: That's right.
CHAIRMAN APOSTOLAKIS: But what you show
as description is one of the scenarios that goes into
that.
MR. SIU: That is the scenario that
characterized that particular bin.
CHAIRMAN APOSTOLAKIS: But that bin may
include other scenarios as well.
MR. SIU: Exactly.
CHAIRMAN APOSTOLAKIS: And the one you
showed here is what, is a representative, or just one
of them?
MR. SIU: The one I showed on this chart
here?
CHAIRMAN APOSTOLAKIS: On 11.
MR. SIU: Okay. Let me get back to Slide
11, because this is worth talking about.
CHAIRMAN APOSTOLAKIS: So when you say the
one that you have shaded there, the large MSLB is
medium?
MR. SIU: Yes, lots and lots of scenarios
feed into this bin. The total number of sequences was
around 14,500 I believe.
CHAIRMAN APOSTOLAKIS: So lots of them are
going into 25?
MR. SIU: That's correct.
CHAIRMAN APOSTOLAKIS: And why are you
showing the large main steam line break?
MR. SIU: This is the description of this
particular run. So this is the -- to run RELAP, of
course, you have to provide the initial conditions,
the boundary conditions, and certain things that occur
over time.
This is a description in very loose terms
of what that run did, the T/H run.
CHAIRMAN APOSTOLAKIS: But there are other
scenarios that lead into --
MR. SIU: That's right. We have binned many
scenarios into this, and some of them may not follow
this description very closely.
CHAIRMAN APOSTOLAKIS: Okay. So the
reason why you show this one is because it is kind of
representative of that?
MR. SIU: In a sense. I mean, if I gave
you a number, it wouldn't mean anything either. So I
have to give some idea of what kind of scenario this
run represents.
But, yes, there are lots and lots of
scenarios feeding into these, and we are examining --
now that we have got some sense of priorities here, to
see if this is really right.
Are we feeding the right stuff into this
bin, and do we need actually another run because the
contributions from this are so large, but they are not
really well represented by that run. That is the
question that we have to raise after we get a chance
to get some results.
CHAIRMAN APOSTOLAKIS: So this includes
now aleatory stuff and everything?
MR. SIU: Again, we do not have the
subscenario fractionation.
CHAIRMAN APOSTOLAKIS: We don't have it?
MR. SIU: We do not. This is simply the
PRA results.
CHAIRMAN APOSTOLAKIS: So does the
operator intervene here anywhere?
MR. SIU: Well, he fails to. For example,
he fails to throttle the HPI.
CHAIRMAN APOSTOLAKIS: So you are just
using a representative time for that?
MR. SIU: That's right.
CHAIRMAN APOSTOLAKIS: Which later on
would be refined.
MR. SIU: Which has to be refined based on
now a more careful look at that particular scenario.
CHAIRMAN APOSTOLAKIS: So up until this
point, you really don't have any model uncertainty do
you?
MR. SIU: That's correct.
CHAIRMAN APOSTOLAKIS: So this is a
traditional PRA.
MR. SIU: That's right.
CHAIRMAN APOSTOLAKIS: But the new thing
is that you have these bins that you are showing?
MR. SIU: Yes.
CHAIRMAN APOSTOLAKIS: Okay. Fine.
MR. SIU: Not to say what is new or
not, but simply this is how we are progressing through
the analysis.
CHAIRMAN APOSTOLAKIS: So this will go
into 25, but 25 has not been run yet?
MR. SIU: No, 25 has been run. That's why
-- look, 25 -- I can show you. In your viewgraph, on
the back of the viewgraphs, this is Run 25, the
thermal hydraulic trace. This -- and I don't know if
it is smoothed out or not, but it gets fed into FAVOR,
with a frequency and uncertainty about that frequency.
And based on the combined results of all
of those things, including the PRA frequencies, we
have some sense of priority, and this is what you are
seeing.
DR. POWERS: Now, when you formulate the
RELAP model for this number 25 calculation, which of
the myriad of conditions do you tell it about? Do you
tell it about the mean condition or the 95th
percentile condition, or the 5th percentile condition?
MR. SIU: I will give you a high level
description, but I think Dave -- well, Dave is here.
Can you answer to that?
MR. BASETTE: Let me see if I caught the
question correctly. This is David Basette. Of
course, RELAP gives you a median or a nominal best
estimate calculation; for a given set of initial and
boundary conditions, however you fix them, it gives
you a best estimate calculation.
DR. POWERS: He has described this gray
line as main steam line break, with full high pressure
injection. He has told us, however, that there are
scenarios within that bin that can deviate to some
amount.
He has provided a synoptic description of
a distribution for that bin that includes a mean, a
95th, and a 5th percentile. Now, you have to
formulate a run with RELAP.
You cannot put those distributions in.
You have to say it is this plant, and at this time
this operator does this successfully or
unsuccessfully. Which one of those did you tell RELAP
about?
MR. SIU: Let me respond to that, as I
think I can take that. This is a somewhat more
careful description of that particular scenario. So,
for example, you see high pressure injection 21
seconds into the transient based on the control logic
for HPI.
Now, there are variants on this. You
could say the operator doesn't throttle in 5 minutes.
The operator doesn't throttle in 10 minutes. The
operator doesn't throttle in 15 minutes. That's not
here.
This is literally what they have. So the
PRA at this point was not telling the thermal
hydraulics this is the variant that you need to look
at.
When we talk about subscenarios, which
again we have not applied to Oconee yet, we have to
start investigating those variants, and say what are
the possible variants that we want to model, and then
identify are there RELAP runs that will represent
those variants reasonably well, or do we need new
runs. We have not done that yet.
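[Illustrative sketch: a minimal Python example of how the subscenario variants just described (throttle at 5, 10, or 15 minutes, or never) might be enumerated with probabilities and mapped to existing or new thermal hydraulic runs. The probabilities, parent-bin frequency, and run assignments are placeholders.]

# Variants on a single binned scenario: when, if ever, the operator
# throttles HPI.  Probabilities are placeholders and must sum to 1.
variants = [
    ("throttle at  5 min", 0.50, "existing run 27"),
    ("throttle at 10 min", 0.30, "new run needed"),
    ("throttle at 15 min", 0.15, "new run needed"),
    ("never throttles",    0.05, "existing run 25"),
]
assert abs(sum(p for _, p, _ in variants) - 1.0) < 1e-9

scenario_frequency = 2.0e-4    # per reactor-year, frequency of the parent bin (illustrative)
for label, prob, mapping in variants:
    print(f"{label}: subscenario frequency {scenario_frequency * prob:.1e} /ry"
          f" -> {mapping}")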
DR. POWERS: For this particular case, is
the scenario you told about RELAP indicative of a
frequency equal to mean, the 95th, the 5th, or some
other one? Yes is not a suitable answer.
MR. SIU: No, there is no frequency
associated.
DR. POWERS: There is a frequency
associated with whatever calculation you told RELAP
about.
MR. SIU: We have not -- let me give you
an example. Let's talk main steam line break. This
is a large break, and we have defined large to be
greater than 8 inches here, because of the size, I
believe, of the TBBs.
We have not said that we have a frequency
for breaks in the range of 8 to 9 inches, from 9 to 10
inches, 10 to 11. Yeah, you could -- we would have to
give you -- and I don't know this off the top of my
head -- what was the size of the particular hole, and
what was the shape of the hole, and what discharge
coefficients are associated with that.
So I don't have that information there.
You could come up with, if you will, density functions
for these characteristics like the break size. We
haven't done that.
So I guess you could say, well, what is
the frequency that you have of a break between or
larger than 8 inches. Yes, we do have that. That is
the PRA frequency that we have used, and that is the
.0 -- well, I shouldn't give you a number off the top
of my head.
We do have that based on an empirical
dataset. I mean, we had one failure in 600 odd
reactor years.
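[Illustrative sketch: for an initiating event with about one failure in 600 reactor-years, a Bayesian update with a Jeffreys prior is one common way to turn sparse data into a frequency with uncertainty. A minimal Python version of that arithmetic is below; it is not a statement of how the staff derived its number.]

from scipy import stats

events = 1          # observed failures (illustrative)
exposure = 600.0    # reactor-years of experience (illustrative)

# Jeffreys prior for a Poisson rate gives a Gamma(events + 0.5, exposure) posterior.
posterior = stats.gamma(a=events + 0.5, scale=1.0 / exposure)

print(f"mean frequency : {posterior.mean():.2e} per reactor-year")
print(f"5th percentile : {posterior.ppf(0.05):.2e} per reactor-year")
print(f"95th percentile: {posterior.ppf(0.95):.2e} per reactor-year")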
MR. MAYFIELD: I think the answer is that
it doesn't necessarily represent any of the frequencies
here. It is a descriptor of a class of transients
that fits in the bin, and at this stage, I don't think
we can tell you that they have gone back to the PRA,
and we have taken up a bunch of things through some
rules that have been developed, and have taken a bunch
of transients that fit that set of rules, and put them
in this bin.
And he is telling you about the
distribution of frequency on those transients, and I
don't think we can tell you today, for the particular
RELAP run that was made, called Number 25, where that
fits in this frequency. It is my guess that it is
probably closer to the mean than any. But I don't
think we can pin that down.
DR. POWERS: I think that would be an
inadequate answer for me, to say, well, it is roughly
the mean, or maybe the appropriate answer is it is
none of these particular ones, but it's going to be
kind of representative in a sense that it will be
carefully explained.
MR. SIU: Yes. I think as we define
really what those subscenarios are -- I mean, right
now you have a cartoon. It says we will develop
subscenarios, but you have to develop those
subscenarios based on the underlying principles.
And the principles would be, for example,
what are the key variables, and what variations are
you going to consider, and what probability are you
going to assign to each of these variations.
And then I think at that point, I think we
can give you a more meaningful answer, because now by
definition when you have these discrete scenarios, you
have binned things. It is either in this bin, that
bin, or that bin. We haven't done that yet.
MR. MAYFIELD: Does that answer your
question?
DR. POWERS: Well, the underlying question
is we are going to have a bunch of thermal
hydraulics people, and they believe they have got these huge
uncertainties in their codes that need to be resolved.
And they are going to come in and say, oh,
this is giving me an unfair answer, and that the
thermal hydraulics don't make any difference, because
had you done this thing out here at either 95th or 5th
percentile, and I am not sure which one.
You would have seen it, and it would have
made all the difference in the world, this strange
coefficient in an equally strange empirical
correlation that does not include anything in it,
except maybe copper, nickel, or fluence.
And that is what I have to listen to on
why this is an unfair characterization of the
uncertainty of the thermal hydraulics.
MR. ELTAWILA: I have tried to resist
getting into this, but when it comes to thermal
hydraulic uncertainty, I think you raised that issue
several times. When we are dealing with single-phase
flow, which is the case for this particular
application, the uncertainty in the heat transfer
coefficient -- and we are going to give you a paper.
We have done specific studies which show
that it is not important. The main important parameter
would be the pressure and the temperature, and we have
confidence that the code can calculate these very
accurately, or reasonably accurately.
I think your question raises the right issue,
that the particular thermal hydraulic calculation that
we presented is representative of one particular
scenario, and will give you a mean answer for that
particular scenario.
We have not gone back yet to look at all
the other scenarios and the refinement process: is there
much variation if I change the break size, or I change
operator action? What will be the effect on the
pressure and the temperature? That is when we will
be able, at that point, to give you the 95 percent
and 5 percent uncertainty.
But the model uncertainty itself is not
going to be the driver in this case. In this case, it
is going to be the boundary condition as everybody
said here.
DR. SEARLE: I must express some curiosity
about the difference between runs 3 and 4 on page 11.
There is a factor of 2 in the flow area, and a factor
of two orders of magnitude in the mean probability.
MR. SIU: I was afraid that you were going
to ask that.
DR. SEARLE: Clearly, there has got to be
a snake in the grass somewhere.
MR. SIU: There is a reason for that, and
now it actually gets back to George's question, and
that's why I wanted to characterize these as not
literally what the PRA sequences are, but what the
thermal hydraulic run is.
Again, there was a binning choice to say
out of the myriad of sequences that we have, we have
some of them going to this one, and some of them going
to this one, and so on and so forth.
What you are seeing here in the PRA space,
we don't have a fine distinction between 2 inch LOCAs
and 2.8 inch LOCAs, and 1.4 inch LOCAs. We have
LOCAs, small LOCAs with a certain frequency.
What you are seeing here in this
particular scenario, and what this particular scenario
is really representing, is LOCAs where, even though the
HPI is on, the depressurization is sufficient that you
are going down to a lower pressure.
So again I don't want to take these labels
literally. This is the RELAP run that was done, and
we binned a bunch of stuff into that. Again, we are
going to reexamine that as we go further.
DR. SEARLE: My only comment is that the
way in which you differentiate between the runs is
probably woefully inadequate at this point if you are
going to really examine the differences in the
probabilities.
MR. SIU: We know, of course, exactly what
the input decks are, and we know what scenarios feed
into those. That's correct.
CHAIRMAN APOSTOLAKIS: So you said earlier
that the size of the break within the class of small
breaks is aleatory.
MR. SIU: That's right.
CHAIRMAN APOSTOLAKIS: So can you
elaborate a little bit on that? I mean, what kind of
distribution did you assume here, and so what fraction
of --
MR. SIU: Again, this is part of the
problem with presenting results at a summary level. The
scenarios that got fed into this particular bin are
LOCAs, with either the HPI throttled or the break was
large enough that the system depressurized quickly.
Most of the contributing ones I think were just small
break LOCAs, where HPI was throttled.
That doesn't match exactly the description
you have here. In the physical world, when you have
a break that is large enough, you do depressurize the
system.
CHAIRMAN APOSTOLAKIS: But then it is not
a small break anymore is it?
MR. SIU: Well, remember now that the
small break refers to the diameter for which we have
event statistics.
CHAIRMAN APOSTOLAKIS: Right.
MR. SIU: And small, I actually believe,
extends beyond 2.8.
CHAIRMAN APOSTOLAKIS: Well, how do you
decide? I mean, what is the aleatory probability that
I would have a 2 inch small break, or a 2.8?
MR. SIU: That has not been addressed.
This is the PRA sequences within which, and we still
have to fractionate, and when we fractionate, we will
actually find that maybe there is some bifurcation at
some critical value, and we can argue if we know that
critical value very well.
But the pressure is going to go along one
path, and the other one is going to drop rapidly.
CHAIRMAN APOSTOLAKIS: I don't understand
that. I mean, the frequencies you show there on the
table include the frequency of the initiating event.
MR. SIU: That's correct.
CHAIRMAN APOSTOLAKIS: So the initiating
event in one case is a 2 inch break, and --
MR. SIU: No.
CHAIRMAN APOSTOLAKIS: It's not?
MR. SIU: No. The initiating event is a
small break LOCA, which includes a whole range of
sizes. So that's why this is so ambiguous. You say,
oh, my goodness. How do I know really that a 2.8 inch
break is two orders of magnitude less likely than the
2 inch break, because everything else looks the same.
It is through the binning that we assign the
sequences to this particular thermal hydraulic
scenario.
CHAIRMAN APOSTOLAKIS: So for this
calculation, the frequency of the 2 inch and the 2.8
inch break is the same?
MR. SIU: Exactly. It is a small LOCA.
CHAIRMAN APOSTOLAKIS: It's a small LOCA.
MR. SIU: And this one we don't throttle,
and this one we do.
DR. SHACK: So what we are looking at is
the difference between pressurized and depressurized?
MR. SIU: That's right.
CHAIRMAN APOSTOLAKIS: But later on you
will have some fraction?
MR. SIU: Oh, yes. Again, that is one of
the important parameters obviously as you go through
this, because you have qualitatively different
behaviors.
CHAIRMAN APOSTOLAKIS: And that's why it
is --
MR. SIU: That's right. We have 15
minutes?
CHAIRMAN APOSTOLAKIS: We have 15 minutes,
yes. Now, at some point, at some subcommittee meeting
-- and I don't know if you did it last night, but I
really would like to follow one sequence from
beginning to end.
CHAIRMAN APOSTOLAKIS: Right. With all
the uncertainties, have you discretized how you did
it? Did it happen with some epistemic uncertainty
with you, Mike?
MR. MAYFIELD: Well, it is our intent --
and we were talking about it before the session
started, that probably in the May-June time frame, we
will be far enough along with our calculations that we
can come to -- I don't like coming in -- we felt like
we needed to do something.
CHAIRMAN APOSTOLAKIS: No, I am not
complaining.
MR. MAYFIELD: What we would like to do is
have gotten far enough through this so that we are not
giving you real time results; that we have had a
chance to look at it and make sure that it is holding
together. So it is probably in the May-June time
frame.
CHAIRMAN APOSTOLAKIS: But we will take
one sequence and beat it to death all the way?
MR. MAYFIELD: We will take one sequence
and walk you right through it. That's the intent.
MR. SIU: That's right.
CHAIRMAN APOSTOLAKIS: One question I had,
since I am beginning to understand this, but as you go
to your Slide Number 12, before you do that, could you
put up your --
MR. SIU: This is the backup slide --
CHAIRMAN APOSTOLAKIS: Which is the same
as this?
MR. SIU: That's correct.
CHAIRMAN APOSTOLAKIS: So that is the
scenario that you are calling Number 25?
MR. SIU: That's correct.
CHAIRMAN APOSTOLAKIS: And the Number 27
would be the one where you succeed --
MR. SIU: Well, whatever scenario. There
is a mapping of it.
CHAIRMAN APOSTOLAKIS: But you are listing
those two, 25 and 27, and runs as you call them.
MR. SIU: That's correct.
CHAIRMAN APOSTOLAKIS: And those two are
the ones in red, and the one --
MR. SIU: This would feed into 27,
correct.
CHAIRMAN APOSTOLAKIS: What happens with
all the other scenarios now? You are throwing them
out into different bins?
MR. SIU: We are throwing them in
different bins.
CHAIRMAN APOSTOLAKIS: And some of them
may not be steam line break bin?
MR. SIU: That's correct.
CHAIRMAN APOSTOLAKIS: Okay.
MR. SIU: And some that feed into the
steam line break bin may not be steam line breaks.
VICE CHAIRMAN BONACA: So you are looking
at the pressure temperature behavior, and the fluid
behavior, and --
MR. SIU: That's right. That's the
binning and the mapping between scenarios, and that's
where we discussed with the subcommittee some of the
subjective judgment is right now in going from all
these sequences to a somewhat more detailed
description, which we feel pretty comfortable with.
But then jumping from that to the limited
set of thermal hydraulic bins that we do have.
MR. MAYFIELD: And part of the work, yes,
is subjective, but to try and bring in a rule based
scheme, where it is not just tossing coins, but there
is actually some technical basis for the judgment.
VICE CHAIRMAN BONACA: So it is very plant
dependent?
MR. SIU: Yes.
VICE CHAIRMAN BONACA: And so I understand
much more than I did before.
CHAIRMAN APOSTOLAKIS: Okay. So what else
would you like to tell us?
MR. SIU: Okay. Let me just talk in
summary about the draft PRA results. We do obviously
have issues and we have talked about these. The
binning of the sequences, and the time frame for the
operator actions, which we now have the thermal
hydraulic runs to get a better sense of that.
And dependencies. This particular
scenario involves three operator actions or failures.
Failure to isolate the break, and failure to isolate
the flow, and failure to throttle HPI flow.
We need to make sure that we are handling
the dependencies not only for the dominant scenarios,
but obviously the scenarios that might have dropped
off the map because we didn't address those in detail.
CHAIRMAN APOSTOLAKIS: Where do the
operator --
MR. SIU: This is the ATHEANA team. This
is a subjective assessment process based on a
description of context. At the Duke Energy meeting,
we actually got very positive responses on our
descriptions of the context, and actually there was
some discussion about the numbers that were assigned.
But we didn't seem to be way off is my
notion of that. Again, these are things that we will
continue to refine. We had put intentionally
conservative numbers in many places just to make sure
that we didn't lose anything as part of this process,
and now we are reexamining what we have got.
Thermal hydraulics analysis. I think we
have talked about this already. This just illustrates
more of a process rather than results, because we
don't have results at this point on the uncertainty
part of the analysis.
We have identified the key sources of
uncertainty, and we talked about boundary conditions
and models, and we have classified scenarios, in a
simplistic fashion, according to whether they involve
single-phase flow or two-phase flow.
And for single-phase flows, we are going
to follow the approach that we have basically
described already. We are going to look at
representative boundary condition variations to define
subscenarios, and we are going to develop
distributions for the subscenario probabilities.
And then either identify an existing T/H
run to map to, or perform an additional run, and
that's just the approach that we envision at this
point.
CHAIRMAN APOSTOLAKIS: Are you going to
identify -- and not necessarily only here in the
thermal hydraulic analysis, but the overall analysis,
the important parameters or models that seem to drive
the risk?
MR. SIU: That's right. That's part of
the assessment process. It's not only what is the
number, but what is driving that number.
CHAIRMAN APOSTOLAKIS: And how are you
going to do that? I mean, you have parameters all
over the place.
MR. SIU: Yes. I imagine that there will
be some sense of decomposing the results, because of
course one of the results of a risk assessment is that
the dominant scenarios also typically dominate the
uncertainties.
That's just the way that the math works
out. You can have a very unlikely scenario that is
very uncertain, but it doesn't really contribute then
to the final result. So I think we will be able to
concentrate on a few scenarios, and that's the hope.
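[Illustrative sketch: a minimal Python example of the point that the dominant scenarios usually dominate the uncertainty as well -- with independent contributions, the variance of the total is close to the sum of the per-scenario variances, so a very uncertain but very unlikely scenario adds little. The lognormal parameters are illustrative.]

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative epistemic distributions on three scenario contributions (per
# reactor-year): one dominant scenario and two small ones, one of which is
# very uncertain (wide lognormal) but very unlikely.
scenarios = {
    "dominant":               rng.lognormal(np.log(1e-6), 0.8, n),
    "small":                  rng.lognormal(np.log(1e-8), 0.8, n),
    "unlikely but uncertain": rng.lognormal(np.log(1e-9), 2.5, n),
}

total = sum(scenarios.values())
print(f"total: mean {total.mean():.2e}, variance {total.var():.2e}")
for name, samples in scenarios.items():
    share = samples.var() / total.var()     # variance share (independent samples)
    print(f"{name:24s}: about {share:6.1%} of the variance")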
CHAIRMAN APOSTOLAKIS: But then you are
going to the P/H analysis and FAVOR, and --
MR. SIU: But again the point of what
Professor Almenas showed was that there is a rationale
for identifying what are the important parameters, and
so that would at least be our starting point for
talking about what seems to be driving this.
CHAIRMAN APOSTOLAKIS: My understanding is
that in the waste area they have been struggling
with this issue now for 2 or 3 years, and I have seen
a paper or two where they have proposed something to
Hizenberg, who used to be a member of the staff.
I am not saying that is the way to do it,
but since those guys have attempted, it would be
worthwhile looking.
MR. SIU: Thank you, yes.
CHAIRMAN APOSTOLAKIS: Actually, your
problem has a lot of similarities with that problem,
because it involves complex computer programs and
uncertainty propagation, and so on, and so you can
benefit a lot from what those guys have done.
MR. SIU: Yes.
CHAIRMAN APOSTOLAKIS: Of course, they
cannot use the traditional importance measures that we
use in Level 1 PRAs.
MR. SIU: Right.
CHAIRMAN APOSTOLAKIS: Because you have
computer programs with physical phenomena. That's why
it may be worthwhile to look at what they have.
MR. SIU: This is a Mark Kirk slide
obviously. It is 21st Century stuff. I am way
behind. But this is just an indication of --
MR. KIRK: What is wrong with this?
MR. SIU: Nothing is wrong. This is
great. Let's walk through it.
DR. POWERS: The thing that jumps out
immediately is that the embrittlement model only has
fluence, copper, nickel, and product form on it. All
this other stuff that you told me that you were going
to put into it apparently doesn't make this viewgraph
here.
MR. SIU: Well, my understanding of what
are the important parameters, yes, are here. And the
point is to show that these are the major uncertain
elements feeding into the shift model, which I
understand work is still ongoing as to the shift model
itself.
It has a blue band around it, and I don't
know if that is an indicator, but this is one place
where work is going on, and that's one of the issues
that I indicate, and where we are still doing things.
But the point is to show that there are
parameters feeding in, and there are epistemic
uncertainties associated with these parameters, and
they get fed into the process, to the resistance side
if you will of the stress strength equation.
And on this side, on the driving force,
you have uncertainties in the flaw density and flaw
size. Again, we have characterized these
distributions already.
Of course, you have the thermal hydraulic
input, pressure and temperature, and you have the
vessel dimensions that get fed into the stress
intensity factor calculation, and determine if the
applied stress is greater than the strength.
And this box here shows again the
recognition that because of the things that are not in
this model explicitly, you have chosen a model at a
certain level, and the strength is an aleatory issue,
and that gets fed into eventually an aleatory
description of the vessel response to the thermal
hydraulic scenario, which is the applied stress.
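[Illustrative sketch: a minimal Python example of the stress-versus-strength comparison being described -- for one flaw at one time point, sample the aleatory initiation toughness and ask how often the applied stress intensity factor exceeds it. The applied values and the Weibull parameters are placeholders, not FAVOR's.]

import numpy as np

rng = np.random.default_rng(1)

def prob_initiation(applied_K_MPa_sqrt_m, n_samples=200_000):
    # Fraction of sampled KIc values exceeded by the applied stress intensity
    # factor.  A three-parameter Weibull is used purely as an illustrative
    # shape for the aleatory scatter in KIc.
    k_min, scale, shape = 20.0, 60.0, 4.0           # placeholder Weibull parameters
    k_ic = k_min + scale * rng.weibull(shape, n_samples)
    return np.mean(applied_K_MPa_sqrt_m > k_ic)

for applied in (40.0, 70.0, 100.0):                 # placeholder applied K values
    print(f"applied K = {applied:5.1f} MPa*sqrt(m): "
          f"P(initiation) ~ {prob_initiation(applied):.3f}")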
In the interest of time, I think I will
move on. There is a similar diagram for arrest
toughness. Okay. Only two slides to go. Key issues.
These issues again become apparent as we dig into the
results. We finally have a prioritization of results
that tells us which things to focus on.
The success criteria, and how much time is
available for the operators to perform their actions
is something that we need to look at very carefully.
And more generally how do we quantify the human error
probabilities, which is -- obviously a consideration
of uncertainties is an important part of that
quantification process.
And in the thermal hydraulics analysis,
there is the question of how we are going to deal with
model uncertainties, especially for the two-phase
scenarios, and we still have to develop just as a
mechanical matter the parameter distributions, and
what are the uncertainties for the boundary
conditions.
Probabilistic fracture mechanics analysis.
We have uncertainties in the fracture toughness and
the radiation shift. Again, that is that
embrittlement model that I talked about, and there are
significant uncertainties in crack arrest and how you
model that, that still need to be addressed.
I separated the integrated analysis out
from these three because in some fashion we have been
focusing so much on the three boxes, and we have not
talked enough about the integration of those boxes,
and I am talking about our project, as well as this
presentation.
And this binning is obviously really,
really important. It drives a lot of the results. We
have to look at that very carefully. This is one
where again I suspect we wouldn't be quantifying the
uncertainties in our binning process, but we would
have to recognize it is a source of uncertainty.
We believe that we are consistently
treating uncertainties across the different
disciplines, and we are trying very hard to be
consistent.
We think we are quantifying most of the
potentially important sources of uncertainty, and
again we have a rationale for saying that. So the
model parameters, boundary conditions, and submodels.
We are addressing those explicitly.
Model structure uncertainties associated
with the system codes, for example. And as was
pointed out, maybe these are not important for many of
the scenarios that we care about, but they are likely
to be important for some of the scenarios.
And I believe at this point that we can
only treat them qualitatively, but we will see. We
recognize that we may need to refine our models,
depending on the results of sensitivity experiments
and perhaps the integrated code work.
We will document the approach, and we will
update the white paper the committee saw earlier.
And this was mentioned already, but work is in
progress, and we are iterating on the initial results.
So later when spring comes by, hopefully
we will have something. But we indeed can walk
through a scenario, and what we tried to do today, of
course, was give you some sense of at least the
beginning parts of that scenario.
We think that the approach for treating
uncertainties may be useful in other risk-informed
applications, and certainly it is a model that we are
going to try out as we start approaching other issues.
And with that --
MR. LEITCH: Might this work lead to
relaxation of some conservatism that is in the
pressure temperature curves that are in the tech specs
now?
MR. MAYFIELD: That's another possible
application of this. We have backed off quite a ways
there, but that is another possible application of
this, as well as using this kind of scheme to look at
relief for the boilers.
As the embrittlement trends tend to go up,
the boilers are being pinched more and more on their
hydro test temperatures, and the time that it takes
them to get to those temperatures. So we think this
structure may be a good way to look at the
underpinnings for those pressure temperature and hydro
test temperature requirements.
MR. LEITCH: I think that some licensees
have already applied for some relaxation in those
curves to give them greater operating flexibility. Is
the basis for that some of this work?
MR. MAYFIELD: No, it is simply a change
in the fracture toughness curve.
MR. LEITCH: Okay.
MR. MAYFIELD: We went away from the very
conservative reference fracture toughness curve, and
are permitting them to use the initiation fracture
toughness curve, and that's the big change that was
made in the ASME code.
MR. LEITCH: Okay. Thank you.
MR. MAYFIELD: Mr. Chairman, there are two
points that I would like to make as we close. We have
named four plants here, and I would like to emphasize
with the committee and on the record that we are using
them because they have kindly volunteered to support
this effort, and not because we are concerned about
their integrity from a pressurized thermal shock
standpoint.
So they have stepped forward and
volunteered to help us in this activity. And finally
I would note that we welcome the opportunity to come
before the committee and discuss the pressure vessel
embrittlement research, and the need for it, and to
explain to you why Dr. Powers is so completely wrong
in his assessment.
(Laughter.)
MR. MAYFIELD: And unless there are any
other questions, we thank you.
DR. POWERS: I would hope that Dr. Powers
would get a chance to rebut.
CHAIRMAN APOSTOLAKIS: Dr. Shack, are we
done with this?
DR. SHACK: We are done with this.
CHAIRMAN APOSTOLAKIS: Well, we finished
a minute-and-a-half early, which pleases me to no end.
Thank you very much, Nathan and Mike. We will recess
until 10:35.
(Whereupon, a recess was taken at 10:14
a.m., and the Committee meeting was resumed at 10:34
a.m.)
CHAIRMAN APOSTOLAKIS: We are back in
session. The next topic is the Siemens S-RELAP5
Appendix K Small Break LOCA Code. Dr. Wallis, as
I understand it, could not get here on time, but Dr.
Kress has kindly agreed to lead us through this. Dr.
Kress.
DR. KRESS: Thank you. Dr. Wallis is
having airplane delay problems, and that's why he is
not here. I am sure that he would have wanted to be
here.
The purpose of this meeting today is for
the full committee to review the NRC staff safety
evaluation report on the Siemens Power Corporation S-
RELAP5, which is a thermal hydraulic code.
The application for its use is for
Appendix K Small Break LOCA analysis only. You want
to keep that in mind, because our review should focus
on the Appendix K requirements, and not best estimate,
or those things.
We will get a chance later when we come
back to us for application to have this code be used
for best estimate for large break LOCA, but that's not
part of today's meeting.
We did have a couple of subcommittee
meetings, one back in August, and the latest one on
January 16th and 17th. We had a real turnout of
committee members to that. I think the people there
were me and Graham Wallis.
So what you hear today is -- and we did
have our consultants there, too, our usual suspects.
But what you will hear today is a very abbreviated
summary of what went on in the subcommittee meeting.
So with that -- and we are expected to
have a letter on this.
CHAIRMAN APOSTOLAKIS: In fact, we have a
draft.
DR. KRESS: I think there is a draft.
CHAIRMAN APOSTOLAKIS: There is a draft in
there.
DR. KRESS: So with that, I will turn the
floor over to Ralph Landry.
MR. LANDRY: Thank you, Dr. Kress. As Dr.
Kress said, my name is Ralph Landry. I was the lead
on the review of the Siemens S-RELAP5 code, and what
we would like to do today is present to you the
results of our review of S-RELAP5.
And as Dr. Kress said, S-RELAP5 has been
submitted by Siemens Power Corporation for application
to small break LOCA in PWRs, specifically Westinghouse
and Combustion Engineering Design PWRs, under the
guidelines of 10 CFR, Part 50, Appendix K.
So that a lot of what we did in the review
is supposed to be along the guidelines of Appendix K
and the requirements that came out post-TMI-2
accident. But we looked at this code in a lot of
depth, a lot more depth than has typically been done
in small break LOCA analysis code reviews, because we
knew that the code was coming in again for a large
break LOCA for a best estimate application.
So while we were looking at the code, we
looked at it in a lot of depth to make sure that we
understand it thoroughly before we even start the
next phase of the review.
What I would like to do today is cover
some of the milestones in the application which we
received and talk about very briefly some of the code
modifications that have been made.
This code is a combination of a group of
codes that have been approved individually, plus some
additional modifications. This combines the ANF-RELAP
code, which was submitted and approved for small break
LOCA under Appendix K, with the
TOODEE2 hot rod model code, the RODEX2 fuel model code,
and the ICECON containment model code.
So that the code that is now running under
the name of S-RELAP5 is a combination of the codes to
run as an integrated unit, rather than individual
codes from which data must be taken and put into the
next code, that code run, and you can iterate back and
forth.
But now the codes can talk to each other
and transfer information at specific time intervals
without having to manually take data from one code to
another.
I would also like to spend a little time
talking about the assessment which is done for this
code. The assessment has been done more extensively
than is required under the guidelines of Appendix K
and the requirements of NUREG 0737.
We would like to talk about some of the
regulatory requirements and how the regulatory
requirements for a small break LOCA have been
satisfied in the code, and the conclusions of the
staff.
We received the code just a little over a
year ago. Now, when the application came in, Siemens
understood that the manner in which we conduct code
reviews today is that we have to have not only the
documentation for the application, documentation for
the code, but the code itself.
The applicant submitted to us the code in
a source code form and in a binary form so that we
could install both on the computer. We could build
the code ourselves and make sure that the code builds
the same as the code that is being used by the
applicant.
We have test cases that we can run on the
code, and we have of course all documentation for the
code. We requested or sent out a request for
additional information in December, and we have now
received the formal response to those requests for
additional information.
That sounds like there is not much time in
which to review the RAIs. In reality, the way we have
been conducting code reviews has been to communicate
to the applicant as we perform the review the concerns
and issues that we have in our examination of the
code.
So we have communicated our RAIs to
the applicant throughout the past year. Then when we
had all the RAIs together, we sent the RAIs
through the normal signature process, and formally
asked them in December.
We have received draft copies of their
responses along the way from the applicant, and now
the applicant has formalized and gone through their QA
procedure, and is sending their formal response to the
RAIs.
So it sounds like there is a big time lag
before the RAIs, and then suddenly everything comes at
the end. In reality, it is not a big time lag,
because we ask the questions and get answers as we go
through the review.
And we have found in conducting these
reviews that this is a very efficient way for us to
conduct a review. We have prepared a draft safety
evaluation report that was submitted to the Thermal
Hydraulic Subcommittee for their review.
We have discussed that with the Thermal
Hydraulic Subcommittee as Dr. Kress pointed out. We
have had meetings with the subcommittee, and we talked
very briefly with them back in March in the context of
other code reviews.
That, yes, we had received the code, and
yes, we were accepting it for review. There seemed to
be sufficient material to allow us to do a formal
review.
We met with them in August to go through
the review plans, and to talk in detail about the
contents of the code, and then we met with the
subcommittee again in January, at which point we
reviewed with them the safety evaluation report, which
the staff had prepared.
And we are meeting today with the full
committee, and we plan on finalizing the SER after
this meeting.
We will go through and make sure that we
have covered every concern that we have raised, and
that we have covered every concern that the
subcommittee has raised, and the concerns that you may
raise today. So that when we issue a final SER, we
can have all the issues properly closed.
Very briefly, some of the modifications
that have been made to the code. The code started as
ANF-RELAP, which is a version of RELAP5 MOD2. You are
probably all aware that the version that research has
is RELAP5 MOD3.
Siemens started with a MOD2 code, and made
some changes, such as making the code
multidimensional, 2D capable, in the hydraulic
components.
This is used primarily in areas such
as the downcomer, where we have been seeing that 1D
modeling does not seem to be the best way. There
are 2D hydraulic effects, and so the applicant has
modified the code to make it 2D hydraulic capable,
especially in those areas where 2D effects become
important.
There have been changes made in the
energy equations. They have been reformulated to get
rid of some of the problems that we have seen with
RELAP5 in the past.
One of the problems that we had several
years ago was a misapplication of the code for
containment analysis, and that came back to us, and we
looked at what was being done.
And we said, wait a minute, you can't do
this with RELAP5 because if you go from a very high
pressure node to a very low pressure node, with a very
large change in area and volume, the code doesn't
conserve energy properly, and this is not the intent
of the code.
Well, changes have been made in the way
the energy equation is formulated in the code so that
some of those problems are alleviated in this version
of S-RELAP5.
We looked a lot at the numerical solution
scheme that has been installed in the code. The
numerical solution has been changed, and the approach
to the S-RELAP5 code over the other RELAP5 codes to
correct some of the numerical problems that create
numerical instabilities, numerical diffusion, and
other problems.
Those have long been a problem with the
code. Sometimes they are created because the code
developer's intent is to make the code fast running.
Well, they make it fast running, but they make the use
of the code a real art form because if you don't use
the code exactly right, you can create numerical
instabilities.
Some of that has been taken out to make
the code more robust, and with the recognition that
you can still have a fast running code today without
having to use the numerical schemes that make it
unstable.
DR. POWERS: Instability tends to be a
self-revealing thing.
MR. LANDRY: Right.
DR. POWERS: I mean, you get a bunch of
spikes with RELAP.
MR. LANDRY: Right.
DR. POWERS: The other issue of numerical
diffusion is a little more subtle isn't it?
MR. LANDRY: Yes.
DR. POWERS: Is it possible to tell just
by routine examination of the results if you are
getting a numerical diffusion?
MR. LANDRY: A knowledgeable user can.
DR. POWERS: All right.
MR. LANDRY: What Siemens has done is
improved the numerics and the solution techniques, so
that it reduces the amount of numerical diffusion, and
it makes it less of an art so that the user, while
they still are using knowledgeable users, it becomes
less of an art, and less sensitive to the user.
They have improved the aspect of numerical
diffusion.
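[Illustrative sketch: a minimal Python example of why numerical diffusion is harder to spot than an instability -- a first-order upwind advection step is perfectly stable but smears a sharp front, and only a user who knows what the front should look like will notice. This is a generic illustration, not a representation of the S-RELAP5 numerics.]

import numpy as np

# Advect a sharp front with first-order upwind differencing: stable for a
# Courant number <= 1, but the scheme smears the front (numerical diffusion).
nx, courant, steps = 200, 0.5, 150
u = np.zeros(nx)
u[:40] = 1.0                                    # initial sharp front

for _ in range(steps):
    u[1:] = u[1:] - courant * (u[1:] - u[:-1])  # upwind update, velocity > 0

# The exact solution is still a sharp step, shifted by courant*steps cells;
# the numerical front is instead spread over many cells.
front_width = np.count_nonzero((u > 0.05) & (u < 0.95))
print(f"cells over which the front is smeared: {front_width}")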
DR. POWERS: What I am struggling with is
when people talk about uncertainty analysis -- and I
am dealing with an issue somewhat tangential to this
particular SER -- they sometimes raise the issue of
the numerical solution itself being a source of
uncertainty.
And I am wondering is that a major
uncertainty here?
MR. LANDRY: We don't think it is. So we
are going to get into that more when we look at the
code for the large break application, because that is
based on an uncertainty analysis.
But in the discussions which we have had
with Siemens' personnel at this stage, it appears to
us that they have done a lot to take that numerical
uncertainty out of, or reduce it, in the code.
DR. POWERS: Now, are there things that
one should worry about other than the numerical
diffusion and instabilities in these codes as far as
the solution algorithm itself goes?
MR. LANDRY: There could be if the code is
used by an unknowledgeable user, because you have to
make sure that you are not making all the standard
mistakes that a user would make, violating Courant
limits, and things of that nature.
DR. POWERS: This code lets you know about
violating Courant limits?
MR. LANDRY: Well, codes don't always come
right out and tell you that. You have to be
knowledgeable enough to recognize what the code is
doing.
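[Illustrative sketch: a minimal Python example of the kind of check a knowledgeable user applies by hand -- the material Courant limit ties the time step to cell size and velocity, and exceeding it is a usage problem the code will not necessarily flag. The cell size, velocity, and time steps are illustrative.]

def courant_number(velocity_m_s, dt_s, dx_m):
    # Material Courant number u*dt/dx; explicit material transport is
    # typically restricted to values at or below 1.
    return abs(velocity_m_s) * dt_s / dx_m

# Illustrative nodalization: 0.5 m cells, 10 m/s liquid velocity.
for dt in (0.01, 0.05, 0.1):
    c = courant_number(10.0, dt, 0.5)
    flag = "ok" if c <= 1.0 else "violates the Courant limit"
    print(f"dt = {dt:5.2f} s -> Courant number {c:4.1f} ({flag})")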
DR. SEARLE: That is where some of the
instabilities come in.
MR. LANDRY: That is where some of the
instabilities come in, but a lot of this is in the
hands of the user also to recognize when the code is
not behaving --
DR. POWERS: Noding schemes are also a
problem.
MR. LANDRY: -- numerically correct, and
when the result that the code is giving is wrong for
numerical reasons, and not because of a
phenomenological reasons.
Let's see. One of the points that we
looked at was with the heat transfer model. While the
vast majority of the correlations that are used in the
code are directly out of the RELAP5 set of codes, a
change has been made to incorporate, instead of
Dittus-Boelter for film boiling and for gas heat
transfer, another correlation, the Shiralkar-Rouse
correlation, which has a better representation of
data.
It is a newer correlation, and represents
data that has been checked against
FLECHT SEASET data, and appears to be a better
correlation to use. A very close correlation to
Dittus-Boelter, but the uncertainty in the data seems
to be much better.
So we feel like that is the kind of
attitude that we want to see in an applicant that they
will not just use a correlation because it has been
used for 35 years, but look at it and say there is a
better correlation today.
And let's try it out, and if it works
right, and it gives very good answers, and it is
stable, and it represents data better, let's go to a
better correlation.
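[Illustrative sketch: for reference, a minimal Python version of the Dittus-Boelter form being replaced for film boiling and vapor convective heat transfer. The Shiralkar correlation's coefficients are not given in this discussion, so only the familiar Dittus-Boelter single-phase form is shown, with illustrative vapor properties.]

def dittus_boelter_h(re, pr, k_w_per_m_k, d_hyd_m):
    # Dittus-Boelter convective heat transfer coefficient (heating form):
    # Nu = 0.023 * Re**0.8 * Pr**0.4, and h = Nu * k / D_h.
    nu = 0.023 * re ** 0.8 * pr ** 0.4
    return nu * k_w_per_m_k / d_hyd_m

# Illustrative steam-like conditions in a 1 cm hydraulic diameter channel.
h = dittus_boelter_h(re=5.0e4, pr=1.0, k_w_per_m_k=0.05, d_hyd_m=0.01)
print(f"h ~ {h:.0f} W/m^2-K")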
DR. SHACK: Just for my information, where
does the virtual mass term arise from in here? Why do
I get a virtual mass term in the momentum equation?
MR. LANDRY: Gee, I wish Graham Wallis
was here so he could go into that one. Let me ask Joe
Kelly from Siemens if he could respond to that.
MR. KELLY: Joe Kelly from Siemens Power.
It comes out in the two-fluid model, and so
if you have an equation for the relative velocity, it
comes in through the time rate of change of the
relative velocity of the phases.
So it is the idea of like if you have a
ball of liquid, or excuse me, a bubble trying to be
accelerated in a liquid, it has to also accelerate
some of the liquid around it.
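[Illustrative sketch: a minimal Python expression of the effect just described -- a bubble accelerating relative to the liquid must also accelerate surrounding liquid, which enters the two-fluid relative-velocity momentum equation as a virtual-mass force proportional to the relative acceleration. The coefficient 0.5 is the classical isolated-sphere value, used here as a placeholder.]

def virtual_mass_force(alpha_gas, rho_liquid_kg_m3, rel_accel_m_s2, c_vm=0.5):
    # Virtual-mass force per unit volume (N/m^3) on the gas phase:
    # F_vm = C_vm * alpha_g * rho_l * d(u_g - u_l)/dt.
    # C_vm = 0.5 for an isolated sphere; codes adjust it with flow regime.
    return c_vm * alpha_gas * rho_liquid_kg_m3 * rel_accel_m_s2

# Illustrative: 10 percent void, water-like liquid, relative acceleration 5 m/s^2.
print(f"F_vm ~ {virtual_mass_force(0.10, 1000.0, 5.0):.0f} N/m^3")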
MR. LANDRY: Thanks, Joe. Joe has been
dealing a great deal in this discussion with the
concerns that Dr. Wallis has raised on momentum, and
so I appreciate his response.
Some of the models that have been changed
to be consistent with the requirements of Appendix K
include adding the Moody choked flow model. The
counter-current flow limit model has been upgraded.
DR. KRESS: We had some discussion about
that in the subcommittee. Why do you have a code that
is configured in such a way that it is basically
incompatible with something like a Moody model, in the
sense that the code itself calculates the things that
create the critical flow as things progress down the
pipe to the hole?
But the Moody model takes boundary
conditions and calculates the same thing in a
different way. Do you recall how that discussion
turned out?
MR. LANDRY: That discussion came out that
if all of the conditions were being calculated and fed
directly into the Moody model, there could be a
problem. But the code calculates fluid conditions,
which then become the boundary conditions for a hard
line Moody model.
And once those conditions are input into
the Moody model, the Moody model will calculate
correctly as it is supposed to calculate.
DR. KRESS: So you just calculate the
boundary conditions?
MR. LANDRY: Right.
DR. KRESS: And where do you stop the
calculation to decide where the boundary is?
MR. LANDRY: That is in the nodalizing of
the pipe or --
DR. KRESS: It is the hole in it, the
nodes of the pipe with the hole in it, you stop there?
MR. LANDRY: Yes, but that has to be in
the user guideline specifications. Part of the
sensitivity studies is determining the final
nodalization before the break itself, to see
that you are getting the right conditions into the
node for the break.
Okay. They have added to the code EPRI
pump data. They have added, as I said earlier, the
ICECON code, RODEX2, TOODEE2, and they have changed
the code architecture, so that even though it is based
on the RELAP5 MOD2 code, the architecture now matches
the RELAP5 MOD3 series of codes, the more modern
architecture, and it is based on FORTRAN 77.
So they are upgrading into a more modern structure.
DR. SEARLE: I have a couple of questions.
Well, one basically. Is there somewhere in all of
this that tells us what the limits are on the
application of this code?
MR. LANDRY: In the submittal, yes. This
application is for a small break LOCA.
DR. SEARLE: No, no, I am talking about in
terms of the models that are being used to define
specific physical phenomena, are there any cautions
about trying to apply this code in cases where clearly
you don't have that situation?
DR. KRESS: Are you thinking maybe about
upgrades? Is there something --
DR. SEARLE: Well, for example, we know
that there are a bunch of people running now talking
about increasing the burnup on fuels. I think there
are probably problems with Baker-Just when you go
above 40K maybe.
DR. KRESS: Yes, I think you're right.
DR. SEARLE: And is there anything that
cautions you that you may be walking the plank if you
try to use this in the wrong region?
MR. LANDRY: Well, the documentation
provides the parameter range over which the different
models are reviewed, assessed, and acceptable. We
have to rely on user guidelines that they will not use
the code outside those ranges.
And then we do have the option or we have
the requirement when a calculation comes in to review
the application of the code to see that it was applied
and used within the proper range of parameters for
each model.
DR. SEARLE: You mentioned user
guidelines.
MR. LANDRY: Yes.
DR. SEARLE: Have you looked at the user
guidelines to convince yourself that a reasonably
sensitized user would be able to pick up on any
problems by looking at those guidelines and thinking
about what it is that he is trying to apply to them?
MR. LANDRY: Well, the manuals that we
have seen, we would in our judgment say, yes, the
reasonable user would understand where the code is to
be used and where it's not.
This code -- I think what you may be
referring to is other codes which are given out, or
sold, or distributed throughout the world, and
throughout the industry, and you don't have the
control over the user.
This code is used solely within the
corporate structure of Siemens Power. So they do
have, through their quality assurance program, the
control to ensure that the code is used properly, and
that it is not used outside acceptable ranges or
applicable areas, for even things like the Baker-Just
equation.
DR. KRESS: And we had a concern in the
subcommittee about default values built in, and they
might not properly be used. But Ralph's answer was
what set our minds at ease on that, that it is within
the corporation, and when they get specific
applications, that is one of the things that they look
at.
MR. LANDRY: Right. It is not a pure
black box where the code is used, and just an answer
is given to us. We have the responsibility to review
the way it has been applied also.
MR. LEITCH: I seem to recall some earlier
versions of RELAP5, when you put in various sizes of
small break LOCAs, a prediction of peak fuel
temperatures had some fairly significant
discontinuities in it, and gave rise to questions
about the validity of the code. Does this have that
same problem?
MR. LANDRY: This I don't believe does,
because the RODEX2 model has been incorporated in the
code.
MR. LEITCH: Say again? RODEX2?
MR. LANDRY: Which is the fuel model. The
fuel model, which Siemens is using in this code, is a
fuel model which we have reviewed and approved for use
in the Siemens fuel design work. We have reviewed
that pretty heavily, and that is not using the RELAP5
fuel model now.
MR. LEITCH: Okay.
MR. LANDRY: In talking briefly about the
code assessment that has been done, the small break
LOCA assessment cases are pretty well defined for
applications. Post-TMI, the requirement came out in
NUREG 0737, Section II.K.3.30, of what was required
for assessment of a small break LOCA code.
And there the position of the staff is
very short, and says that appropriate LOFT and semi-
scale tests are to be used for assessment of small
break LOCA.
If you go down in the text, it then
suggests two specific tests; a specific LOFT test, and
a specific semi-scale test, should be used for the
assessment purposes.
DR. KRESS: That was a subject discussed
also at the subcommittee.
MR. LANDRY: Right.
DR. KRESS: And I remember the flavor of
the discussion was why is it that we believe that just
two tests provide sufficient validation for a code for
Appendix K purposes. And I don't recall what the
answer to that was.
MR. LANDRY: Well, those two tests looked
at two specific problems that came out from the
calculations that were done for the TMI-2 accident.
DR. KRESS: As I remember, one of them, I
believe, was the LOFT test. Basically you could match
it with just some energy balance, which almost any
company could do.
MR. LANDRY: Right. But in assessing the
S-RELAP5 code, Siemens has gone beyond those two
tests that were required. In fact, they looked at all
the test data that were available and said these two
tests are superseded by other tests at a later time.
DR. KRESS: That was the answer. I
remember it now.
MR. LANDRY: And would be better tests,
and tests that would give a more thorough examination
of the capability of the code. In fact, the
assessment was done against a different semi-scale
test, and a different LOFT test, against the --
DR. KRESS: And that leads me to another
question. Do we have a bad rule when we specify just
those two tests are sufficient to validate a code?
You know, it has nothing to do with this
Siemens application. But is this a bad rule that we
have?
MR. LANDRY: Well, I would rather say that
at the time, and with the data that were available, we
felt that this was --
DR. KRESS: Well, at the time, that may be
just about all it was.
MR. LANDRY: -- the best information we
had for assessing the phenomena that we saw occurring
in TMI, and that the codes had to predict in
particular -- well, there are other tests or other
assessments that have to be done.
DR. KRESS: Does the rule read -- well, it
may not be in the rules. It is in the NUREG.
MR. LANDRY: The NUREG says --
DR. KRESS: Does it suggest at least these
tests?
MR. LANDRY: Well, I quoted the position
of the staff in the SER verbatim, and the position
simply says, or concludes, that they have to assess
against appropriate LOFT and semi-scale tests.
In the descriptive material that follows
that position, it suggests that these two tests are
the tests that must be used, L3-1, and S-07-10B.
DR. KRESS: So Siemens could have stopped
with just those two?
MR. LANDRY: According to those
requirements, they could have, but they didn't.
MR. BOEHUERT: But it's really up to you
guys though, isn't it, Ralph?
MR. LANDRY: Yeah. But they didn't stop
there. They went into two different LOFT and semi-
scale tests, plus 2D flow tests, and UPTF tests, and
a very recent BETHSY test.
But then in looking at the assessment that
was done, Siemens put together what they called an
informal PIRT, because Appendix K doesn't require a
PIRT.
But they put together an informal PIRT
that looked at different locations in the reactor
cooling system, and different phenomena that would
occur, and how they rank those phenomena, and then
what test data, what test facilities, what test data
would best represent the phenomena that they are
trying to examine.
That was then used and the total
assessment was based on that informal PIRT. So the
assessment that was performed was not just based on
the one semi-scale and LOFT test, but it was based on
these tests, plus all the tests that were done in
response to their informal PIRT.
So our conclusion was that they examined
significant parameters throughout the range that could
occur in different components of the system, and
throughout the different aspects of the small break
LOCA.
They have substituted newer tests, which
should have better data, better qualified data, for
the older tests. So they have used good tests, better
qualified data, and a more expanded assessment than is
required.
DR. POWERS: A NUREG is not a rule. It is
a recommendation.
MR. LANDRY: That is correct. The only
caveat that we make is that following the issuance of
the NUREG and the bulletins and orders, some plants
may have put into their licensing basis the
requirement that they have to analyze these two tests.
So when we get the applications
using this code, we would have to make sure that
if it is in the licensing basis of a particular plant
that they use S-07-10B and L3-1.
Either those cases would be analyzed or
there would be a change made to their licensing basis
to use these assessment cases instead.
DR. POWERS: An interesting point.
MR. LANDRY: Now, we have already touched
on some of the regulatory requirements in a previous
discussion. In looking at the application which we
have received, the modeling requirements of 10 CFR,
Part 50, Appendix K, such as Moody critical
flow, have been incorporated in the code.
We believe that the assessment not only
meets the intent of II.K.3.30, but goes beyond the
requirements of II.K.3.30. A full assessment has
been done, a very good assessment.
Instead of calling it an informal PIRT,
this is a step beyond the requirements of Appendix K.
Many sensitivity studies have been put
into the application, and these are required of all
licensing basis LOCA codes. They have looked at a
range of break size once they determine the worst
break.
Then they vary the effect of time step,
loop seal model, and pump model, radial flow form loss
coefficient, nodalization, and what they found in all
of these sensitivity studies after they determined the
worst break size is that each of these effects is less
than five degrees on peak clad temperature.
So that then leads to the conclusion that,
yes, they have a converged solution, and the code is
functioning properly.
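As a rough illustration of the kind of screen being described (the numbers, the base-case peak clad temperature, and the study names below are hypothetical, not taken from the Siemens submittal or the staff review), the sensitivity results could be tabulated against a five-degree band like this:

```python
# Minimal sketch (hypothetical values, not from the Siemens submittal):
# screen sensitivity-study results against a five-degree band on peak clad
# temperature (PCT) around the worst-break base case.
BASE_PCT = 1450.0  # assumed base-case PCT for illustration, degrees F

sensitivity_pct = {  # assumed results of each sensitivity study
    "time step": 1452.1,
    "loop seal model": 1448.7,
    "pump model": 1451.3,
    "radial flow form loss": 1449.9,
    "nodalization": 1453.6,
}

for study, pct in sensitivity_pct.items():
    delta = abs(pct - BASE_PCT)
    flag = "within band" if delta < 5.0 else "re-examine"
    print(f"{study:22s} dPCT = {delta:4.1f} F -> {flag}")
```

Staying inside the band for every perturbation is, as the discussion notes, only one indicator; it has to be read together with the numerical behavior of the solution.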
DR. KRESS: It also says that peak clad
temperature is not very sensitive to those things.
DR. POWERS: Why does it show that they
have got a converged solution?
MR. LANDRY: I'm sorry?
DR. POWERS: Why does it show that they
have got a converged solution?
MR. LANDRY: Well, in addition to looking
at the numeric response, it shows that there isn't a
big variation for any of these parameters.
Altogether, it is not just those parameter variations.
DR. POWERS: You can't tell from the fact
that it is only 5 degrees. You can only tell how the
iteration approaches that 5 degrees.
MR. LANDRY: Correct. Correct. It's not
just that. It is everything combined that indicates
that they are converging.
DR. POWERS: All right.
MR. LANDRY: The conclusion that the staff
has arrived at is that the ANF-RELAP code, which was
approved, and the RODEX2, TOODEE2, and ICECON codes, all of
which were approved individually by the staff, have
been combined into an integrated code, an integrated
package that can perform the entire calculation
without transferring data manually from code to code.
We believe that the code documentation
supports the modifications made to the ANF RELAP code.
We accept the modifications.
DR. KRESS: Let me ask you a question
about that. I think in our subcommittee meeting, we
put it this way. We expressed some disappointment in
the status of documentation, in the sense that the
equations and models that we were presented were
different than the ones that are in the documentation
we had.
And that some errors in the previous
equations were still in the documentation. Is that
going to be fixed over some time period, or is the
situation going to be different when they submit for
the best estimate application, or maybe I should be
asking this of the Siemens people. I don't know.
MR. LANDRY: Well, I think Jerry Holm from
Siemens will address -- would you rather address that
now or later, Jerry?
MR. HOLM: I can address it right now.
There were a number of what were characterized in the
subcommittee meetings as typos identified in two of
the documents that we submitted, the models and
correlations document and the programmer's manual.
And in conjunction with supplying the
response to the request for additional information, we
went through and tried to identify all the typos in
those two documents, and we have provided revised
documents, along with the RAI responses.
DR. KRESS: And those will be the
documents, plus any MODs that you make, and that you
will submit for the best estimate analysis?
MR. HOLM: They will be the starting point
for the best estimate. There have been some small
number of additional model changes made for the best
estimate program, and we will describe those and
modify those documents.
DR. KRESS: You will get rid of the Moody
model, for example?
MR. HOLM: It won't be used for the large
break LOCA, but it will still be in there, because we
have to use it for small breaks.
DR. KRESS: So it would be an option, I
guess?
MR. HOLM: Yes.
MR. LANDRY: Okay. As was just discussed,
we pointed out errors in the course of the review and in
documentation. One thing that we would like to
emphasize is that this has been a very fast review.
If you look at the history of reviewing
computer codes, one year is a fairly quick turnaround
on a review, and we feel that is primarily because
Siemens Power Corporation has been very responsive and
very cooperative during the conduct of this review.
When we asked questions, they were very
quick to work together with us to arrive at an
acceptable answer. We feel that their cooperation and
their willingness to work through any problems that we
discovered in this review was instrumental in being
able to conduct a review in such a relatively short
period of time.
DR. KRESS: I would like to second that
comment, Ralph. We found in the subcommittee meeting
that their ability to answer our questions, and their
candidness with their responses was actually
refreshing. So I agree with you.
DR. SEARLE: Are we going to see some
actual run results later?
DR. KRESS: Probably not. Did you plan on
presenting some results still?
MR. LANDRY: Yes. In my presentation I
will show one data --
DR. SEARLE: Very good. Thank you.
MR. LANDRY: So the conclusion of the
staff's review is that we find the S-RELAP5 code
acceptable for use in satisfying the requirements for
analysis of the small break LOCA under the
requirements of 10 CFR, Part 50, Appendix K.
DR. KRESS: That is the major finding
right there.
MR. LANDRY: That is the point that you
have to get to.
DR. KRESS: You have to get there,
otherwise --
MR. LANDRY: Otherwise, we go back and
start over.
DR. KRESS: -- you go back and start over,
yes. Okay. I guess now we turn the thing over to
Jerry Holm, of Siemens.
DR. SEARLE: Jerry, I've got to say this
logo you have on here with a PWR bird cage, and a BWR
box, gives me the cold shivers.
MR. HOLM: The topic today will be the
Siemens PWR Appendix K small break LOCA analysis, and
this is going to be based on the code S-RELAP5, and my
name is Jerry Holm, and I am the manager of product
licensing for Siemens.
And I will give a short introduction, and
then Joe Kelly will give some more detailed
information about the code and the methodology. But
of course we have to keep it to something of an
overview since we have only got about 45 minutes or
less.
Again, I am just going to give about three
slides for an introduction, and then Joe Kelly will
talk about the S-RELAP5 code, and the first thing he
will show is the relationship to the RELAP5 family of
codes, since RELAP5 itself is extensively used in the
industry.
We will give a summary of Siemens'
enhancements, and only a selected few of those that
Ralph Landry talked about, the ones that we thought
were most important. We will give a summary of the
methodology for the Appendix K LOCA analysis, and
then a summary of validation.
And we have chosen one of the benchmark
cases to show some plots from so you can see the
technical comparisons. Then I will get up at the end
and just make a quick conclusion.
Okay. Ralph Landry alluded to the fact
that we are going to be presenting or submitting to
the staff a large break LOCA methodology, and what we
call our realistic large break LOCA methodology using
S-RELAP5, and that submittal will be later this year.
Right now what we have submitted to the
NRC is the use of S-RELAP5 for small break LOCA, and
we have also submitted it for non-LOCA methodology.
Our future plans are to extend this code to BWR LOCA
analysis, and long LOCA analysis.
And in fact the R&D program for the
conversion to a BWR LOCA will start later this year
after we submit the realistic LOCA methodology and
the development staff frees up to do that work.
Our motivation for this is primarily that
the cost of benchmarking and doing maintenance on
codes is increasing, and our desire is to choose one
code to try to maximize the results of our
benchmarking work, and also to maximize the expertise
of our staff.
It is a lot cheaper for us to do work and
become experts in one code, rather than six, and that
is the main purpose. We have been working on S-RELAP5
for realistic LOCA methodology for close to 15 years
now, and it is that extensive effort that led us to
choose this as the base code.
We provided, we think, an extensive amount
of information to support the staff's review, and we
have a topical report which describes the methodology
in the benchmarking.
And then in addition to that, we provided
a significant amount of supporting documentation; our
models and correlations manual, a programmer's guide, an
input requirements manual. We provided on a CD-ROM
the code source and an executable version, and sample
cases so that the staff could actually run the code.
And we have made a presentation to the
NRC in March of last year, and two presentations to
the ACRS Thermal-Hydraulic Subcommittee. And then we
provided a formal response to the RAIs which we sent
last Friday.
And the main point is that we have tried
to provide sufficient information to support the use
of the code for small break LOCA. With that, I will
turn it over to Joe Kelly.
MR. KELLY: Okay. This is the same
outline slide that you saw just a minute ago with
Jerry, and I am going to give a very brief overview of
the history of S-RELAP5 thermal hydraulics code, and
then talk about the Appendix K methodology for small
break LOCA.
And show one example of the validation,
and that is the BETHSY test, and it is the
International Standard Problem Number 27. Actually,
Ralph Landry covered this, but I had it in pictorial
form, and it is the relationship of the S-RELAP5 code
to the other flavors of RELAP, and also the other
codes that we have incorporated in it.
We started with MOD-2 of the RELAP5 code
which was developed at the INEL, and we made changes
to it to perform non-LOCA transients, main steam line
break, and small break LOCA Appendix K analysis.
And this resulted in the ANF-RELAP code
which had been submitted and approved in several
different topicals between the years of 1983 and '89.
And so this is the code that we have been currently
been using in our licensing applications.
Since that time, RELAP5 Mod 3 was
developed, and from that we have primarily taken
upgrades to the code architecture to make it more
portable. Again, as Ralph said.
Also, there are three stand alone codes;
RODEX2, which is fuel rod performance, TOODEE2, which
is a hot rod model accounting for flow diversion due
to flow blockage; and ICECON, which is a containment
analysis code.
Again, these are stand alone, and they had
all been submitted and approved individually, and they
were used in concert with ANF RELAP, and that required
manual transfer of data from one code to the other.
You know, the output of one is input to
the other, and sometimes it would require an iterative
process in those two. So what we have done now is
build these three codes into what is now S-RELAP5,
and so the data transfers happen automatically so
you don't have a staff intervention there.
And also so that if it is something like
the effect of containment pressure on a large break
LOCA, that is an integral part of the analysis, and
not something that has to be done off-line, iterating
between the results of two codes.
DR. KRESS: This may be a little off the
subject, but how do you validate a hot rod model like
TOODEE2? Don't you have to have a full bundle test,
with an axial and a radial power distribution? How
is something like that actually validated?
MR. KELLY: Well, unfortunately, you have
the wrong person up here to answer that, because my
experience is more in realistic, and this is more
Appendix K, and from what little I know of it, what it
does is that it implements NUREG 0630, and the
regulations to do with that.
DR. KRESS: Okay. I understand that.
Okay. But you may need something more when you get to
the realistic.
MR. KELLY: If on realistic we were going
to try and take credit for the enhancement in heat
transfer that you see when you have blockages, with
droplet shattering, et cetera.
And that would be a much longer assessment
program to validate that, using something like the
FLECHT-SEASET 163 rod test, but we are not planning to
try and take credit for that.
And then finally there are a number of
enhancements that Siemens developed, which I will
briefly show in the next slide. There are a number of
enhancements, but they mainly fall into four different
areas.
They are mass conservation, energy
conservation, momentum conservation, and the
constitutive models.
In mass conservation, the numerics have
been improved to minimize mass error during long term
transients, and I will show something on that in the
next slide.
Energy conservation, again, Ralph
mentioned this. We reformulated the energy equation
to eliminate the problem that would occur when you
have a flow going across a large pressure drop.
It is not important for small break, but
it is important for large break, having to do with
energy deposition into the containment.
For momentum conservation, traditional
RELAP5 uses cross-flow junctions in a way to try
to emulate multi-dimensional flows -- or multi-regional
flows might be a better way of saying it.
What we did instead was implement a 2-D
component, which is primarily used in either the core or
the downcomer. And what had been seen in the past
with cross-flow junctions was anomalous flow
recirculations. The 2-D component eliminates those.
Of course, there are hundreds of
constitutive models in the codes, and a number of
those have been upgraded, primarily to increase
accuracy in the large break LOCA application.
But there are also modifications to what
is called a vertical stratification model, which helps
improve the loop seal clearing prediction.
And in talking about long term mass
conservation, when system thermal hydraulic codes,
such as TRAC and RELAP5 were first applied to small
break LOCA back shortly after TMI, one of the
primary challenges was long term mass conservation.
When you start running these transients
out to a million time steps, what would happen is that
the errors in solving the mass equation would
accumulate over time, such that in effect the code
would be either creating or destroying mass.
And if that fraction became appreciable
relative to the inventory in the vessel, then there is
no validity in the calculation whatsoever. So this is
an area that Siemens paid some attention to.
And so what I am going to show is results
from the integral assessments and the small break
LOCA sample problem, which were part of the submittal
in the topical report.
So the three integral tests and the PWR
sample problem, the transient time for each of the
tests, the number of time steps, and the mass error,
and again, this is the error in conserving mass for
the entire system, normalized with the initial mass
and expressed in percent.
So, for example, if you go to the BETHSY
test, there were over a million time steps, and the
cumulative mass error is less than 2/1000ths of 1
percent. So that is very good. So we do not have a
problem anymore with long term mass conservation.
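As a minimal sketch of the bookkeeping just described (the function and the numbers below are illustrative assumptions, not Siemens' code), the cumulative mass error normalized by the initial inventory and expressed in percent could be computed as:

```python
# Minimal sketch (illustrative only): net mass-conservation error over a long
# transient, normalized by the initial system inventory and given in percent.
def mass_error_percent(initial_kg, injected_kg, discharged_kg, final_kg):
    """Percent of the initial mass created or destroyed by the numerics."""
    expected_kg = initial_kg + injected_kg - discharged_kg
    return 100.0 * (final_kg - expected_kg) / initial_kg

# Hypothetical numbers chosen to give roughly the 0.002 percent quoted above
print(mass_error_percent(2000.0, 500.0, 1300.0, 1200.04))
```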
And that is all that I am going to say
about the code, unless I get questions, and we are
going to switch to an overview of the methodology.
And the first thing to realize is that we
define a methodology as basically two things. It is
the codes that we use, but it is also how we use those
codes.
And once a topical report has been
approved that methodology is then encapsulated into an
analysis guideline. And then that analysis guideline,
together with the quality assurance procedure --
because as Ralph said, all our users are in-house, and
they are subject to the analysis guideline and a QA
procedure.
So consequently you have the plant model
nodalization specified, and you ensure that the
Appendix K conservatisms are correctly applied. Also,
there are sometimes additional conservatisms that
Siemens prescribes.
For example, the way that we do loop seal
modeling for the small break LOCA; and also things
like the delays in diesel start times. And these are
all specified as exactly what you are going to do in
the analysis guideline.
And then because of the QA procedure, the
analysts are constrained to adhere to those
guidelines. So that gets rid of things like the user
effects that you hear about a lot these days.
When you are looking at performing small break
LOCA analysis, you can do a PIRT and come up with
many, many phenomena that appear to be important. But
there are really four major factors.
The first is, if you will, the transient
that you are running, for determining the limiting
single failure; and for most of our plants, this is
usually a loss of one diesel generator set.
So consequently we are going under the
assumption of only one high head safety injection
system being available. That's what makes the small
break LOCA something interesting.
The next is where you are in the fuel
cycle, and the limiting condition is normally the end
of cycle, and the reason for that is that gives you a
top-skewed power profile, so that your high power part
of the core is in the part of the core that will
become uncovered.
The next is break size, and so we perform
a break spectrum to determine the limiting condition,
and that is really looking for a window. And what the
window is bounded by are very small breaks, where the
break flow would be smaller than the capability of the
safety injection system to make it up. So those
cases don't even uncover.
It is bounded on the other side by breaks
that start getting large enough so that you get a
fairly rapid depressurization to the accumulator set
point, which then recovers the core.
So what you need is a break that is small
enough that the flow is -- excuse me, large enough so
that the flow is greater than the safety injection
makeup, but small enough that you get a gradual
depressurization rate so that you have a prolonged
transient with significant core uncover.
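A minimal sketch of that break-spectrum "window" logic, with hypothetical flows, times, and thresholds that are only placeholders and not the licensing break spectrum:

```python
# Minimal sketch (hypothetical values and thresholds): keep only break sizes
# in the window where break flow exceeds the high-head safety injection (HHSI)
# makeup but depressurization to the accumulator setpoint is still slow.
def in_window(break_flow, hhsi_makeup, time_to_accumulators, slow_enough=1000.0):
    """True if the break can give a prolonged core uncovery (crude screen)."""
    return break_flow > hhsi_makeup and time_to_accumulators > slow_enough

# (break size in inches, break flow in kg/s, seconds to accumulator setpoint)
spectrum = [(1.0, 20.0, 9000.0), (2.0, 45.0, 3500.0), (4.0, 150.0, 400.0)]
for size, flow, t_acc in spectrum:
    verdict = "candidate for limiting break" if in_window(flow, 30.0, t_acc) else "screened out"
    print(f"{size} inch: {verdict}")
```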
And then finally there is how you treat
loop seal clearing. The peak clad temperature is
affected by both which loop is clear and the number of
loops.
So we have come up with a proposal and a
methodology for this in order to remove the
variability that you see between calculations. I did
bring backup slides on both the break spectrum and the
loop seal clearing. So if there are questions about
that, I can provide more details.
Now, looking at the validation matrix.
Actually, there are four of them. The first is called
the general matrix, and it is a set of separate
effects and integral effects tests, and those are
performed and documented for every code version.
Then there is the small break LOCA matrix,
and again it is both integral and separate effects
tests, which is what Ralph showed. And that was part
of the small break LOCA submittal.
Similarly, there is a non-LOCA assessment
matrix, and those are a set of integral tests which
were part of the non-LOCA submittal, and for the
realistic large break LOCA, there is a PIRT based
matrix that is much more extensive. You know, in the
order of more than a hundred tests.
And so when we come in with the realistic,
you will be seeing that, and what that does is that it
shows not only code applicability of the transient,
but also how we determine the uncertainties in the
models so that it can then get propagated through the
uncertainty analysis.
And the small break validation matrix you
have already seen. It is one BETHSY test,
International Standard Problem 27, two-inch LOCA
break, and this one goes through pretty much all of
the expected phenomena that you want to see.
You know, the natural circulation phase,
loop seal clearing, core boil-off, and also recovery.
Semi-scale S-UT-8 has core uncover before
loop seal clearing. So it is a different kind of
test. LOFT LP-SB-03 is basically a core boil-off and
uncover.
UPTF loop seal clearing, this is a
proprietary test that was run at KWU, and so it is a
full-scale model of the loop seal. And so there is a
separate effects test to examine the clearing process.
And then 2-D flow test, and the purpose of
those was to provide some assessment for our 2-D
component.
I am going to show the results of one, and
this is the BETHSY ISP-27, and it is probably being
shown for two reasons. One of those is that it is the
most comprehensive test, in the sense of going through
all of the phenomena. The other one is that it is
also the best data comparison.
BETHSY is a full-height, 1/100 scale
model of a 3-loop PWR. So as test facilities go, it
is pretty big. For example, it is 17 times larger
than semi-scale.
Test 9.1b is a 2 inch break with no high
head safety injection. It results in deep core
uncover and rod heat-up.
In the S-RELAP5 assessment, the input
model follows our small break LOCA modeling
guidelines, with a few small changes. Obviously if
you are doing an experiment and you want a realistic
prediction, you don't use ANS, plus 20 percent, for
the power.
You use the actual power that was used in
the test. Also, we note from the test results that
one of the intact loops, loop number 2, clears.
And so what we have done is apply our loop
seal modeling methodology, where we bias the broken
loop and one intact loop to plug. So then in our
calculation, we clear loop number 2 just as it was
done in the test.
And that loop seal clearing is something
that we can talk more about if you would like. It is
something that in reality is more statistical, and you
can't really do it deterministically.
And so what we have done is put in a
biasing methodology to limit the variability and
ensure a conservative result. Also, for the critical
flow model, getting the break flow correctly in a
small break test, if you want to do a prediction of
the test, it is very important.
And so you can't use Moody here. So we
use the more realistic critical flow model in the
code. Similarly, even though I said BETHSY is pretty
large, it is only about 420 rods, and so it is just
slightly larger than one 17-by-17 assembly.
So using a 2-D component for the core
didn't make a lot of sense, and we used a 1-D core
model. And as I said, we get an excellent comparison
of both the core collapsed liquid level, and the
maximum rod temperature. And that's what I am going
to show.
So this is the core collapsed liquid level
comparison. The black line is the data, and the red
line is the S-RELAP5 prediction. It says core
collapsed level in meters. It is actually core, plus
a good chunk of the lower plenum, okay?
And it's versus time, and the data is done
by delta-P cell, and so what you are really seeing is
a delta-P measurement. And that is what we have
plotted also for S-RELAP5.
This is not a sum of void fractions, but
rather it is a pressure difference between
the lower plenum and the top of the core. So what you
are seeing here in the initial part is the flow
coast down, and that is the frictional pressure drop
having to do with the flow coasting down.
Once you get to this point, the two-phase
mixture level is in the upper plenum, and the core,
although two-phase, is completely covered. That's why
the collapsed liquid level just sits here constant for
a while.
The little blips in both the data and the
calculation are the depression and recovery of the core
level due to loop seal clearing. Immediately after
that, the liquid level in the upper plenum has receded
into the core, and you begin the boil off part of the
transient.
It is at this point that you have reached
a pressure, such that the accumulators begin to
inject. You recover the core inventory, and the PCT
location would occur about this point in time.
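As a rough sketch of the relation being described, assuming the delta-P cell spans the lower plenum tap to the top of the core and that the static head dominates the friction and acceleration terms, the plotted collapsed level is approximately

```latex
L_{\mathrm{collapsed}} \;\approx\; \frac{\Delta P}{\rho_f \, g}
```

where \Delta P is the measured pressure difference, \rho_f the liquid density, and g the gravitational acceleration; the plotted quantity is this equivalent liquid height rather than a direct sum of void fractions.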
And for the maximum clad temperature, it
is temperature versus time, and again the black curve
is the data, and the red curve is S-RELAP5, and as you
can see, there is a very good prediction of the dryout
time, and also the peak temperature and the recovery.
And there is about a 20 to 25 degree K
overprediction in the S-RELAP5 calculation, but this
is considered to be an excellent comparison.
DR. KRESS: Could I see your previous
curve a minute? Although it doesn't matter to the
temperature, what causes the bouncing around to 4,000
seconds?
MR. HOLM: Well, that is a good question.
At this point, the core is two-phase. The liquid
level is at about this point. So you have a two-phase
level in the core, and remember I said that these are
delta-P measurements, and not level measurements; but
it is the delta-P in the core.
So you are not seeing void fraction
changes, because this looks like a two meter change in
level, which would be catastrophic. But actually it
is an instantaneous delta-P difference between two
computational volumes.
And what you are seeing is a liquid level
crossing a cell boundary, and the thing that we
discussed in the subcommittee about having to
accelerate the liquid, and it gives you a little bump
in the momentum equation.
And so there is an instantaneous pressure
spike associated with the level crossing. So what you
are probably seeing, because you see so many of them,
is the level doing this, going back and forth across
the cell boundary.
But the indication of the collapsed level
is artificial, in the sense that we are not taking two
meters of water in and out. So, in summary, the
proposed Siemens SBLOCA methodology replaces the
combination of ANF-RELAP and the TOODEE2 code with S-
RELAP5, thereby streamlining the analysis.
And that is good for us from the
standpoint of being able to concentrate more
effectively, but it also makes the reviews easier.
And also we have done some work to improve the loop
seal clearing behavior, and that is the biasing
methodology.
And now I did not show this, but Ralph
alluded to it as well, and it is in the topical
report, but the results from the PWR sample problem
and the sensitivity calculations show that this
methodology is both convergent and robust.
The assessment shows that S-RELAP5 is
capable of capturing the important phenomena for
SBLOCA, specifically loop seal clearing, core boil-
off, and recovery, with an acceptable level of
accuracy.
And therefore the proposed methodology, or
the proposed use of S-RELAP5 for an Appendix K SBLOCA
is suitable for licensing. And then I will give the
floor back to Jerry Holm.
MR. HOLM: Since I only have one slide, I
will use this mike if that is okay. The bottom line
from our perspective is that the SER provides Siemens
with the ability to reference the topical report in
future licensing submittals without further NRC
review, and that's why we submit topical reports.
And the draft SER that we have seen has no
additional conditions or restrictions over and above
what we have put on the methodology ourselves inside
the topical report. So we consider that a very
successful review from our perspective.
Our goal in this meeting is to hopefully
come out of here with concurrence from the committee
that the NRC can issue this SER by the end of
February, and that is our presentation, unless you
have questions.
DR. KRESS: Do the members have any
burning questions that they want to ask? I remind the
committee that under Tab 3 there is a
report on the subcommittee meeting, provided by
the cognizant engineer.
And we also have one of our consultant's
reports under PINK3, and then we have a draft letter
that we can look at.
DR. SEARLE: With the consolidation of these
codes and the aspiration of doing BWRs with the same codes
as you do PWRs, does this suggest that these
predictions are going to become once again the realm
of the physics, rather than the realm of the
programmer?
DR. KRESS: I am not sure I know what you
mean.
DR. SEARLE: I am being facetious. I am
very happy to see that it does appear that physics is
beginning to reemerge in the storm-tossed waters of this
whole process.
CHAIRMAN APOSTOLAKIS: Are we done?
DR. KRESS: Yes, I can refer it back to
you.
CHAIRMAN APOSTOLAKIS: Well, it seems that
we are doing great today, right? We are 20 minutes
ahead of time.
DR. KRESS: We have a skillful chairman.
CHAIRMAN APOSTOLAKIS: Wow, I'm impressed.
Yes, sir?
DR. POWERS: Let me point out that I had
passed out to members a section of the research report
which involves a pretty categorical disagreement
between two members in assessing three research
programs.
And one of those members is being
vigorously and heavily lobbied, but has not wavered
one iota in his position. We will need to have an
ACRS position.
We need to look at that and be prepared at
least to interrogate the two people on their
positions, or to establish a run.
CHAIRMAN APOSTOLAKIS: Okay. By the way,
are we going to receive the full research report?
DR. POWERS: Undoubtedly at some time.
Like since I don't have at least three of the inputs,
and I am struggling with at least three others.
CHAIRMAN APOSTOLAKIS: But sometime during
this meeting you mean?
DR. POWERS: I didn't say that.
CHAIRMAN APOSTOLAKIS: That's why I asked
you. If you had said it, I would not have asked.
CHAIRMAN APOSTOLAKIS: Okay. So we will
recess and reconvene at one o'clock.
(Whereupon, the committee recessed at
11:40 a.m.)
. A-F-T-E-R-N-O-O-N S-E-S-S-I-O-N
(1:01 p.m.)
CHAIRMAN APOSTOLAKIS: Okay. This
afternoon the first subject is the proposed American
Nuclear Society Standard on External Events PRA, and we
have three members of the group that developed the
standard here, Bob Budnitz, Ravi Ravindra, and Nilesh
Chokshi, right?
Ravi Ravindra is with EQE, and Dr. Budnitz
is with Dr. Budnitz. We all have the draft, and I
understand that it is not out for public comment yet
is it?
And there will be no transparencies, but
we will have a short introduction by Dr. Budnitz, and
then perhaps we can discuss the standard. So, Bob.
MR. BUDNITZ: I am going to spend what I
think is less than 10 minutes with an introduction,
and what I am going to do, because I think it is the
right thing to do, is to outline for you a half-a-
dozen technical issues that we faced, and say only a
minute or so about them, so that you will know what we
think are the important issues that confronted us.
And there was some stuff that we did that
wasn't or we don't think was controversial, although
you might, and that's fine. But we are going to at
least outline to you what we think is the central
technical challenge that we faced going in and how we
resolved it.
And then you can ask questions, and we
will be happy to discuss with you whatever. And just
to be sure that you understand, there was a five
member writing group, and the others were Tori Ye from
Southern California Edison, and Bill Henries from M-
Yankee.
But in fact the writers are in front of
you. The three of us wrote everything that you see.
Those others didn't write anything, although they were
very important in review. Nilesh had principal
responsibility for the seismic hazard part, and Ravi
Ravindra wrote the seismic fragilities part, the part on
seismic margins, the part on wind.
I wrote all the rest -- the flooding.
Ravi and I together wrote the part on how you screen
other events, and Nilesh was in there working on all
that stuff, too. So it was a three-person effort.
CHAIRMAN APOSTOLAKIS: Does ANS have a
procedure in which --
MR. BUDNITZ: Yes, I am going to say that
next. The procedure is as follows. The American
Nuclear Society has a committee, a risk committee, a
Risk-Informed Standards Consensus Committee, and Paul
Miko chairs it.
It has 24 members on it, and it has been
in existence a couple of years, and that committee in
the ANS is the balloting committee. That committee
appointed us, and we report to them.
The ANS committee recently balloted to
release this standard for public comment. There are
two ballots. There is a ballot to release for public
comment, and then a few months from now, or a few
centuries from now, depending on how it goes, they
will ballot to accept the standard, and then it goes
out.
So it was released for public comment on
January 26th, which was just the other day, for 60 days,
and anybody can get it. We sent it to you in advance.
It was publicly available about the first of the year,
and we sent it out to a lot of people in the first
year, but the comment period started on the 26th.
And that process will run its course as we
get public comments and we respond to them. We
started this process in the summer of '99. It was
about a year-and-a-half, but all of the serious
writing was done between about the 1st of January a
year ago and August.
We pretty much had this thing wrapped up
in that 7 or 8 month period, and from August until we
released it, we held it up for 4 or 5 months because
we were waiting to watch what happened to the ASME
standard, with which we were coordinated, and I will say
something about that next.
As I hope you know, the ASME, American
Society of Mechanical Engineers, has a
committee on which I serve, which has spent three years
trying to put together a standard for PRA methodology
for internal events, accidents that initiate from
transients and LOCAs, and that's what we mean by
internal events.
And that process after 3 years isn't quite
done, although it is converging very rapidly now. I
am on that committee, and we hope that in another
couple of months we will have that done, because we
are now responding to public comments from the draft
that was issued for public comment in August.
We think in another couple of months that
will be done, and we were waiting to release this for
that period because the ASME standard had not settled
down.
There were some very important questions
that were being discussed, which we will come to in a
minute, which we were hostage to, in the sense that we
rely on it, and we waited.
It turned out that that came out in a way
that didn't effect very much of anything that we had
written, and so then we released it for public
comment.
So I am just going to make it real quick
and short so that we have time for discussion. I am
going to talk about the scope. The scope of our thing
is earthquakes, wind, flooding.
And then a section that I am going to call
other external events. They are external to the plant.
Fire is not part of this, unless it is a forest fire.
But aircraft crash, industrial facilities, and so on.
Earthquakes have a separate chapter, and
winds, and flooding, they have separate chapters. And
we have another one on other. Now, the other has two
sections. One is screening, so you can screen
something like hail storms if you are in Arizona.
But if you can't screen it, there is also
a section on how you analyze it if you can't screen
it. And then we also have a separate chapter on
seismic margins, separate from seismic PRA, and I will
describe that in a minute. So that is the scope.
Now, a crucial piece of this is that the
scope also includes important sections that are this
long, just a few lines, because we incorporate the ASME
standard by reference. We say do ASME, and I will
just describe what they are.
For example, for the whole systems analysis
part of PRA, we reference ASME directly. I mean, we
weren't going to rewrite how you do a Bayesian update
of generic data. ASME does that. Although we have
some places where we supplement ASME because we need
to.
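As a minimal sketch of one common form of such a Bayesian update of generic data (a conjugate gamma-Poisson update of a failure rate; the prior parameters and plant evidence below are assumptions for illustration, and this is not quoted from the ASME standard):

```python
# Minimal sketch of one common conjugate form of a Bayesian update of generic
# data: a Gamma(alpha, beta) prior on a failure rate, updated with Poisson
# plant-specific evidence.  Illustrative only, not the ASME standard's text.
def update_failure_rate(alpha_prior, beta_prior, failures, exposure_years):
    """Gamma(alpha, beta) prior updated with `failures` in `exposure_years`."""
    alpha_post = alpha_prior + failures
    beta_post = beta_prior + exposure_years
    return alpha_post, beta_post, alpha_post / beta_post  # posterior mean per year

# Hypothetical generic prior (mean 1e-3 per year) and plant evidence
print(update_failure_rate(0.5, 500.0, 1, 400.0))
```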
Like, for example, in HRA, human
reliability analysis, if you have to do something a
little different for seismic, we tell them that, but
the rest of it we reference directly.
We reference directly the peer review
requirements of the ASME, and how you put together a
peer review team, but we have supplementary guidance
in there, some requirements about peer review. For
example, emphasizing walkdowns, because that is
something important for us that was not quite so
emphasized in ASME.
We reference the documentation section in
ASME directly, but we have supplemental requirements
on documentation, and how you document seismic
margins, or when, or whatever.
And we also reference the application
section, the crucial section on applications in ASME
which tells you how you go about doing application
once you have got a PRA.
Also, in order to make it seamless with
ASME, we use the same format. The ASME has high level
requirements in a broad area, and then what we call
supporting requirements, which are the ones that you
have really got to meet, that are below that and we
did the same thing.
The idea was so that a person who is using
them together would see the same sort of thing. But
crucially we don't have three columns of capability,
and I will come to that in a minute.
We don't have that. We have one, which
was intended to be what ASME's column 2 was in the
first round before they got to three, which is what we
will call a good quality, state-of-the-art PRA today,
and that's what we have. I will talk about that in a
minute.
And also crucially, for almost every
requirement, you will see that we wrote a commentary.
Sometimes short, and sometimes long. Sometimes longer
than the requirement. ASME doesn't have any of that.
We think that is a valuable addition, and you can quiz
us about that if you want.
I mentioned already peer review, and I
will come back to the three columns in a minute. We
wrote a special set of requirements on peer review,
and Ravi actually wrote them, emphasizing the need for
walk-downs to make sure that for external events that
you captured the plant, because plant specificity is
crucial for these events which damage things in the
plant.
PSHA. As you have observed, I'm sure, we
explicitly endorse the Livermore and EPRI hazard
studies of the late '80s by saying that if you did one
of those, you met the standard for probabilistic
seismic hazard analysis.
But what we mean is that if you met the standard
with a 1988 study, you still have to do an update to make sure
that no earthquake information has come along since
then that would invalidate what they did.
But we explicitly endorse that, but we
also have a whole lot of requirements, which if you
are doing it over, or if you didn't do that or
whatever, that you have to meet.
And those requirements are pretty much
tailored to the well-known -- and I will say this
because I was an author, as was our chairman, George
Apostolakis, of the well-known SSHAC process, the
senior seismic hazard analysis committee that I served
on and George did. I chaired it a few years ago.
And which has a process for seismic hazard
analysis, and we explicitly have based our thing on
that. If you are doing it over, or if you hadn't done
it in IPEEE. I am just going on with issues, and then
you can talk later.
Fragilities. Ravi wrote this piece. The
seismic fragilities for PRA are the well-known
standard methods that have been used for some years
with the fragility curves.
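As a minimal sketch of the standard lognormal fragility form being referenced (notation assumed here, not quoted from the draft standard), the conditional probability of failure at ground motion level a is usually written as

```latex
P_f(a) \;=\; \Phi\!\left(\frac{\ln(a/A_m)}{\beta_C}\right), \qquad \beta_C = \sqrt{\beta_R^2 + \beta_U^2}
```

where A_m is the median capacity, \beta_R and \beta_U are the logarithmic standard deviations for randomness and uncertainty, and \Phi is the standard normal cumulative distribution.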
But for seismic margins, we explicitly
reference the CDFM -- and if you do it, you're okay --
as it was used in seismic margins, and if you do
that, that's the acceptable method.
Uncertainties. This is a crucial point.
We explicitly incorporate treatment of uncertainties
in the standard, in the requirements, in the things
that you have to do. You can't meet this standard if
you have not considered uncertainties.
The reason that I am saying that is
because if you don't know, I will tell you, but the
word uncertainty appears almost nowhere in the ASME
standard for internal events.
I am on that committee, and have been for
3 years, and I am unhappy with that, but that's the
way it is. I am a minority there. We have done that.
We don't see how you can do an external events PRA
without that.
One last topic, and then I will turn it
over to you, and you can ask questions, or if my
colleagues want to add something that I went by in
this introduction, they will tell me.
Seismic margins. As you know, more or
less half of the nuclear plants in the United States
did a seismic margin review rather than a seismic PRA when
they were satisfying the IPEEE around 6 or 8, or 10
years ago.
And if they did a seismic margin review
well, they ought to be able to meet the standard that
we wrote for seismic margin. And the reason that we
did that is because if you have done that rather than
the other, and you meet it, we want to give them the
benefit of that.
Because there are some applications for
which seismic margin is very well tuned, and they
ought to be able to say we met it and we can do those
applications.
By the way, there are many applications
where seismic margins is not well tuned, and I can
mention those if you want, or I can list them here, or you
can ask me, and for those, with those limitations, you can't
do it.
But at least if you did that, then you
ought to get the benefit of that. So we have written
a whole separate section of the standard outlining the
method of seismic margins, and these are IPEEE
margins.
And it is well known that those that did
it, mostly we think did quite well there, and they
ought to be able to meet the standard, and then there
are some applications that they can do. The crucial
limitations of seismic margins are as follows.
Well, I will just say what it is good for,
for sure. If I have got some SSC that is very, very
stout against earthquakes, and the seismic margin
review has revealed that through the analysis, why
that information is just as valid as if you did a
seismic PRA.
And it turns out that there are a lot of
applications like that, and that is very valuable, and
they can do that. On the other hand, if in your
application you have got some accident sequence in
which a seismic failure combines with a human error,
or with some non-seismic unavailability or something,
seismic margins doesn't do that very well at all.
It doesn't capture it well, and the
analysis isn't structured to do that, and it wasn't
intended to. And certainly seismic margins can't
produce for you a core damage frequency, at least not
as structured.
So those sorts of applications are
unavailable to some plant that has only a seismic
margin review. However, it is our opinion, and when
we issued this now just for comment, it is our opinion
that nevertheless those that have a good one that can
meet it ought to be able to get the benefits of what
they can do, and that is our motivation.
I just have one other thing to say about
that, and then I am done. For at least a decade, the
community of people that play around in this sandbox,
of which the three of us are among them, have
struggled with what we could do to provide additional
guidance, so that someone with a seismic margin
review could get more out of it.
And proposals have been around for a
long time, including a very thoughtful proposal that
Ravindra came up with, together with Bob Murray, about 10 years
ago.
Your own staffer, Rich Cherry, ACRS staff,
five years ago wrote a very useful paper about that.
A couple of years ago, Bob Kennedy wrote another one,
and we finally decided that it was time to do
something to explore how to do better.
And this is relevant to this directly, but
about a year ago Ravi and I went to the NRC and ANS --
and Nilesh has got a conflict with this hat on, but
NRC gave a grant to ANS to fund Ravi and me to do that
study.
And Ravi and I have just within the last
few weeks completed a study and published it, and I
have it in front of me -- but you don't have it yet,
although I will send it to you -- whose scope is the
following.
We have explored how you could take a good
quality seismic margin review, and extract more from
it than you would think. For example, more risk type
information, or more CDF type information, by doing so in
some cases directly, or by doing some additional work.
And we have written a paper that explores
the things that you can do with it, and what the
limitations are. And that has been published, and
although it was reviewed by a few people, we are going
to send it around the community of people, people like
you, but also the seismic margin side of the community
and get some feedback from them.
Because if what we propose makes sense to
them, then sometime in the next 6, 8, or 12 months, we
are going to propose an amendment to this standard, a
supplement, and in which requirements for that will be
in there.
So that if you can do that, then you can
use the seismic margin review that you have to get
more out of it than you now can. And in order to push
that along, our final report, which I have in front of
me, actually has in it proposed draft standard
language, just like in the requirements that you see.
And it is Ravi's and my cut as to what the
requirements would be, and that if you did them, then
you could get more out of it, but there are some
limitations.
And I will just end with that, and ask my
colleagues, Chokshi and Ravindra, whether they want to
add something that perhaps I went over too quickly,
and then you get to ask us. Anything that I didn't
cover?
Oh, wait, I have left something out. Very
important. When we had what we thought was a
satisfactory draft, about six months ago, we sent it
around to 6 or 8 of our peers; peers meaning they
could have been people like us and been on the
committee, but the committee was just small.
And we got review comments back from most
of them, which were very helpful for us, in terms that
we made some changes. But this was an informal
review.
And because mostly what we got back was,
yeah, you are on the right track, we have a lot of
confidence that what we are doing here is congruent
with the larger community.
Crucially, we got comments back from Bob
Kennedy, from Leon Reiter, from Alan Cornell, from Bob
Murray. EPRI actually funded Greg Hardy to do some
work, and Bob Kassawara participated in that.
So before the final, we have those
comments and feedback, and so we have a feeling, like
I said, that that's okay. And then also crucially, John
Stevenson, another important member of this community,
was originally going to be a member of the writing
team.
And then just as we were starting, he
dropped off, but has remained sort of an associate
member right along. We have sent him everything, and
he has made comments. His stuff is here, too. Okay.
I just wanted to be sure that I put that in.
And then finally, this work was partially
supported by the Nuclear Regulatory Commission, which,
besides that special project that Ravi and I did,
gave a grant to ANS to support this, and the
support, by the way, paid for staff and so on.
But it also paid for things like our
travel and some administrative expenses. So I want to
recognize that Nuclear Regulatory Commission support
for this effort.
CHAIRMAN APOSTOLAKIS: Thank you, Bob.
MR. LEITCH: My first question I think
relates to exactly the issue that you were talking
about, and perhaps it is my lack of understanding
concerning exactly what a seismic margin assessment
is.
I am referring to page 5, the third
paragraph. I guess these are all paginated the same
way.
MR. BUDNITZ: I hope so.
CHAIRMAN APOSTOLAKIS: Maybe you can use
the page numbers at the top.
MR. LEITCH: Yes, that is page 5.
MR. BUDNITZ: By the way, if you could
also point to the section, like 1.3.2, it will help
us.
CHAIRMAN APOSTOLAKIS: Page 5.
MR. LEITCH: It is the paragraph that
immediately precedes 1.3.3, and it says throughout the
standard the phrase, PRA, is used in a generic sense.
And then the intent is to include SMA
methods, as well as PRA methods within the scope of
the phrase, PRA. So when we see PRA in this document
then, as I understand it, it may mean either what we
normally understand by PRA, or it may mean SMA.
And I'm just not sure that I understand
the distinction between those two. Could you help me
with that a little?
MR. BUDNITZ: Sure. Well, what I meant by
that paragraph, which I wrote, or I suppose I'll say
we wrote, is that, for example, in 1.4 to 1.10, when
you are talking about peer review or that sort of
stuff, we just didn't say PRA or SMA everywhere.
But the sections that are explicitly SMA
differ. You pretty much have to sort it out. Are you
unclear about what an SMA is and what it does?
MR. LEITCH: Yes.
MR. BUDNITZ: There was an expert panel
that I chaired in 1984, '85, and '86, although these
fellows were involved, which invented that method.
The purpose of a seismic margin review is to evaluate
the plant, and ascertain its seismic capacity, defined
in terms of the way we define fragilities; the
fragility curve or fragilities.
But if you have, say, four components,
then you have to combine them in the right way, for
example, and they might even have various other
things.
So the purpose of a seismic margin review
is that you go to your plant and you evaluate its
capacity. You don't pay any attention to the hazard.
Now, the way it was structured was a little different
than that.
You pick what we call a review level
earthquake. In the east, we suggested it, and almost
everybody picked .3G. The idea was that it has to be
higher by a factor than your design basis, which is
.1 or .15.
But by the way, if you are in Arizona, in
Palo Verde, with a .25G design, you pick .5 as your
review level, in other words. And you review the plant to
the review level earthquake, and there is guidance in
there that tells you how you can screen out
a whole lot of SSCs that clearly are stronger
than that, and then you have to evaluate the ones that
aren't.
So, for example, you might end up in
typical power station with only 2 or 3 dozen
components, SSCs, for which you have to actually do an
evaluation.
And for those, it goes further. In the
seismic margin method that everybody
uses, you don't work out the full seismic fragility
curve. You develop what we invented and called the
HCLPF capacity, the high confidence of low probability of
failure capacity, which
literally on a full fragility curve is the capacity at
which there is a 95 percent confidence that you have
less than a 5 percent probability of failure.
But really it was intended to be the
capacity at which -- we have a very high confidence
that this thing wouldn't fail, because you don't
really believe the tails of these lognormals all that
well.
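As a sketch of how that verbal definition is usually translated into a formula for a lognormal fragility (notation assumed, not quoted from the standard), setting the 95 percent confidence curve equal to a 5 percent failure probability gives

```latex
\mathrm{HCLPF} \;=\; A_m \, e^{-1.645\,(\beta_R + \beta_U)} \;\approx\; A_m \, e^{-2.33\,\beta_C}
```

with A_m the median capacity and \beta_R, \beta_U the randomness and uncertainty logarithmic standard deviations; the second form is an approximation that holds when \beta_R and \beta_U are about equal.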
And the notion was -- and there is a
method called the CDFM method, the conservative
deterministic failure margin method, or else you can
get it from your fragility curves, for working out the
HCLPF capacity of a pump, or a valve, or a wall, or
a large tank.
And what the seismic margin review did
was work out the HCLPF capacities of
every one of those SSCs that wasn't screened out, and
then it combined them to work out the HCLPF capacity
of the station by choosing two success paths -- they
are supposed to be different if they can be -- that
the operators would use.
Say you need this component, and you need
this component, and you need this component, and you
need that system, and you need this thing.
And then we will imagine that they are called success
paths A and B.
And you would work out the HCLPF capacity
of success path A, and the HCLPF capacity of
success path B. And then the stronger of those was
the HCLPF capacity of the plant, because if you used
that success path, you had high confidence that with
that HCLPF capacity you could shut down, and
that's how it is done.
That is a real tour. There is a little
more to it, but that's a quick tour.
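(Editor's note: a minimal, hypothetical Python sketch of the success-path
logic Dr. Budnitz describes above. The component names and HCLPF values are
invented for illustration; the rule shown -- a path is as strong as its
weakest component, and the plant-level HCLPF reported by an SMA is the
stronger of the two success paths -- is as stated in the testimony.)

    # Hypothetical component HCLPF capacities, in g.
    success_path_A = {"pump": 0.45, "valve": 0.30, "control building": 0.50}
    success_path_B = {"turbine-driven pump": 0.35, "storage tank": 0.25}

    def path_hclpf(components):
        # Every component on a success path is needed, so the path is only
        # as strong as its weakest member.
        return min(components.values())

    # The SMA reports the stronger of the two success paths as the plant HCLPF.
    plant_hclpf = max(path_hclpf(success_path_A), path_hclpf(success_path_B))
    print(plant_hclpf)  # 0.30 g for these made-up numbers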
DR. KRESS: Somehow on page 18, your
discussion on the definition of seismic margin doesn't
reflect very much of what you just said, but still it
is not a very satisfactory definition.
MR. BUDNITZ: That's fair.
DR. SHACK: There is nothing about margin
in it.
MR. BUDNITZ: Probably what we should have
done was just said go to the SMA literature.
DR. KRESS: Or something maybe.
MR. RAVINDRA: Initially, we were thinking
of writing an appendix, like what we have for the
seismic PRA, and we were thinking of writing an
appendix that describes the salient features of the
SMA method.
And we were kind of debating whether such
an appendix would help the user or not.
DR. KRESS: Well, would that show up in
your proposed addendum to the thing?
MR. BUDNITZ: No, but we could do that.
There is a lot out there that we could probably pull
together. We ended up not doing it. There are 5 or
6 reports, which taken together, tell you everything
you want to know about it.
CHAIRMAN APOSTOLAKIS: There is a danger
here of jumping back and forth. Why don't we start
with questions on Chapter 1, called the introduction.
Do the members have any questions? That is the first
10 pages. Which numbering scheme are we following?
Let's go with the printed numbers at the --
MR. BUDNITZ: Well, we can do it by
section. If you go to Section 1.9 --
CHAIRMAN APOSTOLAKIS: Well, let's do it
by chapter, the first 10 pages. Any questions? Well,
I have some. On page 5 and 6, Section 1.3.3, there is
a very interesting discussion of LRF, and maybe it is
just an opportunity to comment on it.
You state that some accident sequences
that are seismically initiated, which do not
contribute to LRF in the internal event analysis, in
fact now contribute to LRF because you assume there is
no evacuation, right?
MR. BUDNITZ: Yes.
CHAIRMAN APOSTOLAKIS: And the evacuation
was the criterion for whether something is early or
not.
MR. BUDNITZ: Or it is impaired or
something.
CHAIRMAN APOSTOLAKIS: And you assume
there is no evacuation at all?
MR. BUDNITZ: No, we told the analyst to
figure out what they assume.
CHAIRMAN APOSTOLAKIS: To do what?
MR. BUDNITZ: To work it out. If
evacuation is impaired, then you do account for that.
We don't say assume no evacuation.
CHAIRMAN APOSTOLAKIS: But that is a
judgment call isn't it?
MR. BUDNITZ: Absolutely, but that is the
analyst's job.
CHAIRMAN APOSTOLAKIS: They are not going
to do any analysis for that and see what buildings
collapse, and bridges fall down. They will just make
a judgment, that the evacuation is really not
effective against LRF, right?
DR. POWERS: George, I think you could
-- you know, your consequence code, typically you
could have quite a range of inputs to the code.
CHAIRMAN APOSTOLAKIS: But they don't do
that though I don't think. It is just a judgment on
the part of the analyst. You don't run any codes to
see the impact of evacuation, right? It is just a
judgment on your part.
MR. BUDNITZ: Well, just to describe what
we said. If this isn't clear, we have to clarify it.
CHAIRMAN APOSTOLAKIS: No, it is clear.
I just wanted to --
MR. BUDNITZ: To meet the standard, the
analyst is not to take the LRF from the internal
events and swallow it whole. He may observe that in
fact protective actions are impeded, and perhaps not
as effective, or whatever, for a big tornado, let's
say.
And the analyst is explicitly told that
the analyst shall account for that before he
categorizes a sequence for LRF or not, depending
on the event. Nothing more, but nothing less.
CHAIRMAN APOSTOLAKIS: Very good.
MR. RAVINDRA: I think that after the core
damage, if the containment systems don't function,
then you have a question of release; am I right?
MR. BUDNITZ: Sometimes.
MR. RAVINDRA: So because of that, in the
seismic event, the containment systems are not any
stronger than the actual core itself. Therefore, the
assumption that the containment systems would remain
after the core damage has occurred may be not really
true for seismic events.
MR. BUDNITZ: So that is the reason that
you have to look at that.
MR. RAVINDRA: Yes, that you have to look
at that.
MR. BUDNITZ: Sure.
DR. KRESS: That is accounted for in the
PRA.
VICE CHAIRMAN BONACA: Yes, the PRA
accounts for that.
CHAIRMAN APOSTOLAKIS: That is just a
definition of LRF. Look, there was no question. It's
just that I like it.
DR. KRESS: We are glad that you pointed
that out actually.
CHAIRMAN APOSTOLAKIS: On page 7, you
state what you also told us, that this category was
consistent with the spirit of category 2 of the ASME
standard. And I am wondering why the forces that
forced ASME to have a category one did not apply to
you?
MR. RAVINDRA: Currently, they are at
work.
CHAIRMAN APOSTOLAKIS: They are at work
now?
MR. BUDNITZ: Yes.
DR. KRESS: They are working on it.
CHAIRMAN APOSTOLAKIS: Are they doing any
work?
DR. SHACK: You have a seismic margins
analysis, which to a certain extent is a category one.
MR. BUDNITZ: There is actually a
technical thing here. We don't believe, the three of
us, that we know how to write a capability category
three just to start for seismic, or for tornados, or
for the external hazards.
And the reason is because there are
certain technical things that you have to do that we
don't do, and we don't know how to do. A category
three --
CHAIRMAN APOSTOLAKIS: One is the
question, not three.
MR. BUDNITZ: How about three? So we
don't know how to write three, okay? We don't know,
for example, how to capture in a way that you would
actually like to capture the correlations. We don't
know how to do HRA well for post-seismic or post-
tornado human actions.
Those are major things that are pointed
out there. There are a whole lot of things. We don't
know how to do three. If somebody came in and said
they put down a category and claimed it was a category
three, I would have real trouble trying to agree with
that unless we had done something that no one has ever
done.
And by the way the notion of category
three in ASME was somebody at least had to have done
it sometime so we would have an example. It can't be
something that is a dream.
Now, category one is different, capability
category one. The idea of capability category one is
that you have compromised your PRA compared to
the middle column, which is the -- by one of several
things.
For example, you might use generic rather
than plant specific things. You might be conservative
rather than realistic. I don't mean conservative for
screening, but actually conservative in your analysis.
Or you might not have much level of detail.
VICE CHAIRMAN BONACA: But you are
allowing this in the standard, too.
MR. BUDNITZ: Right. So we could have
candidly written another category for that, but
instead we have allowed it here. If you do less, why,
there is some stuff that you can't do. But we didn't
write a whole column for it. We could have, but we
didn't.
VICE CHAIRMAN BONACA: I would just like
to express a personal opinion. I was very impressed
by this standard, for the simple reason that it looked
like a standard, and the ASME standard, I don't feel
like it looks like one.
And yet you are allowing the kind of
latitude in the approach in the context of the
standard, which should be there. But I felt that it
was well developed, and by the way, the commentary
that you have is a key issue, because it allows you to
pull out all the discussion that I saw in the ASME
rev. 10, which I think was somewhat impeding the
document.
And here it is separated that way, but I
think that one of the great strengths of this thing is
that you don't have category one, two, or three.
DR. KRESS: With respect to your one
category, in your Chapter 6, you strictly endorse the
ASME, which is Chapter 3 in the ASME.
MR. BUDNITZ: We don't endorse it. We
incorporate it by reference.
DR. KRESS: Incorporate it by reference.
I'm sorry.
MR. BUDNITZ: But that is an important way
to say it.
DR. KRESS: That is. But my question is
that chapter in the ASME document is cast in terms of
the three categories. Is there any conflict between
yours and that, or can one just assume that you are
looking at the category two parts?
MR. BUDNITZ: I'm glad you asked that.
DR. KRESS: Okay.
MR. BUDNITZ: I am sure that after the
ASME process has run its course, but before we
finalize this finally, because that is probably a few
months away, we are going to have to rewrite some of
this to clarify that.
But the problem is that the ASME process
-- I'm part of it and I'm on the committee -- is still
in the works, and so we are trying to get this thing
out and gone, and go through the process and get the
technical stuff right.
And later on we are going to have to
clarify exactly what by reference means, because the
thing that we referred to when we wrote this -- that
Chapter 3 is different. It changed last week, and we
have another meeting in two weeks and it is going to
change again. And sometimes the change is
significant.
DR. KRESS: I had that problem trying to
relate the two, because I don't have the latest one
either.
MR. BUDNITZ: If you think about the
middle category as a good quality PRA, that's --
CHAIRMAN APOSTOLAKIS: But maybe what Dr.
Shack said is really the reason why you don't have
pressure. You have a screening, you have the SMA.
The main idea behind category one was to use a
quick calculation to rank systems, structures, and
components, and to do certain things very quickly for
which those people felt that you didn't need a
detailed uncertainty analysis.
MR. BUDNITZ: Actually, I don't agree with
you. That is only one of the motivations behind the
capability of one in ASME. Unfortunately -- and I say
this now as a member of that team -- some people think
they can do more with it than that, and that is
partially true.
But I don't think there is an awful lot
more -- some people think there is a lot more that
they can do than I can think of.
CHAIRMAN APOSTOLAKIS: Well, that is the
essence of it. The idea was to be able to screen
quickly and identify contributors --
MR. BUDNITZ: Well, some people think the
idea of it was more than that. Some people think that
you can do everything you can do in PRA with that
thing.
CHAIRMAN APOSTOLAKIS: No, no, no. That's
not reality.
MR. BUDNITZ: I am just exaggerating.
Nobody really thinks that.
CHAIRMAN APOSTOLAKIS: Ravi.
MR. RAVINDRA: Now, there is the chapter on
external event screening. If the analyst wants to use
it, he can treat that as any other external event, and
based on the frequency of the event itself he
can screen it out, or based on some bounding
calculations he can screen it out.
Or he can use the seismic margin approach
to go a little further and screen out individual
components and systems. So this is a continuous
screening process. So the method is already there;
it just needs to be reformatted into category one or
category two.
We were also waiting for the ASME to
complete its work, and for the dust to settle down,
and only then can we do it.
CHAIRMAN APOSTOLAKIS: Okay. On page 7,
you say that the ASME, the bottom paragraph -- well,
first of all, you say that a well executed SMA
represents a good fit to many of the applications
contemplated for ASME category one.
MR. BUDNITZ: Yes.
CHAIRMAN APOSTOLAKIS: And then you go on
and say especially insofar as an SMA generally is well
suited to the categorization of SSCs according to
their size and capacity, and to the screening of SSCs
according to their safety significance.
This refers to what you said earlier, Bob,
with the two parts and so on?
MR. BUDNITZ: Yes.
CHAIRMAN APOSTOLAKIS: Okay.
MR. BUDNITZ: But there are limitations.
CHAIRMAN APOSTOLAKIS: But the
categorization, you see, that word means something
about internal event PRA. You don't mean
categorization according to some importance measure
and so on do you?
MR. BUDNITZ: I think it says
categorization of SSCs according to their capacity.
CHAIRMAN APOSTOLAKIS: And what does that
mean?
MR. BUDNITZ: HCLPF above .3G.
CHAIRMAN APOSTOLAKIS: Okay. That kind of
thing.
MR. BUDNITZ: The sort of things that SMA
does for you.
DR. SHACK: Just coming back in my
absolutely simple-minded view of these things, as I
don't know anything about it, it seemed to me that in
a seismic margin analysis, you have a simple amount of
-- I mean, you have identified two of the ways that
you can succeed. So you have sort of set a bound on
things.
MR. BUDNITZ: Correct.
DR. SHACK: And so to that extent, you
actually have some PRA like information, but what you
don't have is a complete set of event trees. But you
have picked your two success paths, and so to that
extent you do have some bounding information.
MR. BUDNITZ: Correct.
DR. SHACK: For example, suppose you
stupidly forgot about the strongest success path,
where you had a whole bunch of SSCs that were
extremely stout against earthquakes, but had no human
intervention, and it was all automatic, and you knew
it and so on.
You might completely misunderstand your
seismic capacity. You might think it is smaller, when
it is really very strong.
MR. BUDNITZ: Correct.
CHAIRMAN APOSTOLAKIS: Anything else on
chapter one?
(No audible response.)
CHAIRMAN APOSTOLAKIS: Okay. Chapter 2 is
definitions. Any comments?
MR. WALLIS: Well, the seismic margin one
didn't help me, and then when I looked at Chapter 3.5,
you launch into methodology without saying what it is
that you are doing.
In seismic fragility, there is a very nice
definition of seismic fragility.
MR. BUDNITZ: Point taken.
MR. WALLIS: It is not there for seismic
margin.
MR. BUDNITZ: Perhaps Ravi and Nilesh, we
have to go back and rewrite that appendix on seismic
margin that we wrote for --
VICE CHAIRMAN BONACA: On the positive
side, I didn't see any glaring error or mistake.
CHAIRMAN APOSTOLAKIS: Oh, you mean on
chapter two?
VICE CHAIRMAN BONACA: Yes.
CHAIRMAN APOSTOLAKIS: I have a couple of
comments on chapter two. Anybody else wants to go
ahead of me? The definitions?
DR. KRESS: The definition of core damage.
MR. BUDNITZ: Oh, wait. We took that
straight from ASME. All the systems stuff, and just
so you understand, but we were constrained and so we
decided to make it seamless with the ASME standard
that we are going to use, and if their definition is
changing, and it has changed a little bit, we are
going to incorporate it.
Perhaps I need to say that up front. The
idea was --
DR. KRESS: That might be helpful.
MR. BUDNITZ: We are doing a low power
shutdown standard by the way, too. If all the
definitions aren't the same, you sure won't be able to
use them together for applications. Let me make a
point to be sure to say that.
CHAIRMAN APOSTOLAKIS: On the same page,
page 13, the discussion of epistemic uncertainty
focuses on model uncertainty, but part of epistemic is
parameter uncertainty as well.
And maybe if you can make that a little
bit clearer, because the definition seems to focus
almost exclusively on the modeling assumptions.
MR. BUDNITZ: I don't see that.
CHAIRMAN APOSTOLAKIS: I see it.
MR. BUDNITZ: Okay. Thank you.
CHAIRMAN APOSTOLAKIS: Anything from the
members on definitions? That's it. Now, Chapter 3 is
very big. So maybe we can break it up. Maybe go and
include 3.1 and 3.2, and 3.3, first; is that
reasonable? Technical requirements, general. Now,
which pages are these?
It starts from page 20. Does anybody have
any comments on that?
(No audible response.)
CHAIRMAN APOSTOLAKIS: No? Okay. 3.4,
technical requirements. Yes, let's do the whole 3.4.
MR. BUDNITZ: Oh, by the way, I want to
make a comment about context here that I think will
help you. Having just spent 3 years with the ASME
team, one of the complexities that the ASME team faced
and faces in writing its standard is that there are a
hundred plants out there that have had a PRA, and
because of twin units, there are 60 or 70 PRAs.
For many of the sections of the internal events
PRA, the plants use different methods, very different HRA
methods, very different methods for success criteria,
and very different methods for this and that.
And to try and write a standard that
captures and enables those with good quality to still
meet it turned out to be a difficult trick. And it
was a big struggle.
It is important for you to understand that
it is our opinion here as writers of this that there
is far less variability in the way the seismic PRAs
that were done were accomplished.
They are mostly similar; the fragilities
part, the hazard part. So it was simpler for us. We
didn't have to struggle in very many places trying to
write a requirement where we knew someone had done
quality work in different ways. That was good
luck for us.
CHAIRMAN APOSTOLAKIS: Let's go to page
24. I mean, it may be an unfair question, but I think
we have to see if we can resolve it. The seismic
hazard analysis high level requirement, (a), says that
the frequency of earthquakes shall be based on a site
specific PSHA that reflects the composite distribution
of the informed technical community.
Now I know where that comes from, but
somebody who takes this and is innocent in the ways of
life, how does he make sure that this is a composite
distribution of the informed technical community? I
mean, are you imposing an impossible requirement?
MR. CHOKSHI: I think if you go, in terms
of how you meet the requirement, it basically says the
SSHAC approach is one --
CHAIRMAN APOSTOLAKIS: But the whole thing
rests on experts you choose, right?
MR. CHOKSHI: Yes.
CHAIRMAN APOSTOLAKIS: And how you define
the community.
MR. CHOKSHI: Yes, but the chart lays out
the selection of experts, and the process of how you
go about doing that.
CHAIRMAN APOSTOLAKIS: So that is really
the intent?
MR. CHOKSHI: Yes.
MR. BUDNITZ: And in fact if you turn to
the detail a few pages later, it goes to that
directly.
CHAIRMAN APOSTOLAKIS: And the same thing
on the same page, page 24, there are words like
credible and I wonder. I mean, that becomes clearer
--
DR. SHACK: The big difference here is
their commentary gets a lot of that in, and the high
level requirements become very concrete --
CHAIRMAN APOSTOLAKIS: When you go to
their comments.
DR. SHACK: Yes, and their commentary is
a very strong suggestion, like do it.
MR. CHOKSHI: We were struggling how to
get some of these ideas across and the commentary was
a good vehicle to do it.
VICE CHAIRMAN BONACA: It is a good
commentary, and I agree, but it gives you the written
path, which is the standard way to do it.
DR. SHACK: It gave you guidance.
VICE CHAIRMAN BONACA: It doesn't say you
can't do it otherwise.
CHAIRMAN APOSTOLAKIS: I am a little
confused though. If you go to page 26, where the
commentary says existing LLNL and EPRI hazard studies
and many hazard studies conducted for plant PSHAs also
meet this overall requirement.
Now, are these two studies, studies that
differ by a factor of 10?
MR. BUDNITZ: It is Livermore '93.
CHAIRMAN APOSTOLAKIS: So it is the
updated Livermore?
MR. BUDNITZ: Yes.
CHAIRMAN APOSTOLAKIS: So they don't
differ that much anymore.
MR. BUDNITZ: Except for details.
CHAIRMAN APOSTOLAKIS: All right. On page
31, the last note at the bottom of the page, HA-D3,
somewhere in the middle it says that the
characterization of ground motion includes an
epistemic uncertainty in the ground motion model.
Have people done that? Have people
developed an epistemic uncertainty in the ground
motion model?
MR. RAVINDRA: For each ground motion --
well, there are many ground motion models, and so the
collection of that represents the distribution.
CHAIRMAN APOSTOLAKIS: But who developed
the distribution? I mean, I can have five models with
uncertainties, and as you know, people have been
arguing about a particular model from Southern
California and so on.
But if I pick one, how do I develop the
epistemic uncertainty in the model itself given that
I have the other six models floating around? Is there
a methodology that tells me how to do that, or do I
have to --
MR. BUDNITZ: I understand your point,
George. Suppose the word said, "an epistemic
uncertainty amongst the several ground motion models."
Suppose it said that.
CHAIRMAN APOSTOLAKIS: Yes, we need a
better word. I am not sure that is the best, because
that is related to another question that I have
regarding sensitivity studies.
MR. BUDNITZ: But that is not a small
point. In fact --
CHAIRMAN APOSTOLAKIS: It is not a small
point, no.
MR. BUDNITZ: In fact, if you go with --
let's say you go with Dave Boore's model.
CHAIRMAN APOSTOLAKIS: Yes, good fellow.
MR. BUDNITZ: Then if you are ignorant
that Abramson has done a different model, then you may
not capture this model epistemic uncertainty.
CHAIRMAN APOSTOLAKIS: Exactly. But if I
am aware though that Abramson has another model, I
still don't know how to meet the standard. You know,
how do I develop my epistemic uncertainty now, and I
think that is something that needs elaboration,
because I don't think we should ask the user of the
standard to do research.
By the way, I am focusing on things that
I thought required discussion. I think this is a very
good standard.
MR. BUDNITZ: Well, George, let's go on
then. Just keep reading, because --
MR. RAVINDRA: In terms of the person
writing the commentary, I think the civil engineering
professionals were the first ones that came up with that
concept. Most of the civil engineering standards and
building codes come with commentary so that the user
knows the basis, and not just the requirements.
CHAIRMAN APOSTOLAKIS: Wonderful. It is
about time we learned something from you guys.
MR. BUDNITZ: George, let's keep going.
The next sentence --
CHAIRMAN APOSTOLAKIS: I read the next
sentence.
MR. BUDNITZ: But it says that SSHAC gives
guidance on an acceptable process to be used for
determination of -- and in fact you and I were
authors. Isn't that guidance really enough?
CHAIRMAN APOSTOLAKIS: It is not. I think
you need to soften a little bit what you are saying
here, and find a way around it.
Now, on page 33 -- and this is something
that is not unique to the standard, but something that
bothers me in general, but look at the requirement HA-
F2, which I think is a reasonable thing to say, but I
will voice my concern.
The PSHA shall include the appropriate
sensitivity studies, and then you have a commentary,
which is fine. It says examples of useful sensitivity
studies include an evaluation of alternate schemes
used to assign weights to experts, and so on, and so
on.
My problem with sensitivity studies is
that I don't know what to do with them. What if some
combination of these things shows a core damage
frequency that is way out of this world, or it is
above the goal? Now what do I do?
I did the sensitivity study, and I am
above the goal, and everything else that I have done
shows that I am below the goal. What good is it? I
mean, shouldn't we, being Bayesians, assign
probabilities to all of these things, and include
them? I mean, Bob, I don't know what to do with that.
MR. BUDNITZ: Read the sentence.
CHAIRMAN APOSTOLAKIS: I read the
sentence.
MR. BUDNITZ: It tells you why. The PSHA
shall include appropriate sensitivity studies and
intermediate results. Why? To identify factors that
are important to the site hazard, and that make the
analysis traceable and reviewable.
Now, here is the point. If you do a
sensitivity study and find out that Factor 44 is not
important, then you have learned something. If you do
a sensitivity study and find out that Factor 44 is
important and you didn't include it, you have actually
erred. So that is how it is used.
CHAIRMAN APOSTOLAKIS: No, that is not the
way that I read it. If I assign different weights to
individual expert models, or an evaluation of the way
different experts make different assignments, and I
find a result that is an order of magnitude greater
than what I believe is a realistic estimate, I don't
know what to do with it.
What do I do? Do I report it to the
regulator, for example? And what is the regulator
going to do? Because you give a realistic
distribution and they say, well, if I do this game
here, I am a factor of 10 higher.
MR. CHOKSHI: I think within the
sensitivity studies you still have to be realistic.
You still have to use realistic assumptions and
values.
CHAIRMAN APOSTOLAKIS: I think this is a
relic of traditional engineering, where they were not
doing uncertainty analysis, and let's play with the
variables a little bit to see what happens.
When you do a rigorous uncertainty
analysis the way you guys demand it, I think you have
to be very careful with what kinds of sensitivity
studies you are asking.
I mean, I can see saying, you know, maybe
the distribution has a higher this or that, but it has
to be constrained. Otherwise, I can see it getting
out of hand, and that is not the intent for sure.
MR. BUDNITZ: Look at the note.
Sensitivity studies in the intermediate results
provide important information to reviewers. And by
the way, you might also say the analysts, of course.
CHAIRMAN APOSTOLAKIS: Yes.
MR. BUDNITZ: About how some of the key
assumptions affect the final results of this complex
process.
CHAIRMAN APOSTOLAKIS: Right.
MR. BUDNITZ: It is no more, but it is no
less.
CHAIRMAN APOSTOLAKIS: Bob, let's say I do
find that I have two key assumptions, and then I
change things. I assume something else and the thing
jumps up. Am I under a requirement here that says,
no, you are not going to play that game.
If you want this factor to become six, you
also have to tell me what is the probability that it
will become six. That's where I am going. Otherwise,
I don't know what to do with it.
MR. BUDNITZ: George, this is a deep
intellectual challenge. Let me give you an example,
all right?
CHAIRMAN APOSTOLAKIS: All you have to do
is say do I look --
MR. BUDNITZ: No, let me just give you an
example straight from seismic hazard. Suppose there
were five of us at the table, and --
CHAIRMAN APOSTOLAKIS: When in fact you
are only three. But go ahead.
MR. BUDNITZ: But suppose there were five
of us at the table who were ground motion experts, and
who had different ground motion models, and even
though they are all different, A, B, C, and D, whether
you used A, B, C, or D models didn't make much
difference to the results.
But you went to E, and you used hers, or
his -- it doesn't make a difference -- and it made a
big difference. Now you know something. What you
know is -- first of all, you know what I just said,
but you also know that there is the possibility that
the other four might be wrong, and so then you have
got to go and inquire.
So what you do with it depends on what you
learn, but we don't tell you to incorporate it in the
analysis. It is nothing more than important
information about how some of the assumptions affect
the result.
CHAIRMAN APOSTOLAKIS: But you know that
the paralysis that this --
MR. BUDNITZ: Are you suggesting that we
should not have done that? Are you suggesting not to
do sensitivity analysis? Are you suggesting not to
publish intermediate results?
CHAIRMAN APOSTOLAKIS: No, I want you to
define them better, and tell me what to do if I find
a situation like that.
MR. BUDNITZ: I can't tell you what a
decision maker would do.
DR. KRESS: If you are requiring a good
rigorous uncertainty analysis, what do you need with
a sensitivity analysis?
CHAIRMAN APOSTOLAKIS: Exactly.
DR. KRESS: I think that is the point.
CHAIRMAN APOSTOLAKIS: And you guys do
require a rigorous uncertainty analysis.
MR. CHOKSHI: Even if you do a rigorous
uncertainty analysis in something like hazard, you
will be making some --
CHAIRMAN APOSTOLAKIS: You can say
identify what is important.
MR. CHOKSHI: Exactly.
CHAIRMAN APOSTOLAKIS: I think it is time
that we abandon this.
MR. BUDNITZ: George, you and I both
understand how difficult it is to deal with the insight
that you got from this assumption that you made that
you know is wrong. I mean, sometimes you can assume
something that is physically incorrect, and couldn't
happen.
You say, gee, let's suppose the water has a
density of two, or something.
CHAIRMAN APOSTOLAKIS: It's not always
easy.
MR. BUDNITZ: And it is not going to make
any difference, and if it doesn't make any difference,
it doesn't.
CHAIRMAN APOSTOLAKIS: But this is
related, Bob, to the issue of assigning equal weights
to the experts, I think. All of these things go
together.
MR. BUDNITZ: It is all related.
CHAIRMAN APOSTOLAKIS: We have to finally
say, look, this is the probability that I am assigning
to this, okay? Whatever that is. You don't disagree
with that do you?
MR. BUDNITZ: I am not going to argue
that. Let me describe. In the end, George, there is
an analyst, a person, or perhaps it is a team.
CHAIRMAN APOSTOLAKIS: Yes.
MR. BUDNITZ: And if they sign the thing
and say I, we, take professional responsibility for
what we did, and the sensitivity study showed
something cockeyed, and we don't believe it.
CHAIRMAN APOSTOLAKIS: Oh, they say we
don't believe it.
MR. BUDNITZ: No, they might say. Let's
assume that, or else they might say, gee, maybe we
should believe it. In other words, it comes down to
professional responsibility doesn't it?
CHAIRMAN APOSTOLAKIS: And eventually
maybe --
MR. BUDNITZ: Well, he assigns the
probabilities after it. If he finds out that it
doesn't make any difference, then he is not going to
worry a priori about assigning probabilities to these
things. I mean, I look at it as a way of narrowing
down --
DR. KRESS: You can't really do that
because in order to do a sensitivity analysis, you
have to put ranges on these things. And you are not
going to just arbitrarily choose those. You are going
to choose something that is within the range of
probability.
CHAIRMAN APOSTOLAKIS: Exactly.
DR. KRESS: So you do assign some sort of
probability to it.
MR. BUDNITZ: There is assigned
probabilities, and there is assigned probabilities.
CHAIRMAN APOSTOLAKIS: I would like to see
a discussion or part of the commentary here that
reflects what we just said.
MR. BUDNITZ: That is very helpful.
CHAIRMAN APOSTOLAKIS: That's all I am
saying.
MR. BUDNITZ: That is very helpful.
CHAIRMAN APOSTOLAKIS: Page 43. You
already talked about it, the HRA thing, and you
recognize that this aspect can represent an important
source of uncertainty in the numerical results.
You are silent regarding references here,
where I see in other places that you are more than
willing to provide references.
DR. KRESS: Does George have a lot of
references on this?
CHAIRMAN APOSTOLAKIS: No, but for
example, if --
MR. BUDNITZ: Are you looking at SA-B2?
CHAIRMAN APOSTOLAKIS: I am looking at
SA-B2, yes, the very last sentence.
MR. BUDNITZ: The point is well taken. It
seems to me that we could and should provide some
citations.
CHAIRMAN APOSTOLAKIS: Especially if there
are some studies that are particularly related.
MR. BUDNITZ: Absolutely. It is an
omission.
CHAIRMAN APOSTOLAKIS: Any other comments
on 3.1, .2, .3, from my colleagues?
MR. LEITCH: I have a question about 37.
CHAIRMAN APOSTOLAKIS: Page 37?
MR. LEITCH: Page 37, yes.
MR. BUDNITZ: Can you cite the
requirement, like SM-A1 or something?
CHAIRMAN APOSTOLAKIS: Yes, we can do
that.
MR. BUDNITZ: It is somehow different from
yours because of the printer.
MR. LEITCH: This is 3.4.2.1,
Introduction.
CHAIRMAN APOSTOLAKIS: Introduction to the
seismic PRA technical requirements.
MR. LEITCH: And it speaks about the
trimming of certain events and the adding of certain
events. And it gives some examples of trimming.
Could you help me with an example of adding?
MR. BUDNITZ: Of course, and perhaps we
can add that. The internal events PRA model basically
has no structures in its basic events. Walls don't
fail. But there can be a basic event of a wall failing,
and then of course it harms a pump, or piping, for
example.
That is an example of where one must
expand the horizon of the SSCs concerned. There are
others, but that's an obvious one.
MR. LEITCH: That helps my understanding
of it. Thank you.
CHAIRMAN APOSTOLAKIS: Any other comments
on technical requirements for systems analysis,
seismic fragility analysis? I don't have any, except
that it seems to me that it would require a specialist
to do this analysis. It is not like the internal
events.
MR. BUDNITZ: I would argue that you
require a specialist to do internal events, too.
DR. KRESS: I was kind of shocked to hear
that.
CHAIRMAN APOSTOLAKIS: Well, what I mean
is that you can have a systems engineer spending some
time learning what the fault trees and the event trees are,
and he can develop those and do a decent job.
I don't think you can take a systems
engineer, train them a little bit, and have him do
this. This is really a specialist's job. That is
what I mean.
VICE CHAIRMAN BONACA: The hazard analysis
has to be done by specialists.
CHAIRMAN APOSTOLAKIS: Yes, because there
are so many disciplines that have to come together. Bob,
you have been with it for too long, and you think it
is trivial.
MR. BUDNITZ: Obviously, unless you know
how buildings respond to ground motion, you can't do
the response analysis. That's a specialty.
CHAIRMAN APOSTOLAKIS: Well, even
understanding the fragility curve. So, shall we move
on? I don't see -- well, Jack?
MR. SIEBER: I think that one of the
problems here is that because a lot of components have
fragilities associated with them, the event trees
change. You end up blocking off success paths as you
go through. That has to be by a person more
knowledgeable than system engineers that I know.
MR. BUDNITZ: The appendix on the seismic PRA
explicitly tells you that this must be done by a team
of systems people, fragilities people, and so on, interacting,
and short of that, it won't be successful, and it
tells you that in plain English.
CHAIRMAN APOSTOLAKIS: Okay. So we will
move on to -- I'm sorry.
MR. LEITCH: Page 45, and it is
requirement SA-E8. There is a sentence there that
puzzles me a little bit. It says that while this
standard does not require the analyst to assume an
unrecoverable loss of off-site power after a large
earthquake, the general practice in seismic PRAs has
been to make such an assumption.
That seems a little confusing to me. Why
doesn't this standard require that?
MR. BUDNITZ: We permit the analyst to
argue if a basis can be established for the recovery
of off-site power after the earthquake. They have to
have basis. So I would just say that it does not
require the analyst to assume that loss of off-site
power is unrecoverable.
MR. WALLIS: Isn't this where you need one
of your little notes that peer review will look over
this assumption real closely?
MR. BUDNITZ: Well, just to give an
example, there are some accident sequences that run up to
120 hours and one might successfully argue that at my
plant I will recover one of those through some --
well, we just -- we didn't want to require that
conservatism if there was a basis, and so we
explicitly permitted it.
MR. LEITCH: Okay. I understand.
MR. BUDNITZ: And I am quite sure that is
the right thing. You don't want to require something
that they could argue for.
MR. RAVINDRA: Also, it is a function of
the size of the earthquake. If it is a small
earthquake, you may be able to quickly recover some
off-site power.
MR. LEITCH: This specifically says a
large earthquake. But I understand.
CHAIRMAN APOSTOLAKIS: All right. 3.5,
seismic margin assessment. We already have a
comment from Dr. Wallis that he hasn't seen a
beautiful description of what it is. Can you guys
provide a beautiful description of what it is?
MR. BUDNITZ: We said we were going to
write that appendix that we sort of didn't do yet.
CHAIRMAN APOSTOLAKIS: All right.
MR. UHRIG: I am a little bit confused
here. 3.5.1 has the high level requirements. If
you go to the definition of success paths, it talks
about bringing the plant to a stable hot or cold
shutdown condition, and maintain it in this condition
for 72 hours.
And then seismic requirement B here says a
minimum of two diverse success paths shall be
developed, consistent with structures and equipment
that can be used to bring the plant to a safe, stable
shutdown and maintain this condition for a period of
72 hours following an earthquake larger than the RLE,
which is the review level earthquake.
Whereas, it doesn't talk about the review
level earthquake in the definition of the success
paths.
MR. BUDNITZ: Correct. The success path
-- you are looking at the definition section, back in
the definition section?
MR. UHRIG: Yes.
DR. POWERS: Page 65.
MR. UHRIG: Well, that's where the
requirements are. The definitions are back about 10
pages.
MR. BUDNITZ: No, page 18 says a success
path is a set of components that can be used to bring
the plant to a stable condition for 72 hours.
DR. POWERS: Right.
MR. BUDNITZ: Now, this says -- oh, you
are talking about the hot or cold? Maybe we need to
add that.
MR. UHRIG: No, no.
MR. BUDNITZ: This says --
MR. UHRIG: You want two sets of
components.
MR. BUDNITZ: -- this requires. So that
defines or requires that you shall develop two of them
that can do it after an earthquake larger than the
RLE. So that is more restrictive, except for the --
MR. UHRIG: It really doesn't define the
level of earthquake.
MR. BUDNITZ: Correct. It just tells what
the path is.
MR. WALLIS: And what is this review level
earthquake? It seems to have a pretty wishy-washy --
MR. UHRIG: Well, it is about a factor of
two greater than your safe shutdown isn't it?
CHAIRMAN APOSTOLAKIS: Let's answer this
question.
MR. BUDNITZ: Nilesh, do you want to
answer that?
CHAIRMAN APOSTOLAKIS: Let's finish this
question first
MR. UHRIG: No, I think it is pertinent
here, but the way I interpret this is that roughly a
factor of two greater than the safe shutdown
earthquake is what you are defining as the review
level earthquake.
Certainly at least 50 percent greater; .3
versus .5, and you have a .5 for the review level, and
the .3 is your SSE. Or if you have a .5 as a safe
shutdown, then what would you say, a .8?
MR. BUDNITZ: Well, of course, we don't
use it up there, but that's right. It is specifically
stated that the margin method doesn't apply for
places where the design basis earthquake would
be way up at high G. It just doesn't. Go ahead and look
at requirement SM-A1; that is where it tells you
about that.
CHAIRMAN APOSTOLAKIS: Page what?
MR. BUDNITZ: SM-A1. It is sort of page
66 in my version. The requirement is that it just has
to be larger, and then the guidance says more.
MR. WALLIS: But how much larger? Larger
by a fraction, or by a factor of two?
CHAIRMAN APOSTOLAKIS: The note tells you
more.
MR. CHOKSHI: I think, as background on
the matter, and based on the experience in
nuclear power plants, two review level
earthquakes have been established, 0.3G and
0.5G, and basically they look at those two, because
that provides a very good level for screening.
You can screen a number of components at
0.3G, and you can screen fewer at .5G. So primarily
in the margin method it is 0.3G or 0.5G that are used,
depending on what your design basis was.
So if you are at 0.2G, you can still use
0.3G, but if your design basis was much greater than
0.3G, most likely you will have to use 0.5G. So the
practical review level earthquakes are 0.3G and 0.5G.
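(Editor's note: a minimal Python sketch of the informal review level
earthquake selection practice described above, using the "at least 50 percent
greater" rule of thumb voiced earlier in this discussion. The 1.5 factor and
the function itself are editorial illustrations of that informal guidance,
not requirements of the standard.)

    def select_review_level_earthquake(sse_g, candidates=(0.3, 0.5)):
        # The stated requirement is only that the RLE exceed the SSE; 0.3g
        # and 0.5g are the levels used in practice.  Take the smallest
        # practical level that is comfortably above the SSE.
        for rle in candidates:
            if rle >= 1.5 * sse_g:   # "comfortably above": illustrative only
                return rle
        return None  # margin method not well suited to very high design bases

    print(select_review_level_earthquake(0.17))  # 0.3, a typical eastern plant
    print(select_review_level_earthquake(0.25))  # 0.5, a Palo Verde-type case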
MR. WALLIS: Is your standard saying that
you shall use 0.3G and 0.5G?
CHAIRMAN APOSTOLAKIS: No.
MR. CHOKSHI: Well, by reference,
referencing the methods. You know, if you go to the
definition, and if you look at page 17, the
note refers to that point; that the majority of
plants in the eastern and midwestern United States did
reviews at 0.3G, because their design basis is
generally lower than 0.3G.
And then if you go to the seismic margin
methods, which are referenced here in the EPRI
reports, they explicitly talk about 0.3G and 0.5G.
MR. BUDNITZ: But the requirement is only
that the RLE shall be selected greater than the
SSE. That is the only thing that is required.
Now, if you select a review level
earthquake that is 20 percent above your SSE, you
don't get as much information.
MR. WALLIS: So don't you need more
guidance about how to select?
CHAIRMAN APOSTOLAKIS: There is a whole
NUREG.
MR. WALLIS: So there is a whole NUREG,
which I don't have the benefit of.
CHAIRMAN APOSTOLAKIS: There is a whole
NUREG.
MR. UHRIG: The other issue that was
confusing me here on page 66, and this issue is that
you have a high level requirement E, which says the
seismic margin calculations shall be performed for
critical failure modes in structures, systems, and
components, such as structure failure modes, et
cetera, and failure modes again.
And then down to requirement G, the
seismic margin shall be reported based on margins
calculated for the success paths. And I am confused
by the shift in emphasis here.
MR. BUDNITZ: Oh, let me -- let's go to
that.
MR. UHRIG: Require E versus Require G.
MR. RAVINDRA: Do you want me to answer
that?
MR. BUDNITZ: Go ahead.
MR. RAVINDRA: For every component that is
on the success path, we either screen the component
out because it has a high capacity, or we make a
calculation as to the seismic capacity of the
component.
Now, the success path is a chain of a
series of components, and so when you calculate the
success path capacity, generally you take the lowest
of the capacities of the components that appear on the
success path.
MR. BUDNITZ: The weakest of them.
MR. RAVINDRA: The weakest.
DR. SHACK: The one is looking at a
component margin; the other is looking at the plant
seismic margin.
MR. BUDNITZ: So, you see, G says the
plant seismic margin shall be reported based on the
margins calculated for the success paths. I mean, if
it is four components -- A, B, C, and D -- and let's
say that three of them have a HCLPF capacity of 0.1G,
and one of them has a HCLPF capacity of 0.2G, then
0.1G is the capacity of the success path because that
is the weakest link.
MR. UHRIG: Yes.
MR. BUDNITZ: I mean, it is a little more
complicated than that. If you do and's and or's, you
have to take the strongest of the or's, and the
weakest of the and's. Maybe I said that backwards.
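(Editor's note: a minimal, hypothetical Python sketch confirming the
combination rule Dr. Budnitz states just above: for capacities, components
that are all required ("and") combine as the weakest, while alternatives
("or") combine as the strongest. Component names and values are invented.)

    def and_capacity(caps):
        # All components are required, so the group fails when its weakest
        # member fails: take the minimum capacity.
        return min(caps)

    def or_capacity(caps):
        # Any one component suffices, so the group fails only when its
        # strongest member fails: take the maximum capacity.
        return max(caps)

    # Hypothetical HCLPF capacities in g: pump AND (valve 1 OR valve 2) AND tank.
    pump, valve_1, valve_2, tank = 0.30, 0.15, 0.40, 0.25
    path = and_capacity([pump, or_capacity([valve_1, valve_2]), tank])
    print(path)  # 0.25 -- the tank governs; the redundant valve pair is good to 0.40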
MR. UHRIG: Is there anything magic about
72 hours? Is that when all the after shocks have
gone?
MR. BUDNITZ: No.
MR. UHRIG: So is that just an arbitrary
number?
MR. BUDNITZ: No. It is what the systems
people have always used. Nothing more than that. It
comes straight from the systems, and not from the
after shocks.
MR. SIEBER: That's right.
CHAIRMAN APOSTOLAKIS: Anything else?
(No audible response.)
CHAIRMAN APOSTOLAKIS: Where are we now?
Oh, 3.6 and 3.7, other external events. Comments on
this?
(No audible response.)
CHAIRMAN APOSTOLAKIS: And 3.8, high
winds.
VICE CHAIRMAN BONACA: I thought those
were very good sections.
CHAIRMAN APOSTOLAKIS: I thought so, too.
VICE CHAIRMAN BONACA: And particularly
the commentary. It is so helpful because it gives you
a lot of reference. It is almost like hands-on, and
it is succinct enough. The other thing is that it
provides a clear understanding of how you are looking
for missiles and how you are looking for targets. So it
is well done.
CHAIRMAN APOSTOLAKIS: Have there been any
PRAs with high winds?
MR. RAVINDRA: The example is Indian
Point.
CHAIRMAN APOSTOLAKIS: High winds?
MR. RAVINDRA: For high winds, yes,
because there were some structures that were not
designed for the missile and for the loading, and they
had the potential to fail and collapse on other
structures.
And so the high wind was considered as an
important external event for Indian Point. There was
a partial look into some systems that are affected by
the high winds.
But the experience is somewhat limited
compared to the seismic. And when it comes to the
external flooding, the experience is much more
limited.
MR. BUDNITZ: There are 3 or 4 external
flooding PRAs that I happen to know about.
CHAIRMAN APOSTOLAKIS: That dominate?
MR. BUDNITZ: That are important enough
that they actually carried it through.
MR. UHRIG: Quad Cities?
CHAIRMAN APOSTOLAKIS: No, that was
internal.
MR. BUDNITZ: The one I know is the
Westinghouse plant in Krsko, in Slovenia. But by the
way, it is a perfectly good Westinghouse plant. It
just happens to be on a river that floods every
hundred years.
And although the dike is big enough, they
had to do the whole analysis because it wasn't all
that big.
VICE CHAIRMAN BONACA: And for winds, the
early plants really had no screening, and so
they were very vulnerable. Haddam Neck was a perfect
example. It had plenty of missiles and plenty of
targets. So it was really a dominant contributor.
MR. BUDNITZ: And ANO did a complete
flooding analysis right down to the end, and then
found that it wasn't important, and so it didn't
matter much. But they actually did this some years
ago.
MR. UHRIG: George, can I go back to one
quick question here. In 3.5, you talk about generic
data. What is the source of this generic data? Page
66. It says that it must be justified if you use it.
There is two or three places in here where
it refers to generic data, and I just wondered.
MR. RAVINDRA: Over the years, there has
been a collection of data from sources, either the
qualification test data, which has gone beyond the
qualification level for components, and --
MR. UHRIG: Is this coming out of the reg
guides?
MR. RAVINDRA: No, this is the seismic
qualification data. The industry has collected data
on the seismic qualification of different kinds of
components, and that is part of the database.
Then we have also collected the data on
the earthquake experience, looking at how the nuclear
plant type equipment fared in the large real
earthquakes.
And then there have also been some tests
conducted by Lawrence Livermore Lab and Sandia, and
Brookhaven, sponsored by NRC, to do the fragility
testing. All that information forms a database that
is generic, and not specific to any particular
component in the plant.
So if someone wants to use generic data,
he has to certify that it is really applicable to the
particular component.
MR. UHRIG: So he has to show that the
numerical values in his plant are comparable to those
that are being used there?
MR. RAVINDRA: Yes.
MR. BUDNITZ: Which comes around to saying
that my compact valve is similar enough to those that
were tested or observed.
CHAIRMAN APOSTOLAKIS: All right.
MR. UHRIG: Thank you.
MR. BUDNITZ: I mean, that's what it comes
down to in terms of the engineering.
CHAIRMAN APOSTOLAKIS: All right. 3.9,
external flooding.
DR. KRESS: Just a general question, and
not on 3.9, but when you incorporate references to
acceptable methodologies -- for example, in the high
winds, you have three or four.
Now, the NRC, I don't know how they will
use this standard, but if they say we want you to use
this standard for the quality of your PRA, are they
going to have to go in and study all these references,
and decide whether or not they really think they are
acceptable?
What was the criteria for deciding that
they were acceptable methodologies? Was it just the
expert judgment of you three, which I figure it was.
That's probably good enough for me, but I don't know
if it is good enough for NRC or not.
MR. BUDNITZ: We decided that a particular
methodology or in some cases an application, go there
and see what they did, would be acceptable. And what
we are seeking is a review of our peer community to
make sure that they also agree.
CHAIRMAN APOSTOLAKIS: But eventually the
staff will have to decide whether to adopt this,
right?
MR. BUDNITZ: Whether they also agree.
CHAIRMAN APOSTOLAKIS: And that's when
this question will come up.
DR. KRESS: I would hate to have to go to
every one of these and review every method on them.
CHAIRMAN APOSTOLAKIS: They already have
members who know that. There is some knowledge within
the staff. PRA configuration control. Fine?
DR. SEARLE: Yes.
CHAIRMAN APOSTOLAKIS: Risk assessment
application process.
DR. KRESS: That's fine. They didn't
reference. They incorporated by reference the --
MR. BUDNITZ: You skipped right over peer
review.
CHAIRMAN APOSTOLAKIS: I skipped what?
MR. BUDNITZ: The peer review.
CHAIRMAN APOSTOLAKIS: Because it is
unimportant. Peer review.
VICE CHAIRMAN BONACA: Here I think you
are making a reference to the ASME description of
that, and that is somewhat of a contested issue here.
You know, what do you mean by -- I mean,
you seem to impose additional requirements here just
because I expect the expertise that you need in
seismic and other external events is somewhat
different than the one that you use for the level one.
MR. BUDNITZ: Right, but --
VICE CHAIRMAN BONACA: And so maybe that
is a moot issue here.
MR. BUDNITZ: But the general requirements
are taken from ASME by reference. For example, ASME
has a section that describes how you pick peer
reviewers that don't have a conflict of interest, or that type
of requirement. That requirement, we just are not
going to do it over.
CHAIRMAN APOSTOLAKIS: So, application
process and documentation. I don't know --
DR. KRESS: You skipped over my section
again.
CHAIRMAN APOSTOLAKIS: No, risk assessment
and application process?
DR. KRESS: I wanted him to reiterate this
is incorporated by reference to Chapter 3 of the ASME,
and go back to it to see if there was any
incompatibilities or any inconsistencies. I don't know
what Chapter 3 now looks like in the ASME.
And the version that I had, there did seem
to be some inconsistencies, and so I don't know if
they will stay or not.
MR. BUDNITZ: The only person around this
table that knows is I, because I am on the team.
DR. KRESS: Yes.
MR. BUDNITZ: But it is not a secret, and
I can tell you. It is very important that you should
understand that there has been a change in the words,
which may or may not represent a change in the
philosophy, but let me describe.
When the three columns first came out a
year-and-a-half ago, they were described as
application categories.
DR. KRESS: Right.
MR. BUDNITZ: Like somebody thought that
ISI would be in category one, and core damage
frequency application is in category two. Over the
last 18 months, it has become transparent that that is
not the right way to think about it, and those three
columns are now capability categories for the PRA, or
for elements of the PRA.
Now, what that means is that you grade
your PRA once, just once. You go find out your
capability one for this, or capability two for that.
Or by the way, in our case, you either meet it or you
don't.
If you don't meet a piece of this, you can
still use the thing if that piece you don't meet doesn't
matter.
Now, what Section 3 in ASME does is it
says, okay, you have an application. You go to the
application and you decide which pieces of the PRA you
need for that application.
For example, you may not need the HRA
piece, or maybe it is at the center of your
application. So you decide which piece, and then you
decide whether or not for your application that you
need capability two or capability one, or capability
three, although I kind of think it will always be two,
but let's not argue, except for screening.
And then you go to the PRA, and see what
you have got. If you need capability two for the
application that you have got, and everything that
needs it is two, then you are home.
If you need capability two, HRA, and you
have a capability one, then you can't do it. You have
to either upgrade it or do something else. So that's
exactly what it is, and this is just the same.
DR. KRESS: Okay. It sounds like they are
consistent now.
MR. BUDNITZ: Now, here, what you do is
that since we don't have three categories, you are
going to decide whether you need -- for example,
suppose in the application you don't need the hazard,
because the only thing you are worrying about is the
capacity of a large pump.
Then if you meet the standard for the
fragility's part, then you can use it, even if you
don't meet the standard for the hazard part. It is
just as simple as that, and I think it is pretty
straightforward.
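(Editor's note: a minimal, hypothetical Python sketch of the application
logic Dr. Budnitz outlines above: identify which PRA elements an application
relies on and at what capability level, then check the PRA element by
element. The element names and categories are invented for illustration.)

    # Hypothetical achieved capability categories for elements of a PRA.
    pra_capability = {"HRA": 1, "success criteria": 2, "data": 2}

    def application_supported(required):
        # 'required' maps each PRA element the application relies on to the
        # capability it needs; elements the application does not use are ignored.
        return all(pra_capability.get(element, 0) >= category
                   for element, category in required.items())

    print(application_supported({"HRA": 2, "data": 2}))    # False: HRA falls short
    print(application_supported({"success criteria": 2}))  # True: HRA is not needed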
CHAIRMAN APOSTOLAKIS: Does the industry
certification process include external events?
MR. BUDNITZ: No.
CHAIRMAN APOSTOLAKIS: And do they plan to
use this?
MR. BUDNITZ: No, they have made an
informal commitment, and it is not in writing, but
they have said the words; that they will add to the
certification process review requirements that cover
this topic, and also low power shutdown when it comes
along, and also fire when it comes along, so that they
would have the same scope in the end.
CHAIRMAN APOSTOLAKIS: Okay.
MR. WALLIS: And what about the Section 7
documentation? I found the small print part, the
note, useful, and it deserves bigger print. And we
might even borrow some of your remarks, speaking about
documentation for other purposes, such as thermal
hydraulics.
CHAIRMAN APOSTOLAKIS: Are you allowing us
to do this?
MR. BUDNITZ: I don't run anything, but it
is my view that if you cite any American National
Standard, as is in anything else, you can do anything
that you want with it. It is a public document, and
you just have to reference where it came from.
MR. WALLIS: We have a bit of a struggle
with documentation requirements in other fields, and
not just in this one, and we find a surprising
reluctance on the part of the originators of documents
to make sure that they are right, and it is
surprising.
DR. SEARLE: One is moved to wonder
whether or not the clientele that will use this
standard is any more competent in reading these words
than these other people have been in decoding similar
remarks.
DR. KRESS: I don't think this needs much
decoding. It is pretty clear.
CHAIRMAN APOSTOLAKIS: Okay. We have two
minutes. Does anyone have a comment that is of great
significance?
DR. KRESS: I think they did a good job.
MR. WALLIS: They did a good job.
CHAIRMAN APOSTOLAKIS: It is a good job,
but that is not of great significance.
(Laughter.)
DR. KRESS: For this committee, that is.
DR. SEARLE: It is, and actually, George,
I was surprised.
CHAIRMAN APOSTOLAKIS: Okay. We don't
even have two minutes because NEI wants to say a few
words. Bob, real quick.
MR. BUDNITZ: I need 10 seconds. I just
turned to page 109 as I was turning through.
CHAIRMAN APOSTOLAKIS: And you have a
question.
MR. BUDNITZ: And in the middle of the
page is two references to Bernard, et al.
CHAIRMAN APOSTOLAKIS: Two references to
what?
MR. BUDNITZ: Bernard, et al, and I want
to tell you that Don Bernard died a month ago, and I
miss him, and I just want to say that I miss him. He
was a terrific guy, and this field we are in is richer
for his work, and I just wanted to say that for 10
seconds, okay?
CHAIRMAN APOSTOLAKIS: Thank you. Okay.
Mr. Heymer. Do you want to come sit up front, or --
MR. HEYMER: I will just make comments
here, George, and it will be very quick. My name is
Adrian Heymer, and I am project manager at NEI with
the Reg Reform Group.
The reason why I am here and some of the
other people aren't is because they are out of town.
The standard has only been out for a couple of days,
and we have got some preliminary feedback. We did
call some people when it came out.
We have had some feedback from EPRI,
preliminary feedback, and preliminary feedback from a
couple of the other groups.
And that feedback which came in this
morning by a telephone call -- and as I sat here
listening to the presentation, I just wondered if we
were looking at the same documents.
And it may be because when you do a quick
read, you read from the dark side and think the worst,
and have not had time to digest it. But the gut feel,
or at least the initial feel from the industry that
have looked at it is that for reasons best known to
themselves, I guess, judging by the discussions that
have gone on, they feel they are precluded from using
seismic margins approach.
DR. SEARLE: By this?
MR. HEYMER: Yes.
CHAIRMAN APOSTOLAKIS: Why?
MR. HEYMER: They just -- the comment I
got back is that we have invested a lot of time and
effort in seismic margins, and we would fail to be able to use this
in a risk informed approach, and we would have to go
to a seismic PRA.
So that is -- and I think that may be a
product of the way that they have read it, and how
they think they might have to apply it. But I think
that might need some interaction, and we will provide
you some comments on that as well, and there will
be some interaction on that as we go.
CHAIRMAN APOSTOLAKIS: Thank you.
MR. BUDNITZ: Just to say, about 50 plants
did a seismic margin review using the EPRI method. We
wrote these requirements to track the EPRI method. It
is our judgment without knowing in detail that most of
the plants that use the EPRI method will be able to
show that they meet the standard.
Now, you don't go any further than that.
If they have a competent margin review, it is our
opinion that we have written the standard so that
they will meet it.
Now, once they have met the standard, if
they can't use it, that's not a fault of our having
written the standard to tell them what to do and to
check it right, I think. In other words, I don't
quite understand the mismatch here.
MR. HEYMER: Well, you will have to take
the comment in the sense of people are reading it for
a couple of days, and they need to think about it, and
sit down, and produce some comments, and there is
going to be some industry iteration.
Because it was also interesting to note
that the same people that made that comment said that
what would really be good in this standard is if we
had some additional guidance to take the seismic
margins approach further.
And that's what I heard you were going to
do anyway. So I encourage you to work on that and
incorporate it in the standard if you can.
MR. RAVINDRA: Can I add one thing to
that?
MR. HEYMER: Yes.
MR. RAVINDRA: This committee has a
subcommittee that endorsed our earlier draft of the
standard.
CHAIRMAN APOSTOLAKIS: I think Adrian made
the point. Thank you.
MR. HEYMER: There was a comment on the
uniform hazard spectra, and there was a feeling that
you are asking us to reevaluate that, and verify it,
when significant effort and resources were expended
in doing that some time ago.
And it wasn't clear to the people who were
reading it why we have to go back and reassess that.
MR. CHOKSHI: I also got some informal
feedback on that point, and all it needs is a little
bit more guidance and explanation.
MR. HEYMER: I think some of the other
points that have been mentioned here have been good.
On the plus side, the commentary section -- if you
expand on that, a lot of people found that very
useful and a very good addition.
And there was a lot of positive comment in
that regard. And I guess we are saying that we are
going to allow seismic margins, or at least not
cover seismic margins in the standard, and there is
also going to be a section in there on the seismic
PRA.
Perhaps we need some insights or some
screening criteria of when one would be appropriate,
and when you should move to a seismic PRA. And that's
about it with regards to the extent of the comments.
CHAIRMAN APOSTOLAKIS: Thank you, Adrian.
MR. BUDNITZ: Thank you.
CHAIRMAN APOSTOLAKIS: Thank you,
gentlemen, very much. This has been very enlightening
and useful, and very friendly. You did a great job.
MR. BUDNITZ: Can I ask one further
question?
CHAIRMAN APOSTOLAKIS: Yes.
MR. BUDNITZ: I have no idea what to
expect. Are you going to consider writing a letter?
CHAIRMAN APOSTOLAKIS: Yes, we will
consider writing a letter.
MR. BUDNITZ: Thank you. I just didn't
know.
CHAIRMAN APOSTOLAKIS: We will recess
until 2:50.
(Whereupon, the committee hearing
recessed at 2:33 p.m., and was resumed at 2:50 p.m.)
CHAIRMAN APOSTOLAKIS: Okay. The next
issue is Reprioritization of Generic Safety Issue 152,
Design Basis for Valves that Might be Subjected to
Significant Blowdown Loads. Mr. Leitch is our leader
on this. Graham.
MR. LEITCH: Dr. Apostolakis, the purpose
of this session is to hear a presentation from the NRC
staff regarding the proposed resolution of generic
safety issue 152.
And that issue is the design basis for
valves that might be subjected to significant blowdown
loads. It is of particular interest for HPCI,
RCIC, and reactor water cleanup valves on boiling
water reactors.
And the concern was that while the valves
might meet the NRC approved design basis, the design
basis might not address the need for the valves to
close against the differential pressure resulting from
a large sized high energy pipe break.
So with those words of introduction, I
will turn it over to Mr. Michael Mayfield, who will
introduce the staff's presentation on this topic.
MR. MAYFIELD: Thank you. I am here this
afternoon, and Ken Karwoski, who has recently joined
my division, is going to make the presentation.
He is supported this afternoon by Sher
Bhatar, the Chief of the Engineering Research
Applications Branch, and Tom Scarborough from NRR.
So we are here to talk about the closeout,
and not just reprioritization of this generic safety
issue. So with that, Ken, why don't you go ahead.
MR. KARWOSKI: Good afternoon. My name is
Ken Karwoski, and I will be discussing the staff's
basis for proposing the closeout of generic safety
issue 152 and seeking ACRS endorsement of this proposal.
Generic safety issue 152 was raised by the
ACRS back in the 1989 time frame, as a result of
its review of the staff activities related to generic
safety issue 87, which had to do with the failure of
the high pressure coolant injection isolation valves
to close following a postulated pipe break.
GSI-87 is closed; it was closed in part
as a result of industry activities in response to
Generic Letter 89-10 and its supplements, and in
particular Supplement 3 to Generic Letter 89-10.
Generic Letter 89-10 focused on the
ability of valves to function as designed. What the
ACRS was concerned about, though, was the adequacy of
that design: were those valves capable of closing
following a postulated high energy line break?
In order to understand the staff's basis
for closing out generic safety issue 152, I would like
to spend a few minutes on Generic Letter 89-10. In
the mid-to-late '80s, the Office of Research did some
testing on motor operated valves and identified a
number of valve performance weaknesses.
As a result of that, they issued Generic
Letter 89-10, and once again focusing on the ability
of the valves to function as designed.
However, as part of that, licensees had to
resurrect what the design basis for these valves was,
and dig out the information on how these valves were
supposed to operate, and under what conditions.
After the research testing results became
available, the industry also did some additional
testing on motor operated valves. They confirmed a
lot of the problems that were identified in the
research-sponsored tests, and as a result of that,
they started to develop working groups and users
groups.
And there currently is still a joint
owners group addressing valve issues, not only motor
operated valves, but air operated valves. Generic
Letter 89-10 had seven supplements, and those
supplements were -- the first one was issued in '89,
and the last one in 1996.
Although Generic Letter 89-10 focused on
the ability of the valves to operate as designed, the
capabilities of the valves, that is, the actual design
basis of the valves, was captured as a result of
industry activities.
And the adequacy of the design was
confirmed in part based on NRC inspections performed
in response to 89-10, and confirmed through review
of various documents, including the inspection
reports, FSARs, and other licensee and NRC documents.
The NRC inspections did evaluate the
reasonableness of the design pressures. If it
looked like there was an indication that the valves
were not designed to the full differential pressure,
some of those issues were flagged to NRR.
One of the examples that we provided in
our write-up was Big Rock Point, where the valves were
not designed for the full differential pressure, and
NRR subsequently evaluated those exceptions on a
case-by-case basis and determined that in the case of
Big Rock Point, even though the valves were not
designed for that condition, it was acceptable from a
safety standpoint.
The priority focus of many of the early
inspections in response to Generic Letter 89-10 was
the more risk significant valves of HPCI, RCIC and
reactor water cleanup.
Although the inspections focused on those,
the lessons learned from the inspections applied to
all motor operated valves, and in some cases applied
to other valve types.
The staff briefed the ACRS numerous times
in the 1990s regarding motor operated valves. In
particular, in October of '93, the staff briefed the
ACRS subcommittee on mechanical components, and at
that time the chairman of the subcommittee, who
happened to be the individual that raised the concern,
indicated that he believed that the issue had been
addressed and would recommend closure.
Subsequent to that, research confirmed
many of the results and analyses presented to the ACRS
at that time, and based on the actions taken by the
licensees and by the industry in general, we believed
that there was sufficient evidence to close Generic
Safety Issue 152.
And that concludes my presentation. If
there are any questions, I will be glad to try to
address them.
MR. LEITCH: So the reason for our
confidence then is that Supplement 3 to Generic Letter
89-10 basically focused the industry's attention in
this area. The industry did get the message, and
investigated these valves and corrected them, if
necessary.
And that was all backed up by NRC
inspection activities?
MR. KARWOSKI: Yes. Basically, although
89-10 focused on the adequacy or the capability of the
valves to function as designed, the industry took the
initiative on their own, and in some cases upgraded
the design of some of these valves.
So we are confident that those valves are
capable of operating under a postulated pipe break
event. And the industry has taken, and continues to
take, the initiative on MOVs.
There is a periodic valve verification
program currently underway. So they have taken those
lessons, and they continue to apply them, and as they
identify weaknesses, they improve their programs.
MR. WALLIS: Can I ask you about the
distinction between design and performance? I mean,
you have used the word design a lot. And they may
well be designed to do something. Do they actually do
it?
I mean, if they were tested against the
full differential pressure several times, did they
still work?
MR. KARWOSKI: In the early days, I think
the early testing indicated that, no, they wouldn't
work under those conditions. As a result of 89-10 and
the work done in response to that, that is where the
licensee said, okay, here is the pressure that I need
to operate against. Do they operate?
And that's where -- and so that is the
performance aspect, and that is what the whole purpose
of the 89-10 program was: do they function as they
were designed?
MR. SIEBER: Yeah, but they relied on
testable prototypes, as opposed to valves in a plant,
in order to establish the relationship between design
and actual performance; is that not correct?
For example, there was an industry
program, and they did it in some steam plant
someplace, a coal plant, where they tested prototype
valves of various types in order to see whether the
valve would lock up under these high DPs and high
flows, or how much force it took in order to move the
stem.
And a lot of utilities found that the
motors were too small, or the gear trains were wrong.
And then when they changed the gear train, it was too
slow to perform the isolation in the time frame called
for by the safety analysis.
Or if they changed the motor and didn't
change the valve, the motor was so strong that it
would drive the valve disk through the bottom of the
valve.
Or you would overheat the wiring to it,
and so this was not without a lot of problems. I
presume that in the inspection process every BWR
was evaluated as to whether they did in fact determine
what the design conditions were, and whether they had
an appropriate prototype test to say that their valve
was good or not good.
And whether they either left things as is, or
changed gear trains, motor operators, or the valves
themselves. But that's what I gathered from the
inspection material that I reviewed. Is that correct?
MR. KARWOSKI: Tom Scarborough may be able
to add more, but the inspection did look at the
reasonableness of the design and focused on valve
factors and whether or not the licensees were
implementing the latest lessons learned.
With respect to the actual testing of the
valves, I know that there were some concerns expressed
by licensees regarding the reasonableness of testing
all the motor operated valves under postulated pipe
break events.
And so in some cases there may be
groupings of valves, where they tried to group valves
in order to limit the amount of testing based on the
limitations in the plant. But Tom may be able to add
more.
MR. SCARBOROUGH: Yes. This is Tom
Scarborough. One of the things that you mentioned was
that earlier program on the prototypes. Once they got
89-10, they realized that they needed a better way of
learning more about blowdown flow conditions.
And the Electric Power Research Institute
established that multi-million dollar program to do a
number of blowdown tests and develop a first-principles
model to predict blowdown performance.
And they found that there were critical
parameters -- the sharpness of the internal edges, and
the clearances -- that came into play in running the
models.
And so what happened, especially for the
HPCI and RCIC valves, is that a lot of licensees ended
up running the EPRI model, and determining if they had
any concerns regarding performance under blowdown
conditions.
And then if they did, they would go in and
adjust the internal clearances, or round off the
internal edges of the valve. So that is how they were
able to address performance, because of the definite
concerns with trying to run a test on those types of
valves.
And during the inspections, which I
participated in, a large number of them, we did look
at the differential pressures that they were
assuming, and how they came up with the thrust
requirements, whether they used valve factors or the
EPRI model, and then what actions they took to address
those. So those are the types of things that we
looked at.
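(For illustration of the arithmetic behind such a
thrust-requirement review, the sketch below uses the
generic industry gate-valve equation -- packing load
plus stem rejection load plus a valve factor applied
to the differential-pressure load -- compared against
an assumed actuator capability. The function and all
numbers are hypothetical and are not drawn from the
EPRI model or from any plant record.)

    # Hypothetical sketch of a generic MOV required-thrust check (not the EPRI model).
    def required_thrust(dp_psi, seat_area_in2, valve_factor, packing_lbf, stem_rejection_lbf):
        # Required stem thrust = packing load + stem rejection load
        #                        + valve_factor * (differential pressure * seat area)
        return packing_lbf + stem_rejection_lbf + valve_factor * dp_psi * seat_area_in2

    # Assumed inputs for an HPCI-style isolation valve under blowdown conditions.
    req = required_thrust(dp_psi=1100, seat_area_in2=78.5, valve_factor=0.6,
                          packing_lbf=1500, stem_rejection_lbf=2000)
    actuator_capability_lbf = 65000  # assumed actuator output at degraded voltage
    print(f"required thrust ~ {req:,.0f} lbf, margin = {actuator_capability_lbf / req:.2f}")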
MR. WALLIS: So the assurance that they
will work is based on the fact that they conform with
an EPRI model?
MR. SCARBOROUGH: That's part of the
basis. They would run the model, and we would prepare
a safety evaluation on the model, and we evaluated it,
and the licensees would use that as part of their
determination.
Some licensees -- Comanche Peak, for
example -- actually did run some blowdown tests on
their unit two when they were starting up to get
direct information that they could apply to unit one.
So there was some actual test data that
people had, but in a large number of cases it was
using the EPRI model for the blowdown conditions.
MR. SIEBER: I guess in an operating plant
you just can't create the conditions necessary to test
the valves without taking a saw and sawing them off.
MR. SCARBOROUGH: Right.
MR. WALLIS: It is a basic problem, but it
is very difficult to be sure a valve will work without
testing. I mean, you can't just compute, and you're
not always sure it will always do exactly what you
thought it would do.
MR. SIEBER: Well, the EPRI model is an
empirical model, and it is based on tests of prototype
valves under a variety of conditions. So it is
probably the best thing that you can do.
DR. POWERS: If I recall the SER on that
model properly, and I may not, my recollection is that
there were questions about the length of upstream and
downstream piping around the valve. Did those get
resolved?
MR. SCARBOROUGH: Yes. As part of the
evaluation of the model itself, and adjustments to the
model, this was like a 2 or 3 year process when we
reviewed it, and we were able to resolve those.
There were some changes to the model
that were made, and the model, as it turned out, seems
to be reasonable, and it seems to be tracking pretty
well.
DR. POWERS: And I further recall that
there were questions about whether the valves being
tested had experienced the kind of aging and
degradation that valves in the plants would have
experienced. Did that issue get resolved?
MR. SCARBOROUGH: Right. And in the case
of the Borg-Warner valves, we were concerned that
there wasn't enough preconditioning, and so EPRI did
add an additional 5 percent margin any time that you
are using a Borg-Warner valve with the EPRI model.
But we did evaluate and thought there was
enough preconditioning of those valves as part of
that, and that was part of our evaluation of what the
model was predicting, and what the actual thrust
requirements were.
So we went back and looked at all of
those, and in the final analysis, whether or not we
accepted the model was based on having enough margin
to account for any preconditioning that the valves had
not achieved as part of the test process.
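(A minimal sketch of how a flat 5 percent adder of the
kind described above could be applied to a
model-predicted thrust before comparing it to actuator
capability. The function name and all numbers are
hypothetical; only the 5 percent figure for Borg-Warner
valves comes from the discussion.)

    # Hypothetical sketch: apply a 5% adder for Borg-Warner valves, then check margin.
    def adjusted_prediction(model_thrust_lbf, is_borg_warner):
        # Bump the predicted thrust by 5% when the valve is a Borg-Warner design.
        return model_thrust_lbf * (1.05 if is_borg_warner else 1.0)

    predicted = adjusted_prediction(model_thrust_lbf=55000, is_borg_warner=True)  # 57,750 lbf
    acceptable = predicted <= 65000  # assumed actuator capability in lbf
    print(f"adjusted prediction = {predicted:,.0f} lbf, acceptable = {acceptable}")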
DR. POWERS: I guess the question always
arises of how you decide how much margin to ascribe to
phenomena that are of an aging and degradation kind
of nature.
MR. SCARBOROUGH: And that is part of what
the joint owners group program that Ken mentioned is
doing. Right now they are doing testing of valves in
the plants under flow conditions -- not blowdown, but
flow conditions -- and looking for changes in the
thrust requirements.
And at the end of October of 2002, their
five year testing program will be complete, and they
will be preparing an updated report to establish a
long term periodic verification program, with some
potential need for testing either static or dynamic,
but with diagnostics to evaluate that.
And one of the things that they are
finding so far is if you open the valve up and do any
maintenance, internal maintenance on the valve, the
thrust requirements drop dramatically immediately.
But then they rise back up, and so that is
something that had been found during the EPRI testing,
and they are confirming it through the JOG program,
and that will probably be part of their long term
program when they come in in 2002.
DR. KRESS: Since the subject of your
report is adequacy of the design basis, I guess your
basic conclusion is that the design basis was
inadequate?
MR. KARWOSKI: No, the conclusion is that,
as a result of 89-10 and the emphasis on MOVs during
the 1990s, we confirmed that the valves are designed
for, or are capable of, operating under blowdown
conditions.
DR. KRESS: With the improvements?
MR. KARWOSKI: With the improvements.
DR. KRESS: But those improvements didn't
come about because of the design basis?
MR. KARWOSKI: Well, you see, that is
where the concern breaks up into two parts: the
adequacy of the design, which is GSI-152, and then the
capability of the valves to function as designed,
which was the focus of 89-10. It is hard to separate
the two, but that's the distinction between the two
points.
DR. KRESS: But my conclusion would have
been that the design basis was inadequate.
MR. SIEBER: In some plants.
MR. MAYFIELD: This is Mike Mayfield.
That was the question that was put on the table, and
as I understand from the looking and reading that I
have done, and from the briefings that I have had, what
the 89-10 work determined was that in general things
were okay. And where there were some difficulties, the
licensees had taken action to correct those.
DR. KRESS: But they weren't required to?
MR. MAYFIELD: They weren't required to.
And as it turns out, the resolution to this generic
safety issue doesn't require any subsequent action on
the part of the staff because the licensees had
already taken that action.
DR. KRESS: That is what I was going to
get to; do we need to change the design basis.
MR. MAYFIELD: And I think the answer to
that is that when you say design basis, as I
understand it, the questions were: did they correctly
estimate how big the opening would be, and had they
fully expected the full break, the full opening break,
downstream of the valve?
And in some cases -- as I understand it --
the answer to that was no, and they have gone back and
fixed that.
MR. KARWOSKI: Or, like I mentioned at Big
Rock Point, where they analyzed and determined that
even though the valves weren't designed for that, it
was not a safety concern.
DR. KRESS: I guess in another world,
where we might be getting new reactors every 3 or 4
months, or something, you would be constrained to go
back and change the rule, or change the design basis.
And under the situation now, you don't have that.
MR. MAYFIELD: I don't think it is so much
that they -- I don't know that there is anything that
we would go change other than we would look a lot
harder perhaps at specifics that were included in the
design.
MR. KARWOSKI: And I think the concern,
the original concern was for older plants rather than
the newer, because in the newer plants, they are
frequently analyzed for pipe breaks.
MR. SIEBER: And the actual requirement
comes from the ASME code, does it not, which says
that under certain conditions you classify this as a
high energy line and you need to be able to
isolate it, as opposed to someplace in a rule or a reg
guide saying that.
So if you endorse the code, and you have
a code book plant, the requirement is embedded in your
license, in your FSAR.
MR. KARWOSKI: But also from a practical
standpoint, if a licensee says they are going to
operate this system in this fashion, and it calls for
the valve to close -- and this was part of the 89-10
review --
MR. SIEBER: Right.
MR. KARWOSKI: -- they have to show that
the valve is in fact capable of doing that.
MR. SIEBER: Of closing.
MR. KARWOSKI: Because they were supposed
to review the procedures and determine under what
conditions the valves were expected to operate.
MR. WALLIS: It is a little tricky,
because you are asked to demonstrate that a valve
will do something which it never does, and so you
never have a realistic test really in the plant.
So it must be rather difficult to give
such conclusive proof that this thing that has been
sitting there all this time is always going to work
when it is called upon to work, when it never does it
routinely.
MR. KARWOSKI: That's correct, but that's
the purpose for the testing and the monitoring, to
provide you added assurance. And there is in most
cases redundancy.
MR. WALLIS: But then you don't test
something that has been sitting in the plant for 10
years.
MR. SIEBER: I think that 89-10 requires
licensees to commit to periodic testing.
MR. WALLIS: At full pressure.
MR. SIEBER: Well, to test the torque
requirement and the stem factor, and so on, and that's
what MOVATS and all of those diagnostic systems are
required to do.
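(A small sketch of the thrust-to-torque relationship
referred to here: actuator torque is roughly stem
thrust times the stem factor, so trending the stem
factor shows how lubrication degradation raises the
torque demand. The values and names are hypothetical,
chosen only to illustrate the relationship.)

    # Hypothetical sketch: torque demand rises as the stem factor degrades.
    def required_torque_ftlb(stem_thrust_lbf, stem_factor_ft):
        # Actuator torque ~ stem thrust * stem factor (stem geometry and lubrication).
        return stem_thrust_lbf * stem_factor_ft

    for stem_factor in (0.010, 0.013):  # ft: as-lubricated vs. degraded lubrication
        torque = required_torque_ftlb(stem_thrust_lbf=55000, stem_factor_ft=stem_factor)
        print(f"stem factor {stem_factor:.3f} -> torque demand {torque:.0f} ft-lbf")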
MR. SCARBOROUGH: The new Generic Letter
96-05, which is sort of the follow-on of 89-10, is the
periodic verification, and that is part of the joint
owners group program: now that they have
established the design basis capability for these
valves, how do we monitor them and make sure that we
don't have degradation?
And those programs have in place the use of
diagnostic testing, and there is a dynamic
diagnostic testing program going on to look for areas
of degradation.
And then they are going to have an ongoing
static diagnostic, with possibly some dynamic
diagnostic testing in the future. So they have a
program established to look for that type of
degradation.
MR. LEITCH: Any further questions?
MR. WALLIS: Why do valves stick?
MR. SIEBER: Packing glands dry out and
operators pull up on the nuts. They rust.
MR. KARWOSKI: And pressure locking and
thermal binding.
DR. KRESS: The perversity of nature.
MR. WALLIS: Maybe small leaks that build
up oil or something?
MR. SIEBER: No, that's the BWR.
MR. WALLIS: So if you knew why they
deteriorated, you could specifically look for those
things?
MR. KARWOSKI: That is correct, whether it
be the grease deteriorating or whatever, correct.
MR. UHRIG: Isn't most of that done with
signature analysis in the testing?
MR. SCARBOROUGH: Yes. Now a lot of the
plants use stem mounted strain gages for a direct
measure of the torque and thrust. But in the future,
especially for the low risk valves, they are looking
at motor control center measurements; improvements in
that area have been made in the last 2 or 3 years,
which are quite dramatic.
With those they can actually get a good
impression of what the thrusts are that are coming out
of the motor, and so there are a lot of improvements
in that area that they are looking at as well.
MR. LEITCH: Okay. Any other comments or
questions?
(No audible response.)
MR. LEITCH: Thank you.
MR. KARWOSKI: Thank you.
MR. LEITCH: Dr. Apostolakis is away from
us for a few minutes. I think the next thing on the
agenda is writing reports.
DR. POWERS: Fortunately, FACA prevents
you from starting anything until it's time.
MR. LEITCH: So let's adjourn until four
o'clock then.
(Whereupon, the meeting was concluded at
3:15 p.m.)