460th Meeting - March 10, 1999
UNITED STATES OF AMERICA
NUCLEAR REGULATORY COMMISSION
MEETING: 460TH ADVISORY COMMITTEE ON REACTOR SAFEGUARDS (ACRS)
U.S. Nuclear Regulatory Commission
Conference Room 2B3
Two White Flint North
Wednesday, March 10, 1999
The committee met, pursuant to notice, at 8:30 a.m.
DANA A. POWERS, Chairman, ACRS
GEORGE APOSTOLAKIS, Member, ACRS
JOHN J. BARTON, Member, ACRS
MARIO H. FONTANA, Member, ACRS
DON W. MILLER, Member, ACRS
THOMAS S. KRESS, Member, ACRS
ROBERT L. SEALE, Member, ACRS
WILLIAM L. SHACK, Member, ACRS
ROBERT E. UHRIG, Member, ACRS
GRAHAM B. WALLIS, Member, ACRS
MARIO V. BONACA, Member, ACRS
PARTICIPANTS:
MR. AKSTULEWICZ
P R O C E E D I N G S
CHAIRMAN POWERS: The meeting will now come to order. This
is the first day of what may be the interminable 460th meeting of the
Advisory Committee on Reactor Safeguards. During today's meeting, the
Committee will consider the following:
(1) Our all-time, indoor-outdoor favorite, SECY-99-054
associated with 10 CFR 50.59 (changes, tests, and experiments)
(2) Westinghouse best-estimate large-break LOCA methodology
(3) Proposed phase 1 standard for PRA quality
(4) We'll deal with a variety of proposed ACRS reports.
A portion of today's meeting may be closed to discuss
Westinghouse proprietary information.
This meeting is being conducted in accordance with the
provisions of the Federal Advisory Committee Act.
Dr. John T. Larkins is supposed to be the designated Federal
official, but I believe --
DR. LARKINS: I'm here early.
CHAIRMAN POWERS: Good. For the initial portion of the
We have received no written statements or requests for time
to make oral statements from the members of the public regarding today's
session. A transcript of portions of the meeting is being kept, and it
is requested that the speakers use one of the microphones, identify
themselves, and speak with sufficient clarity and volume so that they
can be readily heard.
The Members have before them a list entitled "Items of
Interest." The only thing I'll call your attention to is that the
cooperative severe accident research program meeting will be held
coincident with our main meeting in Albuquerque to assure that no
Members attend any portion of that.
DR. KRESS: They manage to do that every year.
CHAIRMAN POWERS: They've done it especially effectively
this year. They moved the location so that -- there were too many of you
sneaking out to go to the meeting in the past.
I'll remind the Members that several of you have asked to
tour the ops center. That tour is to take place at noon on Thursday,
and we've expanded the lunch hour a bit so that you can do that.
We are also trying for Friday at noon to have a showing of
the PBS documentary on TMI. It's a bit of revisionist history. Members
should also have before them the by-laws, and I hope you'll take time to
look at those and mark any questions or comments you have on those.
What I've been told is they're still typing up the revised
by-laws, but they'll at some point be distributed in front of the
desk -- in front of your positions.
I'll also remind the Members that I'm looking for input on
what we do with the adopt-a-plant program. We'll discuss that and the
Planning and Procedures Committee's recommendations in our future
There should also be distributed to the Members at some time
during the course of the meeting a copy of our letter to the Commission
on self-assessment. I hope Members will look at that and mark up any
comments they have. We're trying to get that out of here by March 26.
With that I think we can turn to the agenda, and the first
item on our agenda is the proposed Commission paper on 10 CFR 50.59
(Changes, Tests, and Experiments).
Mr. Barton, I believe that you are the cognizant
subcommittee chairman, and of course we do have an SRM directed to us on this
subject. So it will be a high-priority issue for the day.
DR. BARTON: Thank you, Mr. Chairman.
The purpose of this meeting today is to review with the
staff the proposed Commission paper on the summary of public comments
and staff recommendations revising 10 CFR 50.59.
As Yogi Berra so eloquently said at one time, it's deja vu all
over again. We have been discussing changes to this rule for well over
a year now and have seen some progress, but we have not seen much movement
in eliminating the de facto zero-increase criterion and allowing a minimal
increase in the probability of occurrence or consequences of an accident or
malfunction of equipment and a minimal reduction in the margin of safety.
The Commission clearly directed that the new rule allow for
those small changes that are lost in the margin of error. It appears
that what the staff has done is come up with a rule that is now
minimally different from NEI 96-07. The question I have is why, then,
the staff foresees a protracted period in implementing the revisions
they are proposing. Maybe they'll discuss that this morning.
What the Committee would like the staff to focus on and
discuss today are those key issues that the staff has not been able to
reconcile and has asked for Commission direction. We would also like to
understand (1) the staff's decision regarding use of "minimal" or
"negligible" in the final rule -- in the proposed final rule; (2) the
choice of "frequency" vis-a-vis "probability"; and (3) the staff's
proposed position on margin of safety.
I understand industry will be making a presentation
following the staff's presentation this morning. What is expected from the
Committee out of this meeting is a report to the Commission. I
solicit Members' comments after the presentations this morning on what
they would like to see in the report that we send to the Commission, and
any comments that Members of the Committee have should be given to Mike
Markley today to help me get a draft letter before the Committee
At this time I'd like to turn the meeting over to the NRC
staff, and I guess Eileen McKenna is the lead for the staff.
CHAIRMAN POWERS: While the speaker prepares, John, it
strikes me that we've had a fairly dramatic change in the thrust of this
current issue, one that's not along the lines of our recent communication to
the Commission on this. I think we were looking for a definition of
the margin of safety in terms of fairly bright lines.
DR. BARTON: Right.
CHAIRMAN POWERS: Rather than -- and in fact a removal of
language that dealt with probability and the minimals and things like
that. Is this a lost cause? I mean, is it just not possible to pursue
the course that we were advocating in the form of our questions?
DR. BARTON: I don't know, Dana. Maybe Eileen can help us
MS. McKENNA: This is Eileen McKenna, NRR staff. Also with
me today are Scott Newberry, who is our deputy director in our new
division, Regulatory Improvement Programs, and Frank Akstulewicz, who is
the acting branch chief for the Generic Issues and Environmental
I'm not sure if that was the question -- it started to be a
question I think to the Committee and then it switched to a question to
me about your letter and one of the approaches that you were offering in
there. Is that the question? I just want to make sure --
CHAIRMAN POWERS: Well, I think the question will reemerge
as you go through the presentation.
MS. McKENNA: Okay. That's fine.
CHAIRMAN POWERS: In essence, what I find distressing is that we
don't seem to evolve between iterations on this rule; we seem to go
through sea changes. At our last discussion on this there was a
major paean to how we need to preserve the inputs because they're the
basis of the staff's independent analyses. That seems to have gone
away, and without so much as a whimper, despite loud protestations last
time we discussed this. What we're not seeing is something that's
getting refined; we're seeing something that's going through -- there's
essentially no connection between one iteration and the next, and I'm -- the
idea of convergence -- most functions that behave this way
would be termed oscillatory, not convergent.
DR. KRESS: Bessel functions.
CHAIRMAN POWERS: No, they're not Bessel functions.
I'm not going to go there. We might pursue that discussion
I hope you tell us some of the background on why it's been
necessary to make quantum changes rather than evolutionary changes.
MS. McKENNA: Okay. We'll try to speak to that.
Okay. Just very briefly, in the way of background, most of
which we covered at our last meeting, I will really only focus on the last
bullet: that we are at the stage, as we mentioned in February, of
providing recommendations to the Commission on how the final rule ought
to be structured. As we discussed before, because of some of these
issues where there had been considerable involvement and interaction
with the Commission as to what should be the conceptual
nature of the criteria, and then the specific language that would provide
for that, we felt that we were not ready -- we could spend a
lot of time polishing a final rule that didn't meet the Commission's
needs. So we instead provided this paper with our recommendations,
hopefully to draw out any additional issues and problems
before we put together the final rule, which then hopefully would be able
to proceed through in a very smooth course.
These were the reasons that we sent forward the paper
SECY-99-054 in February that you have before you.
In your opening remarks you asked us to talk about the key
issues that were still remaining, and that's what we tried to address in
the body of the paper, and these were the issues that we did include in
there, and we will talk about these -- most of these -- in a little more
detail.
There is the question of minimal increases in probability; the
criterion for, as it is in quotation marks, "margin of safety" -- I'll touch on
the Part 72 and Part 71 issues, on which I don't plan to spend a lot of time, but they are
something I think that, you know, the Committee should be aware of; the
implementation and enforcement, which you've also mentioned, and which we'll
talk about a little more; and, though I don't plan to spend any time on it,
the paper did give a recommendation with respect to the scope in
response to a previous SRM, and as indicated we do not propose a change
to scope as part of this rulemaking.
The paper also has some attachments, where we discuss the
comments that we received and the nature of the type of resolution that
we intended for those comments, provided the -- we called it the draft
final rule language that we anticipate may adjust a little bit as we
complete the process and as we get Commission feedback, and finally,
some other issues that we discussed in an attachment three that I will
also mention in the course of the briefing.
Two of the criteria in the rule speak to the question of
minimal increases in probability of occurrence of accidents and minimal
increases in probability of occurrence of malfunctions of equipment
important to safety, previously evaluated in the FSAR to be complete.
In the comments that we received, most individuals and
entities indicated that they felt that, because of the nature of the
FSAR analyses and information that presently exist, the
determination as to whether these probabilities of occurrence had
changed would be done on a qualitative basis; because the original
determinations were qualitative, looking at the effects of a change
they may be making would also be -- continue to be done on a qualitative basis.
We had tried in the proposed rule to give -- I guess we had
called them attributes of how one might judge whether you had minimally
increased your probability of these things, and we did receive some
comments that will help us refine some of that guidance, but where we
really were not able to go was to any kind of quantitative measure of
change in probability with respect to the design basis accidents and
malfunctions considered in the FSAR.
So, we felt that this was more akin to the concept of
negligible within the margin of error of the engineering judgement
rather than a discrete size larger than negligible, which I think was
what the Commission intended, but we really felt we didn't have a basis
DR. KRESS: When you talk about frequency of occurrence of
accidents, you're talking about design basis accidents.
MS. McKENNA: Yes. This is all premised on the accidents
evaluated in the FSAR, which are the design basis accidents.
DR. KRESS: But those frequencies have never really been
MS. McKENNA: In more of a relative sense rather than in a
DR. KRESS: So, the idea is to look at those accidents, look
at the changes, make some sort of quantitative judgement as to whether
that affected the frequency.
MS. McKENNA: Yes. I think they look at what the change is
and how it impacts on initiation of one of these -- whether it's
losing some kind of equipment or initiating some kind of an event in the
plant. That's how you would determine whether there was a change in the
frequency or probability, whichever word you choose to use; trying to do
it in a quantitative way would be very difficult.
DR. KRESS: So, the concept of some percentage change would
just be non-applicable.
MS. McKENNA: Right. We had considered that early on, but
in terms of coming up with a number that had
meaning, it was not something -- it was not very fruitful.
CHAIRMAN POWERS: There is a fairly elaborate scheme of
qualitative probabilities that is used within the nuclear facility
community that consists of bins two decades of probability wide -- that is,
there are things that run from a probability of about 1, essentially guaranteed, to 10 to the
minus 2, that is, something that's likely to happen over the lifetime of
the facility; things from 10 to the minus 2 to 10 to the minus 4, things
that could possibly happen but are not really expected; and things
that are just extremely unlikely, 10 to the minus 4 to 10 to the minus
Is that the kind of ranges that you're having in mind here?
MS. McKENNA: I think that is the kind of ranges that
establish the spectrum of events that were considered in the FSAR in the
first place, that there are those anticipated operational occurrences, I
think is sometimes the language that's used for the first set that you
were talking about, all the way up to so-called limiting faults or other
language that you might use to refer to those events that are unlikely but
that you consider for purposes of design, and of course, beyond whatever
that number is, the cases that you agree could happen but are
sufficiently unlikely that you're not going to require.
That was how most plants' analyses were done in the first
place, and you still want to keep those events in those ranges.
You don't want to move those things that you don't -- you
thought were very unlikely into a more likely category, and that, I
think, was what the criteria was looking to preserve in the first place.
To say that -- to move from that kind of a view to how many
decades could you move in your assessment before you would trigger it, I
think, is a more difficult step to take.
DR. APOSTOLAKIS: Isn't that what NEI wanted? If you go
back to the categorization of initiating events, which is essentially an
order-of-magnitude classification, the idea is that it's minimal as
long as you stay within the assigned category.
If you move higher, then it's not minimal, which essentially
means an order of magnitude.
MS. McKENNA: We did have comments along those lines. I'd
have to check back as to whether NEI, in particular, was one that --
DR. APOSTOLAKIS: I remember Mr. Pietrangelo making that
DR. SHACK: But that is the guidance in 96-07, which is
MS. McKENNA: Well, I think --
DR. SHACK: At least that's what I read here.
MS. McKENNA: It does indicate that -- it talks about the
frequency classifications, that if you move outside the classifications,
you have clearly exceeded the criteria.
I think what's less agreed on is movement within the
classifications and how far can you move, if you will, from the top of
the range to the bottom of the range and whether that's still
acceptable, because that goes to the question of the rigor of the ranges
and how agreed-upon they were in the first place, and I think that's
where the debate, if there is any, may arise.
I think everybody agrees that moving classifications would
be outside the band. It's, you know, how much movement can you make
within that we may still need to try to reach some agreement on.
DR. APOSTOLAKIS: I will come back to my recommendation that
was summarily dismissed.
CHAIRMAN POWERS: Categorically dismissed here.
DR. APOSTOLAKIS: I see you're changing the word
"probability" now to "frequency," "likelihood," which really doesn't do
I thought you had a very good sentence in the version I saw
last time, where you said that the probability of malfunction of
equipment is changed more than minimally if a new failure mode is identified.
So, why don't we take that sentence and say the change will
have to be reviewed if a new failure mode of the equipment is identified
and drop "probability" completely?
That is not a very rigorous definition, but at least it
moved -- it takes away the probabilistic part. It's now entirely within
the traditional deterministic framework.
There is an element of likelihood. When you say a new
failure mode, you know, there is something -- a judgement there that it
makes a difference, right, but you don't put it explicitly.
Now, coming back to the initiating events, we can simply do the
same thing. If a new mechanism for a particular initiating event is
identified -- a new way that it can occur -- then
again it has to be reviewed.
In that way you avoid frequencies, likelihoods, explicitly
Now, we were told when we met with the Commission that there
may be a -- there might be a legal problem, and Ms. Cyr was supposed to
look into it, but I don't see anything happening. I mean, we didn't get
an answer that this is something we cannot do because of such-and-such a
reason.
So, have you thought about it and rejected the idea for some
reason which I'm sure would make sense, or did you ignore it, or what?
I think it would make your life easier. That's my
MS. McKENNA: I think -- I'm not sure what your discussions
with Ms. Cyr might have been, but obviously, the further we move away
from some of the concepts that we have in the proposed rule, it
certainly opens up the question of do we need to re-notice -- you know,
if we say, well, we're going to abandon any notion of probability, we're
going to replace it with something that focuses on failure modes or new
mechanisms exclusively or -- obviously, a judgement would have to be
made as to whether that was within the bands of what was offered for
notice, and that's an issue the Commission would need to consider.
Now, interestingly enough, on your point about the new
failure modes with respect to increases in probability, we had some
comments that were concerned that that was too restrictive, that just
because you had a new failure mode -- it kind of goes to the criteria on
the malfunction with a different result.
People were concerned -- where presently the rule talks about a
malfunction of a different type, a new failure mode could presumably
trip that criterion right away, and there were concerns that, if we
had a new failure mode that really led you to the same failure, should
that, by necessity, require the review, or was that already bounded by
So, you have to be a little cautious even with the new
DR. APOSTOLAKIS: Yes. I have two answers.
MS. McKENNA: Okay.
DR. APOSTOLAKIS: First, I think the actual sentence in the
rule might have included the word "significant."
MS. McKENNA: I think we said the likelihood of the ones
that were previously considered. You're not talking about the meteor
strike or something.
DR. APOSTOLAKIS: As you know very well, it's very easy to
be negative. So, by taking each one of these proposals in isolation and
criticizing it, we're not going very far.
I think we should put on the table two or three proposals
for solving one issue and then pick the one that is the best among
those, which can still be very bad, but at least it's the best.
So, the question here is, is it better to struggle with a
concept like probability that doesn't belong in this framework or to
have to live with a situation where perhaps, you know, there is a new
failure mode, it doesn't really -- it is bounded, but what can you do?
You pay a price. You will be reviewed, because it's a new failure mode.
That's really the question. That's the decision, not
whether each one individually makes sense.
DR. KRESS: Well, George, I think the problem is that the
objectives of this thing are really in risk space. You do not want to
increase the risk beyond a certain level. So, your objectives are
risk-based, but you're dealing with it in design basis space, and if
your objectives are in risk space, I think you almost have to talk about
frequency and probability.
Even though they don't show up in your design basis, they're
implicit in there some way and you have to deal with it, because I think
they are the bottom-line objective.
DR. APOSTOLAKIS: I believe that's true, Tom, for the
long-term revision, which would be risk-informed. Eileen gave us last
time a very nicely-stated objective. She wants to preserve the
integrity of the licensing basis, which I thought was great, because
it's again a statement within the framework.
See, if we start mixing probabilities and risk and then
deterministic criteria and so on, then we're --
DR. KRESS: -- mixing apples and oranges, are we?
DR. APOSTOLAKIS: The whole idea of this, I thought, was to
have a short-term solution until we have something more permanent. Is that right?
DR. KRESS: Yes, I think that's true.
DR. APOSTOLAKIS: So, by bringing in risk concepts, we are
not helping that goal.
I don't know why, legally, we can change the word
"probability" to "frequency" but cannot drop it completely. That is
something to think about.
MS. McKENNA: Well, again, I think the dropping it
completely would not so much be the legal question. That would be more
the policy question. The substitution, if you will, of something like
"new failure mechanism" for "probability" may be more -- kind of
somewhere in between, yes.
DR. APOSTOLAKIS: It seems to me that is something people
should be able to live with, because again, if you state in boldface
letters the objective you just told us -- that the
objective of this is to preserve the integrity of the licensing basis --
then, you know, we have to make judgments. This is not a new philosophy.
This is how these things were built and reviewed and so on, but at least
the judgments are confined within a certain boundary of traditional
engineering judgments, without bringing in this new stuff.
Now, to make the situation worse in my opinion, on page 5
there is a new minimal-change criterion: a change would require NRC
approval if the change would result in more than a minimal change in
the method of analysis. There we are now.
What is the meaning of change in the method of analysis?
CHAIRMAN POWERS: The important thing is the method of their
DR. APOSTOLAKIS: But how do you define a minimal change in
the method of analysis?
CHAIRMAN POWERS: As facilely as you define minimal change
in anything else.
DR. APOSTOLAKIS: Well, this is new to me. Was it there before?
CHAIRMAN POWERS: No.
DR. APOSTOLAKIS: Oh, okay.
CHAIRMAN POWERS: This is all new --
MS. McKENNA: And we will get to that later but yes, yes,
that is --
DR. APOSTOLAKIS: To finish the other thing, I don't think
that we are gaining anything by changing the word "probability" to "frequency."
MS. McKENNA: Recognize they are words with shades of
meaning. We are trying to match the meaning a little more closely to
the concept. I think what some of the members may have been suggesting
was a somewhat different step in terms of whether you consider
this parameter, or this effect of the change, at all in your
consideration. That is a judgment as to whether
it is an important element of the licensing basis that, if it is
changed in this particular manner, needs to be reviewed. From the
Staff's view -- and this is where the difficulty arises --
some changes in that probability would have no effect on the situation,
others might, and trying to write a criterion that makes sure that we
see for review the changes that we need to see is what has been
challenging us from the beginning: you want to have something that
is sufficiently comprehensive that it captures the ones that are
necessary but doesn't unnecessarily drag in the ones that we don't need.
I think, going back to the question on margin, the reason we
struggled with that one is trying to write down in words what those
situations are that we do think are important for us to review.
CHAIRMAN POWERS: It seems to me that the committee and you
have different views on what the margin actually is, and that has an
impact on this question of whether you use the language of probability
or not, so I would appreciate it if we could discuss further what
the Staff thinks the margin actually is.
In our last discussion the Staff clearly had an idea and
stoutly defended it and then rolled over.
MS. McKENNA: Well, again, I think it goes to the point I
was making about writing a criteria that captures what you need and not
too much. I think we felt the criteria we offered in the proposed rule
clearly captured what we needed. I think what we were hearing was that
it was capturing too much and that we somehow needed to adjust that so
that we wouldn't include in there things that really didn't matter and
so we made another attempt to come at it in terms of the design basis
capability which we'll get to in a moment, which I think goes to your
question about where is the margin, saying that the margin is what is
provided by reaching -- maintaining your design basis.
I think there are obviously other margins that go beyond
that but in terms of the margins that are important to the NRC for its
decision-making we are seeing that as being connected to the design
basis. That is why in this final rule paper we tried to write it down
explicitly and say, okay -- and I think this is an area where we may have
converged to a certain degree -- where do you measure or
determine that margin: it is not necessarily from your starting point.
It is the point that is necessary for it to function
in the way that we thought it would when we reviewed the application in
the first place.
I am not sure I totally understand the connection with that,
with the probability. Perhaps I am -- when we get to that, you can see
if there's an additional question because you were suggesting that
understanding where we saw the margin would help in terms of whether we
want probability as a factor and I am not sure I understand how you are
making that connection, so if somebody else has a view that would like
CHAIRMAN POWERS: I would like to understand what you think
margin is, because that clearly affects whether you have to work in a
probabilistic framework or not. I can define a margin so that there is
no probability that comes in.
MS. McKENNA: I think that goes back to an earlier point.
In the criteria we are trying to capture several different aspects of
the analysis and the plant design.
One is the initiating-event type of probability. The other
is the equipment performance -- malfunctions, whether new types or new
results or an increase is found -- and the other is the consequences, if you will, the
outcomes of the analyses. And the margin, then, is the, kind of the --
I think we have talked about it sometimes as being the confidence factor
that by keeping your events in a particular framework and your results
in a particular framework that you have taken into account the factors
that you need to in the analysis. That was the thinking, that you put
them together as a package.
I think maybe we have finished this slide. We can go on to
the margin one in a little more detail.
DR. SHACK: I'm looking back at the summary of the comments
on page 21 and again I am not so worried about the definitions of
whether it is probability or frequency. I look at the changes in the
rule as a way to get you away from zero.
MS. McKENNA: Yes, that is the major intent, yes.
DR. SHACK: Then it resolves down into the guidance as to
how you determine whether something is minimal, whether it is
probability or frequency. It is the guidance that seems to be the real
issue.
When I read -- there's certainly no statement at page 21 as to whether
we have clarified this position -- whether a minimal change is one within the
frequency classification or not.
Have you settled on the guidance that is going to be
MS. McKENNA: Not totally, and that is one of the reasons --
going to one of the other questions, about implementation, as to
why we weren't recommending an immediate implementation date -- that we
do think there are some of these issues that we need to agree on in
guidance, so that down the road people aren't having so many of
these debates. Because of the nature of the rule, where a
licensee is implementing it and NRC is overseeing it, if we don't have
an agreed-upon set of ground rules on some of these details we are going
to have debates, and we are trying to minimize that.
I think it goes back to the discussion we were having
earlier about the frequency classifications that outside of them you are
clearly more than minimal. It's whether you say anywhere inside is
still minimal. That could be a very large change, and that I think was
something we need to decide whether -- and how that was determined if
you are doing that in a qualitative sense.
DR. SHACK: I mean you had some other guidance on whether
you were still -- if your design basis assumptions and requirements were
still met, and you don't feel that is binding or bounding enough?
MS. McKENNA: Well, I think that will help quite a bit, but
it doesn't necessarily -- I think the accident initiator perhaps
is the one that is a little harder to determine, because that would
get to what factors could contribute to it and
how they relate to the design basis. That would be the area where I
think perhaps it might be a little less clear.
By putting the design basis language we thought that that
would help in terms of giving a way of judging that if you are meeting
that you are clearly meeting the rule. On the other hand, if you are
beyond the frequency classification you are clearly not. Whether there
is a gap in the middle is something I think that we need to try to look
at a little more closely and we just didn't have the time to do that
with the schedule we are on.
DR. APOSTOLAKIS: Eileen, do we have a situation here where
the industry and the Staff understand what this regulation is supposed
to be all about? We have been very successful for a number of years
implementing it, but the greatest difficulty is putting that understanding on paper.
MS. McKENNA: That is probably a fair characterization.
DR. APOSTOLAKIS: That is really what is going on.
MS. McKENNA: And the 95 percent space probably that is the
DR. APOSTOLAKIS: So it is pornography all over again,
CHAIRMAN POWERS: Obscenity.
MS. McKENNA: I think that was some of the discussion in
terms of these minimals or negligible increases in probability is that
you, you know, once it gets beyond a certain point everybody agrees that
that is too far, but in a somewhat smaller space it is in the view of
the beholder to a certain degree.
DR. APOSTOLAKIS: Now, one other thing, which will be just a
comment. On page 4 -- the top paragraph -- you say that the
greater use of PRA information, and possibly the use of Regulatory Guide
1.174, would have to be re-examined for applicability to changes
being made by a licensee without NRC review -- parenthesis -- for example,
criteria for preservation of sufficient defense-in-depth might be needed.
If this were actually a discussion of the risk-informed
50.59, we would have a lot of discussion on this. I don't know why you
threw that sentence in there, but I am just pointing out that you may
not need these criteria, though some of my colleagues may disagree.
MS. McKENNA: That's fine.
DR. APOSTOLAKIS: But I am not sure that defense-in-depth
should survive all the way down to that level, so that is just a side
comment.
MS. McKENNA: Okay, that's fine.
Okay. I started to put up the slide on margin of safety.
The proposed rule offered a number of different options on
margin and also solicited comment on options or approaches that others
may choose to offer, and we did receive, as I think you saw in Attachment
1, a number of comments on this approach, and there were some different
approaches put forward. There were some that supported the view that
a margin criterion was not necessary -- that the other criteria, in
combination with the tech specs and other regulations, provided a
sufficient set of controls upon changes, such that having a specific
criterion on margin was not necessary.
We had a proposal offered by NEI that I think you may have
seen that focused on ensuring that design basis limits for fission
product barriers, in particular fuel clad, reactor coolant system
pressure boundary, and containment boundary, were maintained.
Staff reviewed that proposal carefully. We sent it to a
wide range of our staff for comment, asking them basically: is this
clear, is it something that can be implemented, does this capture the
right type of information that should be reviewed? We got a number of
comments back.
I think conceptually in many cases people liked the idea of
looking to design basis limits to determine when the margin
may have been impacted and therefore requires the review. I think the
concept of protecting the fission product barriers was viewed as a
positive element. Where there was some concern was with respect to
completeness: the language talked about being directly related to the
barrier, and that raised questions in the Staff's mind about changes
affecting mitigation systems or the systems that support the functions
of those systems.
These considerations led the Staff to try to formulate a
criterion that took those elements that I mentioned -- the design basis
capability and the systems and their support systems -- and to write that
down, and that is what we attempted to do in this paper.
DR. BARTON: Can you sit down with NEI and go through, you
know, your modified version of their barrier system and maybe come to
some agreement?
MS. McKENNA: Well, as a matter of fact, we have a meeting
scheduled immediately after this meeting to try to do that, to go
through with them their criteria to make sure we have a common
understanding of how certain kinds of changes would be handled.
I think we want to look a little more closely at, you
know, whether we are misunderstanding something about how other changes -- for
instance, mitigation systems and things -- would be handled, and also to
try to work through the criteria we offered and see, well, you know, is
this going to work, is this going to do what we think it's going to do.
So, we do have that meeting scheduled, as I said, starting
immediately after this one.
DR. BARTON: Thank you.
DR. KRESS: Eileen?
MS. McKENNA: Yes.
DR. KRESS: I thought the problem with margins was that we
have a bunch of different plants out there with their own analysis
methodologies, and that, when they calculate these values for the design
basis accidents and focus on the things associated with the integrity of
the fission product barriers, they come up with a number from the
calculation that is usually lower than these limits, and the
industry says that's margin, we ought to be able to make changes till we
approach the limit. But my feeling was that the NRC says, well, we have
no idea where you are in that range, because you have one code and
another vendor has another code, and we didn't approve it on the basis
of your code calculation, we approved it on the basis of our judgement.
We don't really know where you are in that range. It's not
that number you show us. So, therefore, you can't use up that margin.
Have you abandoned that concept? That's what I thought the
discussion was about.
MS. McKENNA: There are those that have that view, and I
think this goes to something I'll talk about in a minute in terms of the
-- looking at the changes in methods, that when staff reviewed an
application, they did look at all of the information.
They looked at how the analysis was done, in great or lesser
detail, depending on, you know, what the particular event or analysis
was, looked at what were the input assumptions and data that went into
that, and looked at the answer and kind of where it fell out with
respect to criteria and limits that apply to that.
DR. KRESS: When you say "looked at," a technical
replacement for that word would be you had an uncertainty analysis made
of the code. That wasn't done, right?
MS. McKENNA: I don't believe -- again, you have to consider
the vintage of many of these reviews.
DR. KRESS: Uncertainty analyses weren't really done.
MS. McKENNA: I don't think it was done with that kind of rigor. I
think the judgements of the people -- you know, they looked at, I'm sure,
sensitivity types of information.
DR. KRESS: But if you had to go back and replicate this
"look at," wouldn't the right thing to do now be to say what is the
uncertainty in your code, so I know what that number means that you're
giving in your calculation?
MS. McKENNA: I guess that could be one way of going at it.
I think what we were suggesting was a little more that, when we
established those limits that had to be satisfied, there was
consideration of making sure there was margin in there, that you didn't
set your limits at the point where you had a concern that anything would
fail.
DR. KRESS: It was a standard conservative determination.
MS. McKENNA: Yes, correct. The quantification of the
margins may vary depending on the limit, but then, kind of between where
the calculation came out and the limit -- and this is, I think, where
the staff is, I think, giving a little more flexibility to say we
establish margin in the limit, and we will allow changes from where you
calculated going towards the limit, but --
DR. KRESS: The staff is now saying we have enough margin in
the limit itself.
MS. McKENNA: Yes, but you also want to make sure that that
margin and limit is not undermined by an approach on the method that
takes it away, shall we say.
DR. KRESS: I have a little problem with that concept,
because -- let's presume I used my code to calculate right up to my
limit. I don't exceed it, and you tell me, well, I've got plenty of
margin in that limit, so I'm still okay. But you have no idea what this
calculation is that says it's at the limit. It may be well above, because
unless you're absolutely certain it's a highly conservative calculation,
you haven't determined what the uncertainty is in the upward direction.
So, I may have wiped out my margin even though I got this
margin between the limit and what I'm really concerned about. You may
have wiped that out even though the calculation is still below it,
because you haven't done an uncertainty analysis.
MS. McKENNA: I guess it's possible. I think we're looking
at the overall rigor of, you know, the limits being set in a manner that
would take that kind of thing into account, and that there was some
consideration of the analysis in the first place, in saying that you
tried to at least maintain the degree of analysis that you had. If you
wanted to have a greater departure -- yes, I think the question of, well,
there may be some new method that has reduced uncertainties or greater
accuracy or whatever; that may be appropriate to do, but perhaps not to
take myself to the limit using this new, better or more accurate --
however you want to characterize it -- analysis.
DR. KRESS: I'm just going to use the same old analysis.
I'm not going to change it at all. All I'm going to do is do some
changes to my plant, so that my new analysis takes me up to the limit.
I still say that I have no idea where I am in real space,
because I don't know what that number is that I've calculated, because I
don't -- for that particular code, I don't know what the uncertainty is.
You know, I've been told it's conservative, but it still has
uncertainty, uncertainty in the upward direction.
CHAIRMAN POWERS: Dr. Kress, is it not true that most of the
limits that you speak of are, in fact, demonstrably conservative?
DR. KRESS: Not all of them. I wouldn't say that the peak
clad temperature is that conservative, but certainly, the containment
pressure is highly conservative.
There are some -- you're pretty close on the reactor coolant
system. It's not that conservative. You know, it depends on what
you're talking about. You know, those are the three main things.
CHAIRMAN POWERS: When the limits aren't, is it not true
that the analyses that you find in FSARs are demonstrably conservative?
DR. SHACK: I mean, wasn't that our approach to uncertainty?
MS. McKENNA: That's what I tried to indicate, that
depending on the type of the analysis and the importance of the
analysis, then the level of conservatism that was looked for in that
calculation or assessment originally, I think, was intended to capture
some of those uncertainties that you're referring to.
DR. KRESS: Certainly when you talk about conservative in
that sense, you're saying that the expected value is well above what you
calculate, but there's still uncertainty, and you may, at a 95-percent
probability, exceed that acceptance value. You may exceed your limits,
and the question is there is no concept of what level of confidence I
want to have in this thing. This is what I'm saying. There's no
stated confidence level.
All we're saying is that the expected value is going to be
well above the calculated value. It's going to be above it. I don't
know how far. We don't know how far, because you haven't quantified any
of the uncertainty.
CHAIRMAN POWERS: The words "above" and "below" get
confusing in this context, but we'll take that as conceptual.
It's just that it seems to me, certainly most of the FSARs
that I've reviewed, there is clearly, between the limit that they're
shooting at and where you might have some conceivable damage, some
margin there, and the analysis that's done, it strikes me, is -- at
every conceivable turn, they have taken a demonstrably conservative
value, and most of the FSARs go out of their way to point out that
they have done so. So that, yes, your point is well-taken that there is
clearly some conservatism and that it's clearly possible that you could
-- if we could run the experiment thousands of times, that there would
be occasions when this all fell apart, but given that we can live with
that situation and we don't make more than minimal changes in the
methods of analysis, then can we live with that result?
DR. KRESS: Yes. You could make a case.
CHAIRMAN POWERS: Then you could write a standard in which
the only minimal that you have to refer to is the minimal change in the
methods of analysis, which actually, I think, is fairly feasible to
define, because it would be -- minimal could easily be that I used a
more modern analytic tool or something like that, without much change in
the boundedness of the analysis.
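Dr. Kress's point above -- that a bounding calculation below the limit, with no uncertainty analysis, says nothing about the probability of actually exceeding the limit -- can be illustrated with a small Monte Carlo sketch. All numbers and distributions here are hypothetical, chosen only for illustration; they do not come from any actual FSAR analysis:

```python
import random

random.seed(1)

LIMIT = 2200.0               # hypothetical acceptance limit (e.g., a peak-clad-temperature figure)
CONSERVATIVE_CALC = 2100.0   # the licensee's bounding calculation: below the limit, so it "passes"

# Hypothetical "true" behavior: outcomes scatter around a best estimate well
# below the conservative calculation, but with an unquantified spread.
BEST_ESTIMATE = 1900.0
SIGMA = 150.0

samples = [random.gauss(BEST_ESTIMATE, SIGMA) for _ in range(100_000)]
p_exceed = sum(s > LIMIT for s in samples) / len(samples)

print(f"Conservative calc {CONSERVATIVE_CALC} < limit {LIMIT}: 'passes'")
print(f"Yet P(true outcome > limit) is roughly {p_exceed:.3f}")  # nonzero tail
```

With this assumed spread, the calculation sits comfortably below the limit while a few percent of the true outcomes still exceed it, which is exactly the gap a stated confidence level would expose.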
DR. APOSTOLAKIS: So, if I have a method of analysis that's
conservative -- I think we're jumping to six here --
MS. McKENNA: Yes, that's fine.
DR. APOSTOLAKIS: -- and I did that in 1976 and now I have a
better tool -- but then it's not 50.59 if it's better, right? How do
they know it's better?
MS. McKENNA: I think the question is the term "better,"
better in what sense?
DR. APOSTOLAKIS: It's a judgement of the community that
they are better.
MS. McKENNA: Yes. I think, going back to the earlier
discussion, that the FSAR analyses were done in a manner to be
conservative and they look at worst-case kinds of things and put some
factors in for uncertainties, that kind of thing.
Now we may have a better idea of what those uncertainties
are, which could be viewed as a better analysis, but does that mean that
there is agreement that -- everybody may have a better analysis. At
some point, you say, well, I don't think we need to take -- put any
factor in for uncertainty, because -- is that really an appropriate
thing to do under 50.59?
I think that's what we're trying to say, that we're not
trying to limit the use of new methodologies, but we want to try to keep
that within a certain range.
CHAIRMAN POWERS: I just recently reviewed one for a nuclear
facility in which the original analysis -- they knew there was a
Prandtl number dependence in the heat transfer coefficient.
They took the exponent as one-third because they didn't know what it
was; then a bunch of experiments were done and they changed it to 0.23
based on the experiments.
That's what I would call a minimal change in method of
analysis, simply taking into account the better information that you had
now. It's still a very bounding analysis.
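The kind of method change Dr. Powers describes can be sketched numerically. Assuming a Dittus-Boelter-style correlation (the transcript does not name the actual correlation; this form and the Prandtl number value are illustrative assumptions), revising the Prandtl exponent from one-third to 0.23 shifts the predicted coefficient by only a few percent:

```python
# Sketch of the exponent revision: an assumed Dittus-Boelter-type correlation
# Nu = C * Re**0.8 * Pr**n, with the Prandtl exponent n revised from 1/3 to 0.23.

def nusselt(re: float, pr: float, pr_exponent: float, c: float = 0.023) -> float:
    """Nusselt number from a Dittus-Boelter-type correlation (assumed form)."""
    return c * re**0.8 * pr**pr_exponent

re, pr = 50_000.0, 0.7   # hypothetical turbulent gas flow (Pr of roughly 0.7)

nu_old = nusselt(re, pr, 1.0 / 3.0)  # original assumed exponent
nu_new = nusselt(re, pr, 0.23)       # exponent revised from experiments

ratio = nu_new / nu_old
print(f"Predicted coefficient changes by {100 * (ratio - 1):.1f}%")
```

For a gas with Pr below one, lowering the exponent raises the prediction slightly; the shift is on the order of a few percent, which is what makes it plausible to call this a minimal change in method.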
DR. SHACK: It seems to me a better code would certainly
probably not be a minimal change in the analysis.
CHAIRMAN POWERS: Yes, but most of these things, with the
exception of the peak fuel clad and things like that are really pretty
much mass and energy balance-type analyses. I mean most of the analyses
you can repeat in the margin of the FSAR you're reading.
DR. SHACK: So, you're arguing they're not likely to change --
CHAIRMAN POWERS: It's not a code.
DR. SHACK: Well, okay.
CHAIRMAN POWERS: It's a mass and energy balance.
DR. SHACK: There's no change then. If they did use a new
code, that would seem to me, ipso facto, more than a minimal change.
DR. KRESS: It's certainly a code when you calculate the
peak clad temperature.
CHAIRMAN POWERS: Yes, and we have guidance that those codes
have to be reviewed and approved.
DR. SHACK: I mean I think the intent of this is obviously
to address your concern, that you know, you don't -- you've dealt with
the uncertainties one way once upon a time, and since you don't know
them, you know, this is their solution.
You might prefer to go off and calculate the uncertainties,
but I don't think you'd get a whole lot of sympathy.
DR. KRESS: That's a pretty big deal to ask for.
MS. McKENNA: Yes. I think the reason we put this criterion
in was to -- I think, otherwise, we're trying to look at how would you
judge changes to methods under the other criteria, and it wasn't obvious
how one could do that and reach -- you know, how would a change in
method affect the probability of an accident?
Well, it's not going to, but does that mean that changes in
methods don't need to be reviewed under any circumstances? We felt that
that was going a little too far, so this was an attempt just to give --
take some middle ground on that.
DR. SHACK: It seems to me that your change in wording,
though, from the NEI criterion is sort of modest in the Swiftian sense,
you know. I mean, it seems to me that they had a very limited sort of thing,
and this thing almost looks like the maintenance rule now, and I sort of
see 10,000 components.
MS. McKENNA: Well, I think that's one of the concerns that
has been expressed.
I think that the staff was trying to indicate, by looking at
mitigation systems and the systems that directly support them, that it's
more like the way we approach operability of systems in tech specs: the
system isn't going to work unless the things that give it its power and
air and cooling water and whatever are also working within the
capabilities that they need to provide.
So, that was why we worded it the way we did in terms of
those systems, the mitigation systems, and the support systems, but we
have had, you know, some concerns -- I think you may have heard at the
Commission meeting -- that this maybe was going too big.
So, we thought theirs was too small. This is kind of
Goldilocks, almost: we think theirs was maybe a little too small; they
think ours is a little too large.
I think that it may be a manageable set, but we may have to
make sure -- we tried to write this in the paper, but
perhaps we can do a better job of saying what those systems are, and that
it's not everything that's in the maintenance rule scope, which is
clearly not the intent.
I think we've pretty much covered -- I think the first
bullet was back to criterion seven on the systems and the statement
of design basis capability, which we did try to write down in the paper
as to what that meant in terms of being the functional capability
necessary. I'll read the words in the paper, I think they're better
than my words, page 6, paragraph three, where it said, "The design basis
capability is the lowest functional capability that accomplishes the
required functions for all required conditions, including the range of
accidents and events that the facility is required to withstand." This
really comes right out of the definition of design basis: it's the
functions and it's the values that they need to satisfy in order to
perform those functions, and that's what we were looking to make
sure was not changed by these changes that a licensee may be planning to
make.
That's how we got to the words of design basis capability,
and I mentioned the words on the scope of the systems and the support
systems that go with them, and I think we've already talked about the
next bullet, which is the question of methods, and I think the last
bullet was just a recognition that, since this is -- well, I think the
concepts are things that perhaps have been discussed before in terms of
writing it down in a way that can be implemented consistently and
agreeing upon specifically what's included in the systems, that that is
going to take some guidance and a little time to work through.
DR. BARTON: You're talking about a reg guide here?
MS. McKENNA: Yes.
DR. KRESS: When you say evaluations are generally at the system
functional level, what does that "generally" mean? Does that mean
sometimes they're at the --
MS. McKENNA: This is also something I think we tried to allude to
in the paper. We had some discussion about whether it should say
system, structure, or component, or just say system, and we used system,
structure, and component for a couple of reasons.
One was more consistency with other things.
Second was that -- and this kind of arose in some
of our design basis discussions -- there may be some components where
there is a function the component itself has to provide, but we think
that's a limited set, and we indicated some examples here. In
general, when you're looking at capabilities, you're looking at the
system level; a component may perform in a little different
way and the system will still function, but it's the system that
you're concerned about delivering some capability.
So, that was what we were trying to get across with this
point: we would expect that those kinds of assessments would be
done at the system level for almost all things, and I think we had some
specific examples where people pointed out that you might have to
consider something as a component, and some of that was also people's terminology.
Some of our engineering staff were saying, well, this thing
is considered to be a component rather than a system -- and reactor
coolant system piping, you know, is that a component or is that a
system? Depending on which engineer you asked, apparently you were
going to get a different answer, and we didn't want to have to be having
those kinds of debates. But the concept is that you're not trying to look
at, well, does this check valve have a unique function that it provides,
versus the system being able to deliver flow and not lose flow in the
reverse direction, which is a little more of an overall capability.
DR. KRESS: Well, if I were making a change to some
component, it's clearly a component of the system.
MS. McKENNA: Correct, yes.
DR. KRESS: And I have to decide whether it's 50.59 or not.
Do I have to make an evaluation that says what this change to that
component does to the system function?
MS. McKENNA: Yes.
DR. KRESS: So, I don't understand the distinction, frankly.
I have to do it anyway, and then I make the judgement of whether it
affected the system function.
MS. McKENNA: I think the concern that was expressed was
that if you say, okay, I look at the component, or I look at the piece of
the component, do I have to figure out what design basis capability
that component itself has to provide, versus the component being part of
a system that has a function to be performed.
DR. KRESS: Normally, those are not spelled out.
MS. McKENNA: Right. And that was the point that we were
trying to make, that usually you look at the functional capability on
the system level, and that would be the expectation under this criterion.
Any other comments on this particular point?
Let me just shift gears very briefly. I mentioned -- I'll
just go over this in passing, but unless there's questions, I won't
spend a lot of time.
I think one of the things that was in the paper is that we
are also proposing changes to Part 72, which is the independent spent
fuel storage, which presently has language very, very similar to 50.59,
and in the comments that we received, there were those who wanted even
closer alignment of the language between the two, and the staff and NMSS
are in favor of making those kinds of changes.
They primarily affect a couple of additional criteria in
72.48 that are not contained in 50.59 and also certain of the provisions
on how the FSARs for the cask designs were updated.
The other issue that arose in this context had to do with
Part 71, which is the transportation requirements. The specific
issue that came up was for casks that serve a dual purpose, both for
storage and for transportation: if you had the authority in 72 to
make the changes but didn't have it in 71, you were not really
getting the flexibility, and staff is supportive of trying to give them
some flexibility in that area.
There are some considerations that were mentioned in the
paper with respect to limiting it to domestic shipment, so we don't get
ourselves cross-wise with some of the IAEA standards and that also what
they're proposing is that they would limit this to fuel transportation
rather than other types of transport packages.
That's something they're recommending go forward with the
proposed rule, to make similar kinds of changes to what we're doing in
50.59.
DR. KRESS: Transportation. My understanding is that now they're
virtually identical to the IAEA rule.
MS. McKENNA: That's my understanding. I think there is a
rulemaking plan going forward that would make some changes, and these
kind of things would be considered within that context, the broader
question of whether you should change Part 71. But the specific part on
the dual purpose, I think that can be handled in a simpler rulemaking.
One of the other topics that had come up was the
question of implementation. We know that there is some existing
guidance, 96-07, much of which is consistent with what we're proposing;
many of the changes in the rule kind of move them closer
together. There would need to be some adjustment of language just
because of terminology, things like that. In a couple of aspects, more
substantial changes would be necessary, particularly with respect to
whatever language ends up in the margin, if you will, criteria.
And the second part of the guidance bullet is the
question of the guidance for Part 72. The rule
has been that way for a while, but there has not really been any
specific guidance that speaks to how it would be implemented for those
kinds of changes. So we are looking at whether there would be a benefit,
and how we could go about getting some guidance that would be a little
more specific to those types of facilities.
Because of the need for guidance, getting agreement between
the staff and the industry -- if we were to endorse, say, a successor to
96-07 or a revision to 96-07 -- is going to take some period of time.
Once agreement is reached on guidance there is also a need for some time
for licensees to confirm that their programs are consistent with the new
rule language and to train appropriate people. There's also a need for
the staff to make sure that we've reached out to all the right people
and communicated what the new requirements of the rule are and how it's
to be implemented.
These were the considerations that led us to suggest an
implementation period of 18 months, just that we felt we needed time to
make those things happen. It's also recognized that since many of the
changes in the rule are such that the existing programs would satisfy
the rule as we're proposing to modify it, that there may be those who
would want to implement sooner, and therefore we suggested that if that
was the case, if they felt ready to do that, that we would allow that to
happen. And that was the thinking that lay behind what we proposed in
the paper.
DR. KRESS: My impression was that if the licensee is
following the 96-07 guidance as it currently exists, that it would
probably be all right under the new rule.
MS. McKENNA: I'd say in most instances. I think a couple of
areas where it may not be the case were, for instance, in
consequences -- depending on exactly how their program is structured
now, it may not be consistent with the rule in that area -- and then of
course in this margin area, which might be something that would
require some adjustment, just because the way their guidance is worded
may not totally line up with what we're suggesting here in the paper.
So those I think would be the primary areas, and the others
would be more a matter of consistency of terminology, which really don't
impact upon the quality of the evaluations, per se, but just from a
point of view of having procedures consistent.
DR. BARTON: Same question I had.
MS. McKENNA: Okay.
I will just touch on the enforcement side, unless there are
specific questions. I think what we tried to suggest is a recognition
that, you know, during this transition period, we need to be a
little willing to take into account that there may be some bumps in the
road as we try to move to a slightly different rule, and that we would
try to exercise discretion when the issues underlying
the violations are not of significance.
And, for instance, take something that was done under the
existing rule that presently would be viewed perhaps as a violation
because it, you know, was an increase in consequences, for example, but
was not more than a minimal increase in consequences. While it might
not be literally in compliance with the rule as we're viewing it, the
significance is such that we would not issue an enforcement action on
that particular type of issue. And that's what we tried to indicate
here.
DR. KRESS: That's the --
MS. McKENNA: Yes. Yes, I think we indicate in the paper
that after the period of transition, if a change was made that
should have received review and did not, that would typically be
severity level 3, which is what's written in the current policy right
now.
The last slide I had was what I call additional
topics: items that were in Attachment 3 that were issues the
Commission had specifically asked us about in the notice, or that we
otherwise thought were worthy of some particular highlighting, so we put
them in this attachment, and a couple of these I'll mention.
The first one is a matter of definitions. We had a lot of
comments along the lines that the definition of change and the
definition of facility needed to be modified to allow for the
possibility of screening of changes: because changing something
that's in the FSAR, where the change would not in any way impact the
function or the performance of that system, should not require an
evaluation against the criteria, that should be something that
could be screened, and the concern was that the language as written
would not allow for that to happen. So what we did was modify the
definitions to essentially allow that changes that don't affect
functions don't need to have the evaluation against the criteria.
The second bullet was just some additional clarification
of the definition of the facility as described in the FSAR -- that
is, what are those changes that need to be considered in the first place,
as opposed to other kinds of information that may be contained in the
FSAR that don't really contribute to the facility description.
The third one we had here was the consequences. We've had
some proposals for how you would judge a minimal increase in
consequences, and in Attachment 3 we kind of finalized our proposal as
looking at the regulatory limits, the Part 100 or the GDC 19 values, and
at your existing situation, and that 10 percent of that delta
would be what we view as a minimal change in consequences.
We also, as you saw in there, wanted to take note of
these other guideline values that were established for particular events --
such as, for instance, a fuel-handling event, where the standard review
plan would limit the consequences to 25 percent of the full Part 100
type of limit -- and we felt that those should also be taken into account
in considering whether the change being made has more than a minimal
increase in consequences.
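The screening arithmetic Ms. McKenna describes (10 percent of the delta to the applicable regulatory limit, with stricter guideline values such as the 25 percent fuel-handling cap) can be sketched as follows. All the dose numbers here are hypothetical and purely illustrative; actual limits come from 10 CFR Part 100, GDC 19, and the standard review plan:

```python
# Illustrative sketch of the staff's proposed consequence screen
# (values are hypothetical; not taken from any actual analysis).

def minimal_increase_threshold(current_dose: float, limit: float,
                               guideline_fraction: float = 1.0) -> float:
    """10 percent of the remaining margin to the applicable limit.

    guideline_fraction scales the limit for events with stricter
    review-plan guidelines, e.g. 0.25 for a fuel-handling event.
    """
    applicable_limit = guideline_fraction * limit
    return 0.10 * (applicable_limit - current_dose)

PART_100_LIMIT = 300.0   # hypothetical dose figure, illustrative only
current = 150.0          # hypothetical current calculated consequence

# General case: threshold is 10% of the remaining 150-unit margin
print(minimal_increase_threshold(current, PART_100_LIMIT))      # 15.0

# Fuel-handling event: guideline caps consequences at 25% of the full limit
print(minimal_increase_threshold(50.0, PART_100_LIMIT, 0.25))   # 2.5
```

The design choice in the proposal, as described, is that "minimal" is measured against the licensee's remaining margin, not against the limit itself, so a plant already near a limit gets a correspondingly small allowance.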
The fourth bullet was one that had a lot of controversy
associated with it. In the proposed rule, we suggested that the 50.71(e)
language be modified to explicitly say that the FSAR update would need
to reflect cumulative effects: changes on probabilities and consequences
and margins. I think there were a couple of concerns. One, that it
would require some other analysis to look at the combination of effects
rather than the individual evaluations that were done; and, going to
some of our discussions on probability, that since those are done
qualitatively, trying to reflect a combined effect would be too high an
expectation for the licensees.
And I think, finally, to the extent that you're looking
at consequences or some of these other issues where there is an analysis
presented in the FSAR, the existing language, which says that your
update reflect the effects of analyses that you did, would already
capture that, so we didn't need to have additional language in 50.71.
So we have dropped that specific change to the language.
The last bullet I had was an item that the Commission had
asked about: whether there was a need for a definition in the rule of
"accident," whether that's an accident of a different type or an
accident previously evaluated. We received a number of comments
about what those definitions might be and how they might be viewed, but
there was not any great sense that there was a need to include them in
the rule, and so at this point we were not proposing to include such a
definition.
DR. KRESS: The 25 percent -- that is just an engineering
judgment?
MS. McKENNA: On the standard review plan? Yes, I think
that it was a judgment. Again, I think it goes back to the view that,
since that was an event that was viewed as perhaps more likely than the
LOCAs and other events for which Part 100 is the figure of merit, a
smaller value for that more likely event was, I think, the judgement
from an overall risk perspective: you would want to keep the more
frequent events at lower consequences.
That was the material that I had prepared to present to the
committee. I believe we addressed most of the issues. I just want to
make sure that there wasn't anything else.
MR. BARTON: I think your slides did cover the issues that I
asked about: the minimal, and the frequency versus probability, and George
spoke to that, and your position on margin of safety and I think you did
address that. I think those were the major issues.
If I look through the Commission paper and the resolution of
comments, summaries, et cetera, is it fair to say that we are pretty
close to agreement with the industry except for policy issues that you
are asking for some direction from the Commission?
MS. McKENNA: I think that is the case. That is certainly a
question you may ask NEI as well.
MR. BARTON: Russ Bell is up right after you.
MS. McKENNA: Yes.
MR. BARTON: I am sure he is going to talk about that as well.
MS. McKENNA: Yes.
MR. BARTON: And the policy issues are the minimal increase
in probability, margin of safety, implementation, enforcement
strategies, Staff recommendations on scope.
MS. McKENNA: By calling them policy issues in the paper I
was not suggesting that those are necessarily areas where we and NEI
differ. I think in at least one area that is true. The others were --
the margin one is certainly I think the one I am referring to, but the
other areas I think we just felt that the nature was such that they were
things that the Commission needed to consider.
MR. BARTON: Okay. Any other questions from the committee to
Eileen at this time?
DR. SHACK: Since the Commission asked you to define
"minimal" rather than "negligible" you went back and punted on that. Do
you have any suggestions?
MS. McKENNA: Well, there was a fair amount of discussion on
this point at the Commission meeting. In the paper we suggested maybe
we would only be able to get to negligible rather than minimal, and it's
kind of -- there was some question about, well, should we then say
negligible in the rule and that is certainly a possibility.
MR. BARTON: Well, at least it is something the industry has
used for 30 years and seems to understand.
DR. SHACK: But we can all agree that negligible is minimal.
MS. McKENNA: Well, except that the Commission had indicated
that they thought that minimal could in some way be larger --
MR. BARTON: Than negligible.
MS. McKENNA: -- than negligible.
DR. SHACK: Well, negligible is a subset of minimal, yes.
MS. McKENNA: Yes, that I think we agree on. It's how much larger --
MR. BARTON: If the industry is satisfied with negligible,
why muck it up, you know?
CHAIRMAN POWERS: Two orders of magnitude difference between
the two, as I recall the discussion.
MR. BARTON: That's about it.
MS. McKENNA: But that kind of goes to if you could quantify
those orders of magnitude, then we wouldn't be having some of these
discussions, I think, but yes, we did have some dialogue and it was
possible that the word would end up being negligible -- it was a
question of whether you want to have your language more closely match
how you want to implement it or leave open the possibility that we may
be able to move beyond the negligible to the minimal with suitable
guidance in the future, so that I think is kind of where the decision came out.
DR. KRESS: With all your comments from the industry did you
get any from the public?
MS. McKENNA: I'd say if you look at the list that they were
by and large the power reactor licensees, the major vendors both for
casks and -- and NSSS, a few from like law firms primarily representing
reactor licensees, a couple of letters from people who signed themselves
as individuals so I can't speak to their affiliations, and a couple from
nonpower reactor facilities -- but nothing that was obvious as, if you
will, a member of the public or public interest group.
DR. KRESS: Like UCS?
MS. McKENNA: No.
MR. BONACA: I have a question. Assume that on the docket
is an SER that has a commitment in it. For example, say a distance
between panels in the suppression criteria, and you have new
technology that comes in that shows that less distance between panels is
not a degradation and there is no increase in probability of
malfunction. Would the rule now allow for this to be called a minimal
increase, and therefore not to be an unreviewed safety question, or would
it still require that?
What I am concerned about is the backfitting that some
people are concerned about. Backfitting means that you are saying there
is an increasing probability because you have to say it, otherwise you
cannot have a USQ and go for NRC review on a commitment that exists on
the docket.
MS. McKENNA: Okay. I think I want to just comment on a
couple of points.
One is the way the rule is structured. It goes to the
facility description in the FSAR. If you are talking about a commitment
that is not in the FSAR -- is that --
MR. BONACA: Yes. I am talking about an SER.
MS. McKENNA: Then under the rule as written, 50.59 is not
the process that would apply.
MR. BONACA: That answers the question. That is a change.
MS. McKENNA: It is a change. I think you see out there --
the Staff has written a recent paper on commitment management. It is a
topic that has been of some discussion between the Staff and the
industry, and with NEI, of how to deal with commitments that may appear
in documents other than the FSAR.
MR. BONACA: In general I want to say that the changes I see
are very meaningful. I mean I believe that they are going to reduce
significantly the burden to licensees and also make the process more
meaningful. I really think so.
MS. McKENNA: Okay, thank you.
MR. BONACA: I want to recognize that.
MR. BARTON: On that good comment from Dr. Bonaca I think we
will let you off the hook.
MS. McKENNA: Thank you.
MR. BARTON: Russ Bell -- is NEI here?
MR. BELL: Present, sir.
MR. BARTON: All right, you're on.
MR. BELL: Thank you. In the time allotted I think I have
charted a course through this topic. I have few or no slides depending
on how it goes, so rather than a presentation maybe this is a discussion.
My name is Russell Bell and I am with NEI. I am a project
manager on the 50.59 issue. I echo the main objective that I think
everyone is clear on that we started out to fix, and that was moving off
the zero standard and providing the flexibility that was always intended
by this rule for licensees to make changes.
We have got to remember that that was the main objective.
It has been achieved.
The Commission gave us a big boost last summer when they
came down on that and came up with the term "minimal" and directed that
that be incorporated in the rule.
Secondarily I think there's been other objectives at work
here. Let's define the terms that are in there, that we have been using
and ensure that we have a common understanding of the rule provisions.
I think we have come a long way since -- was it '97 when the
SECY 035 came out and NUREG-1606, which raised a lot of concerns. I
think we have come a long way. It has taken some time between now and
then to get to this point where I think we are poised to come to a final
rule.
A couple reasons for that. I call it -- we call it the
"should to shall syndrome" where we thought we had a simple fix to a
rule we had been using, but it didn't turn out to be so simple. There
were other things that, well, while we are at it why don't we clarify
this and let's make sure we understand that, and so there has been more
at play here than just inserting the word "minimal."
Secondly, I don't think you can overstate the reach and
importance of this rule to both the industry and the NRC. For the
industry they use this rule every day, many, many times a day, and it
involves the entirety of the plant staff, from the people doing the real
work up through the senior management. There are reporting requirements
associated with it.
For the NRC this is viewed as their mechanism that ensures
that changes are controlled and there is a degree of control in the
changes that are allowed to be made without prior NRC involvement and
approval, so there is a lot invested in this rule and it has taken some
time to talk through the number of -- the changes.
They haven't changed this rule in 30 years and we are not
doing so lightly now is how I would sum that up, but that said, we are
down to one key issue, which you spent most of the time with Eileen
talking about -- that is the margin of safety issue -- and perhaps some
lesser issues that are a concern perhaps in the consequence area and a
question about the enforcement discretion, but those are not major
issues I don't think. We will continue to say a few more words to the
Staff about that I think but margin of safety is the key one.
By the way, I have some cavalry in the back here. We have
members of our task force on this issue that are here today. Eileen
mentioned there is a meeting immediately following this session for us
and we are going to -- it's been said we are going to lock ourselves in
a room for the rest of the day and get to the bottom of this one
remaining key issue.
MR. BARTON: That is encouraging.
MR. BELL: I think it is an interesting idea and if it works
we will do it more often.
DR. KRESS: On that subject, it seems like ideally you would
like to have it come down on pretty much what Dr. Powers' concept was,
that the limit itself already has enough margin in it that, even though
the codes are fallible and uncertain, there is enough margin already
there that for just about any of the codes' calculated values you could
let it go up to the limit and not worry too much about it.
MR. BELL: I think that's true.
DR. KRESS: That would be a pretty simple, practical way to do it.
MR. BELL: I think that is a core thrust really of our
proposal and what I want to say is promising is I think the NRC Staff
proposal reflects a focus on the design -- we call it the "design basis
limit." They have the phrase "design basis capability."
DR. KRESS: If that were the overall concept, you wouldn't
have too much of a problem then with them focusing a little bit on
components versus systems, maybe?
MR. BELL: They have come at that maybe a little different
way -- a more explicit way than we have, in terms of our more implicit
consideration of those things.
DR. SHACK: Is your problem with their expanded scope? Is
that the real difference?
MR. BELL: Yes and I will get to that in just a minute.
I shouldn't move off without acknowledging the issue of
minimal increase in probability, and I think Commissioner Diaz expressed
some disappointment that neither we nor the NRC Staff had gone farther in
that area.
As a first priority we are going to work out this margin of
safety issue where there is still some distance between us and the Staff
that we need to close to get this rule done, but there may yet be ways
to address Commissioner Diaz's specific concern on minimal increase in
probability, either in the rule itself or in the guidance, so we are going
to keep trying on that as well.
DR. KRESS: That will be very interesting, any thoughts you
have on that, especially when they get around to the future of
risk-informing Part 50.
MR. BELL: Yes. Whatever we do, we want to tee ourselves up
nicely with that, but in any case on margin of safety, if I did have a
slide, you could just pretend you see the words scope, criteria, and
methodology up there.
I think the margin of safety approaches that are out there
have these constituent parts to them and on two of those I think we are
in very -- two of the three I think we are in very close proximity to
the Staff -- scope, criteria, methodology. On criteria I think when we
are talking about the limits, going up to the limits and keeping them
holy and not violating them or changing them without prior approval, I
think we are in agreement. That is the right criterion for determining
when you would need to go seek a license amendment.
We also agree on methodology, where the proposal is to
change from an NRC-approved methodology and some re-analysis or
I think we agree that kind of a change ought to -- truly is
a change in that sense of the word and ought to be subject to prior
Commission approval, and they've incorporated a -- they've proposed a
criterion that gets to that.
Our approach was to focus on that in terms of the guidance,
by the way as we have done. The guidance already addresses this. We
would propose to enhance it. And whether it's in the rule or in the
guidance is something I think we'll discuss this afternoon, again at
this locked-door meeting, but on this one, I think we'll -- we're,
again, in close proximity, and that's not to say we're far apart on that
first item, scope, but I think that is where we need to continue to talk.
Without having the time to go through it in detail, I would
just say that our approach would -- you would perform this margin of
safety review if the change you propose directly affects a parameter
that ensures fission product barrier integrity -- the RCS, the fuel clad,
the containment.
If the change you're proposing violates one of those limits
that we talked about, a design basis limit associated with those
barriers, that's when it would trigger a need for a prior review.
The staff approach, I think, and the point of contrast is
they have explicitly written out that, yes, okay, if you're going to
change one of those parameters that ensures a fission product barrier --
fission product barrier integrity, yes, you need to come see us, but
also -- or if the parameter you're affecting is associated with a
mitigation system or a support system, and so, that's a considerably
broader, you know, scope explicitly than we had proposed, and so, I
think that is the principal point of discussion later today.
You know, I take some heart in that, you know, sometimes it
takes a while to narrow down to the key remaining point of difference,
and I think --
DR. APOSTOLAKIS: Just let me understand the disagreement.
So, suppose that the staff accepted your proposal and a
licensee is about to make a change that would affect the support system.
Then what would you do? What would they do? I mean would that be under
50.59 someplace? Would there be a screening?
MR. BELL: Let's presume it screens into the 50.59 process.
Now, that gets you into all six or seven or eight or
whatever number, the criteria that are under that, and you would
evaluate the change for each of those criteria.
DR. APOSTOLAKIS: Right.
MR. BELL: Only in the event that the change you're doing to
the support system has -- has the ultimate effect of affecting a fission
product barrier parameter, like DNB or RCS peak pressure or containment
peak pressure, if it directly affected one of those parameters that
ensure the integrity of that barrier, then you would go ahead and
complete this margin -- I won't say margin -- this additional, we'll
call it, criterion seven review and determine whether you come up to the
limit, the design basis limit for that parameter, or exceed it, or are
in some way changing the limit, okay?
But what we've done is -- I think the task force -- and we
took to heart Commissioner Diaz's proposal last summer, which was in the
SRM, just delete this criterion. It's redundant to the tech specs and
the rules and the other 50.59 -- it's redundant. We don't need it.
Obviously, that would be a pretty appealing proposal to most
folks, and it was to us, but we did feel we needed to test that approach
and found that there may be gaps -- if you deleted that criterion, there
may be gaps left in the coverage of -- as Dave Matthews might say, in
the fabric of the regulatory controls, and by that I mean the tech
specs, the rules and regs, and the 50.59 criteria, all of them.
DR. KRESS: A little redundancy doesn't hurt.
MR. BELL: No, it doesn't hurt, but you know, this is the
industry talking, and so, we'll probably not go out and seek a redundant
regulation where we don't think it's necessary or serves --
DR. APOSTOLAKIS: Do you have any quantitative criteria?
DR. SEALE: Pardon me. Your comment brings a question.
Have you ever caught anything in that net when you've applied it in the
past? I mean is it truly redundant in the sense that -- has it been a
useful aid in identifying problems that otherwise you would not have
found?
MR. BELL: In fact, based on the experience of the task
force members -- and that's eight or 10 utilities represented -- the
margin of safety criterion that exists today -- that's the third of three
criteria in the existing rule -- has gotten very little exercise. It's
rarely been the determinant one.
DR. SEALE: It might be interesting to see what the staff's
response to a similar question would be.
MR. BELL: They haven't disagreed with us in the past when
we've -- and it's sort of a qualitative answer I'm giving you. We have
not been able to ascertain that. And there may be a reason for that.
In some cases, where people -- based on the way the criterion
is worded today, they feel they might trip over it, and they do an
initial test. They may withdraw the change that they were
contemplating. They might do something else to avoid that.
And so, you'd have to take any data or result you found from
surveying that with that grain of salt.
But in any case, if you took away the criterion completely,
there may have been gaps, or at a minimum, to fill in those gaps or gray
area, you may have had to complicate the remaining six criteria through
the guidance or through changes in the rule words, and again, these are
criteria that people have been answering questions and the people in the
field have been filling out these evaluations for years, and there was a
reluctance to -- I think to meddle too much with the other criteria for
the sole purpose of doing away with this one.
Rather, our premise, I think, was to comment that, well, if
we had the properly focused criteria here, it might be useful, it might
be the right thing to do, it might be the most straightforward way to
capture this small set of things that might have been left out had the
criteria been deleted.
So, that was the task force approach, to design a criterion
that was complementary and not redundant to the fabric of the regulatory
controls, and so, with that going in, I guess we're understandably
sensitive to, you know, a proposal that would appear to be, you know,
redundant.
If a change to a mitigation system or a support function, as
I said, ultimately affected one of those key controlling parameters that
ensure the barrier integrity, then it does trip over our criterion, and
you'd have to evaluate it in terms of the design basis.
DR. BARTON: If you can come to agreement with the staff on
this, you're probably going down the right path on this margin question, I
think.
MR. BELL: I think so. We're very comfortable. In fact,
you know, the staff has said that our proposal might be too narrow, and
I think that word "might" is the reason that we're going to meet later
today and for the rest of the day.
Is it or isn't it? We'd like to know. We've been asking
and testing our proposal ourselves for several weeks and continue to
feel good about it.
The staff has provided us another set of nine or so examples
that we'll talk through this afternoon and we'll exercise. That's our
intent, is to exercise our approach on those examples and demonstrate
how, again, viewed in the entirety of the regulatory fabric, you know,
none of them would slip through any cracks by going with the industry
proposal, and that's the idea for the remainder of the day.
In fact, as I said, I don't think there's time, even if I could
do it justice, to go through and detail our proposal, but to the extent
you're available or any members of your staff, the meeting is over in,
what, 10B4, you know, right after this.
I suppose you have a full agenda today, the committee does.
DR. BARTON: Maybe we can get somebody, one of the staff
members or something, to sit in on part of the meeting.
MR. BELL: That concludes my remarks. We're encouraged over
all, and we're encouraged that we're down to the fine point of a single
DR. BARTON: Well, it's encouraging to see that the industry
is encouraged on this issue.
MR. BELL: Nobody would like to be done with this issue more
than I would.
DR. APOSTOLAKIS: So, you are satisfied, then, that you
understand how a minimal change in probability will be determined, or
you're trusting that there will be no problem because, in the past,
there was no problem.
MR. BELL: I think there was even a legal concern brought
up, can this rule go forward using that term, which --
DR. APOSTOLAKIS: Technically, it cannot. Now, I don't know --
MR. BELL: Somebody said here, at a minimum, negligible
meets the definition, and so, I am comfortable, for what that's worth.
I think the industry is comfortable moving forward using the term
"minimal."
In fact, we'd recommend it to the Commission. We left this
message with them, that they keep the term minimal. It serves a good
purpose.
It expresses the Commission's desire to be a bit more
generous in the flexibility, and it's implementable, because we have at
least one known definition for the term, and that is the one in 96-07,
and I guess two other ways to look at that, the term "minimal" builds in
room to grow as an industry and as an agency as we move down the
risk-informed path, perhaps we can better define or push the envelope on
that, and the only other thing I'd say is I think we -- the industry
routinely conservatively implements agency regulations, and this would
be another example of that, where we are just on the conservative side
of a --
DR. APOSTOLAKIS: So, you trust that the inspectors will
interpret this in a fairly consistent way.
DR. BARTON: I didn't hear him say that, George.
DR. APOSTOLAKIS: That's why I'm saying you trust.
DR. SEALE: It has room to give and to take, and that's the
problem as far as you're concerned.
DR. BARTON: I think part of this implementation, 18-month
period that Eileen talked about would involve --
DR. SEALE: -- giving and taking.
DR. BARTON: Yes, sure, and a lot of training of the
inspectors as to how to interpret or how to enforce or inspect this new
rule.
DR. APOSTOLAKIS: I don't know how you can train people to
enforce a fuzzy concept, but if you guys are successful, I would like to
participate in one of those and learn myself how to do that.
This is really so ill-defined that I really believe that the
only reason why you want it is because it has worked in the past, and
you are saying now, well, you know, why not in the future.
Unless people start misinterpreting the word "minimal" or
"negligible" now, and then you're going to have problems with that.
CHAIRMAN POWERS: Especially when you have somewhat
different treatments of "minimal" with respect to consequences, with
respect to frequencies, with respect to methodologies.
My question on that comes down to one of, is this really
complying with your idea of clarity in regulations when you have
multiple definitions of the same word?
MR. BELL: The way I look at that is I think it's a phrase
we're defining. Let's take the phrase "minimal increase in
probability" -- or "likelihood"? Is that the word now?
DR. APOSTOLAKIS: Yes. For the record, "likelihood" and
"probability" mean exactly the same thing. Let's not create new theory
here.
MR. BELL: Let's take the phrase minimal change in methods,
and, by the way, that was the staff's -- but that is another phrase we
could define, and a minimal increase in consequence, let's take that as
another separate and distinct phrase that we can define. It does use --
it does have a common word in there.
DR. BONACA: For the record, I mean minimal increase in
probability, it has always been assumed that it meant that it did not
impact the conclusions or considerations which were embedded in the
original licensing. Okay.
DR. APOSTOLAKIS: Integrity.
DR. BONACA: Exactly. So, to some degree, that should be
able -- we should be able to translate that in some concept in the
guidance, because it was not a stupid concept -- I mean it had that
specific meaning, which is that the conclusions you drew before are
still supportable. Well, now, I don't like "negligible" -- it implies
a subjective judgment, and, you know, what is negligible to me may
not be negligible to someone else.
Now, minimal, conversely, you could put a definition, and I
really still believe that, based on, you know, some reference to the
previous conclusions and considerations, some definition of minimal
should be feasible. Without it, I totally agree with you that it --
DR. APOSTOLAKIS: Mario you just said, and Eileen also has
told us that, that the purpose here is to preserve the integrity of the
original license.
DR. BONACA: Yes.
DR. APOSTOLAKIS: Where in the original license did they
talk about probabilities and their function?
DR. KRESS: Well, it is implied, George.
DR. APOSTOLAKIS: But do they use the word?
DR. KRESS: Yes, it is actually in -- it is implied in the
CD, in the design basis accidents, and they use the word probabilities.
DR. APOSTOLAKIS: It shows up?
DR. KRESS: Yes.
DR. APOSTOLAKIS: Clearly, something they have to say.
DR. BONACA: Not having the capability, they were -- for
example, in accidents, you had very broad classifications of accidents,
and you separated out the anticipated transients.
DR. APOSTOLAKIS: That I know.
DR. BONACA: But also -- but also there were many other
applications of it. It was not based on quantitative assessment in all
cases. In some cases it was. I mean there were evaluations done and
low expectations set on systems. So I don't think we should be -- but I
agree with you that I am uncomfortable with no definition whatsoever of
minimal, and I also believe that some definition may be possible in the
terms I described before.
DR. APOSTOLAKIS: I think they already have something that
is good, possible new failure mode. I think that makes sense.
MR. BELL: You were saying before do away with the term.
DR. APOSTOLAKIS: Yeah, do away with the term and just say,
you know, if you have something new there that is significant, --
DR. KRESS: I am a little uncomfortable with that, George,
because I can change the magnitude of an existing failure without having a
new failure mode.
DR. APOSTOLAKIS: Yeah, but that is a little difficult,
DR. KRESS: That's why I thought --
DR. APOSTOLAKIS: Conceptually, you can do it.
DR. KRESS: That's why I thought they said it was too
limiting, it doesn't include all the changes.
DR. APOSTOLAKIS: Can you give me an example where you can
actually change the probability of an existing failure mode?
DR. KRESS: I can't come up with one.
DR. APOSTOLAKIS: It depends on what you call failure mode.
See, all these are fuzzy concepts.
DR. BONACA: You use fuzzy logic.
DR. APOSTOLAKIS: Then you are really fuzzified. Okay.
SPEAKER: Take a train out of service.
DR. BARTON: Do we have any other comments, questions for
Russ?
MR. BELL: Thank you very much.
DR. BARTON: Thank you, Russ, and thank you, Eileen.
Any other comments from industry or the public?
DR. BARTON: Hearing none, I turn it back to Mr. Chairman.
CHAIRMAN POWERS: We are going to take a break now until 20
of the hour. I note that our cognizant member for the next presentation
is still snowbound someplace. I wonder if any other members of the
Thermal-Hydraulics Committee are prepared to assume his role.
DR. KRESS: Yes, I will take care of that, Dana.
CHAIRMAN POWERS: Fair enough. We will recess until 20
minutes of the hour.
CHAIRMAN POWERS: Let's come back into session. Our next
discussion is on some of the new -- one of the new technologies, and I
will turn to you, Dr. Kress.
DR. KRESS: It is not necessarily new. This session is
about use of the Westinghouse COBRA/TRAC code. It is a best estimate
methodology for application to the upper plenum injection plants, which
are the two-loop Westinghouse plants. The methodology is not much
different for this than the previously approved methodology for
Westinghouse COBRA/TRAC that was used for the four- and three-loop
plants. So this is an extension of an already approved methodology,
and the difference is, of course, when you inject in the upper head, you
are injecting the ECCS water against the steam flow, and then the
conditions, the physics, may be quite different than when you inject in
the cold leg or another region.
So the meeting we had on this was a subcommittee meeting.
There were three of us there, the Chairman was Graham Wallis. He had a
little flight trouble, so I am sitting in in his stead at the last
moment for this. But Graham was there.
CHAIRMAN POWERS: Actually, to be clear, it is the first
moment. Could you give us some background on why people would go on to
these best estimate methodologies?
DR. KRESS: Well, in the first place, the Appendix K
methodology is purposefully conservative. If you go to the -- and it
has unquantified margin. If you go to the best estimate methodology,
you know better where you are at, you have a better idea of where you
are at, and you are able to get rid of some of those conservatisms, or
at least you can have a better quantification of what those
conservatisms are, and it will -- it can allow the licensee a little
more leeway doing things to the plant that -- like power uprates and
things. It gives him a little more room if he uses best estimate
methodology, because it gets rid of some of the unnecessary
conservatisms that are in the other process. So it is attractive to go
that route because it gives them some flexibility.
The meeting we had on, I guess it was February 23rd -- we
had a previous one, I forget which date, sometime in December.
MR. BOEHNERT: December 16th.
DR. KRESS: That's okay. Sixteenth. We were focusing
mostly on the differences between upper plenum injection and the
previously approved methodology. With the three and four loops, the
injected water goes down and floods the core from the bottom up, and you
are not -- you are not really injecting it against the steam flow. So
the question is, well, if you inject the water in the upper head, how
does it get into the core to cool it? Does it -- there are
counter-current flow limits and does it manage to get into the core
against the steam flow and do an effective job of cooling?
Well, it sort of reminds you of our reviews of the old AP600
-- not the old one, the AP600 passive ECCS systems. The physics in the
codes were not all that great, at least the consultants, they weren't
too happy with the physics, but the system itself is robust. In fact,
in an AP600, no matter what you did, you kept the core covered. I think
this is a little bit similar. Even though you have got a lot of strange physics
going on, you have got condensation in the upper head, you have got
counter-current flows, you have got breakthrough, you have got flows
that are crossing over in the core itself and stuff, it just so happens
that, based on some of the experiments, no matter what you do, that flow
does eventually get to the bottom, the injected water, and it does seem
to cool from the bottom up. So you end up with the figure of merit
being the peak clad temperature. It seems to be rather insensitive to
CHAIRMAN POWERS: Now, I take it from your comments that we
have large integral experimental data that allow us to understand these
physics and to know their effect?
DR. KRESS: We have some integral experiments. They are the
upper plenum test facility and the CCTF, one in Germany and one in
Japan, as part of the 2D/3D program, that had some tests that were upper
plenum injections. They weren't all upper plenum injections, so there's
a limited amount of tests. There's other data out in the field on
counter-current flows, I think, in small openings and things,
which was called into play.
But with that being -- a sort of perspective I have is that
the physics may not be all that great, but the figure of merit is not
very sensitive to the physics at play. And the question is, how does
that impact whether or not you approve a best estimate code for a
specific application? If it doesn't matter, why, how good do the
physics have to be, is the question.
DR. SEALE: Tom, I have -- listening to that presentation at
that last subcommittee meeting, I am coming to a personal conclusion
that what it amounts to is that we do mass and energy balances and that
is about the only physics that we have any rigor with. The rest of it,
we desensitize the calculation in the name of conservatism, to the rest
of the physics. And so what you have is a mass and energy balance and
that's what we call the margin calculation that was the basis for
licensing.
And if you think about it that way, and you put in --
recognize there are one or two cases, like choked flow and so on, where
you do add a little bit more physics to the analysis of these kinds of
things, it is not surprising that the rest of the physics doesn't
matter.
DR. KRESS: That's one way to look at it. But with that
convoluted -- you didn't want to make any comments? Boyer was also at
this meeting. The three of us were there along with Graham and our two
DR. FONTANA: Let's see what Dr. Nissley has to say.
DR. KRESS: Okay.
DR. POWERS: Well, let me ask you another question. It
seems to me that the hallmark of the best estimate calculations is the
ability to confront uncertainties, and though you've told us that this
is insensitive, will we go into how they confront uncertainties?
DR. KRESS: Well, they certainly did at the meeting, and it
was more -- more of a sensitivity analysis than an uncertainties
analysis, more in -- but there was some uncertainties analysis done. I
don't know if we have that on the agenda to cover or not, but it was
certainly covered at the meeting.
DR. FONTANA: Part of the problem there is that you start an
uncertainties analysis or a sensitivity analysis from some baseline
point. If you're not real sure what that baseline point is, you're not
real sure where the rest of it is either.
DR. POWERS: I guess I don't follow that, I don't
understand why sensitivity analyses are useful in speaking of
uncertainties. Maybe I'll be smarter once I've listened to the full
presentation.
DR. KRESS: Well, that can be useful in uncertainties if
your sensitivity variation is over a range -- for specific
parameters -- that covers what you think the uncertainty range is. The
sensitivity tends to be a type of uncertainties analysis also, but it's
-- you know, it's generally one-at-a-time sensitivity as opposed to --
DR. POWERS: I think that's the thing that bothers me most.
DR. KRESS: Yeah, and that's the shortcoming in it.
DR. POWERS: I think that would be unacceptable, in my mind.
DR. KRESS: But there were some true uncertainties --
DR. POWERS: Okay.
DR. KRESS: -- also, though, in the uncertainty sense you're
thinking of. But I'll turn it over to Mr. Nissley and see what he has
to say.
MR. NISSLEY: Thank you.
My name is --
DR. KRESS: Did we do too much damage to your talk?
MR. NISSLEY: No. I think you set it up pretty nicely.
There are -- on a lot of the details of the
PWR predictions, the presentation as prepared does not have a lot of
information there. Most of the debate had to do with the adequacy of
the Code assessment matrix and the predictions of the data, and the bulk
of the material does focus on that. I do have some backup slides that
we can go off and elaborate on some of those issues if need be.
My name is Mitch Nissley, I'm with the Westinghouse Electric
Corporation, and as already indicated, the purpose of this presentation
is to discuss the extension of the best estimate methodology to plants
with upper plenum injection.
The objectives of this presentation foremost is to review
the application of the Code scaling, applicability and uncertainty
methodology to plants with upper plenum injection. The CSAU methodology
consists of 14 major steps. The ones I'm going to focus on are the ones
where the phenomena unique to upper plenum injection plants require the
The first of these is the identification and ranking of
phenomena, which you've heard referred to frequently as the PIRT. One
outcome of the PIRT is the set of important phenomena that need to be
addressed and the identification of a code assessment matrix, step
number seven, which
allows you to compare the code capabilities against tests that have the
Once you've set up that assessment matrix, you then look at
both separate effects test and integral effects tests. One of the key
features of the CSAU methodology with regard to separate effects test is
to range parameters and models to look at the sensitivity of the
separate effects response to variations in the code models. That is
done in step 9.
In steps 12 and 13 -- once you take the results of the
ranging studies done with separate effects tests and also the
integral effects test comparisons, you apply those ranged parameters to
the PWR calculation and use that information to determine how to
combine the uncertainties in the model with other uncertainties -- there
was a point made earlier about these being one-at-a-time sensitivities.
In the early part of the application of the methodology, the
uncertainties are done one at a time, but before determination of a
final overall uncertainty, they are actually looked at in combination.
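The approach described above -- one-at-a-time ranging followed by a combined uncertainty estimate -- can be sketched generically in a few lines. This is a hypothetical illustration only: the response surface `pct_estimate`, its coefficients, and the multiplier ranges are invented stand-ins, not the Westinghouse model or the actual CSAU response surface.

```python
import random

def pct_estimate(drag_mult, cond_mult):
    """Hypothetical response surface for peak clad temperature (deg F).

    Stands in for the real PWR calculation; the coefficients are
    invented purely for illustration.
    """
    base = 1600.0
    return base + 150.0 * (drag_mult - 0.6) + 100.0 * (cond_mult - 0.6)

random.seed(0)

# One-at-a-time sensitivities over an assumed ranged interval [0.2, 1.0]
one_at_a_time = {
    "drag": pct_estimate(1.0, 0.6) - pct_estimate(0.2, 0.6),
    "cond": pct_estimate(0.6, 1.0) - pct_estimate(0.6, 0.2),
}

# Combined treatment: sample both multipliers together and take the
# 95th percentile of the resulting PCT distribution.
samples = sorted(
    pct_estimate(random.uniform(0.2, 1.0), random.uniform(0.2, 1.0))
    for _ in range(10000)
)
pct95 = samples[int(0.95 * len(samples))]
print(round(one_at_a_time["drag"], 1), round(pct95))
```

The point of the combined step is that the 95th-percentile figure reflects both parameters varying at once, which a one-at-a-time scan by itself does not give you.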
Another objective is to demonstrate some of our major
conclusions here. We have a number of conclusions, but I would like to
just stress two that we're going to visit several times in the
presentation.
One is that the cooling of the high-power regions in the core is
by bottom-up reflood -- you do get water down in portions of the core
that are the low-power regions, but the cooling of the high-power
regions is by a bottom-up reflood process.
Also, we intend to demonstrate that ranging of the
interfacial condensation and the interfacial drag is an appropriate way
of dealing with the uncertainties and phenomena unique to upper plenum
injection plants.
DR. SEALE: I need to go back to a question you raised
earlier or a point you made earlier. You suggested that what you were
doing in this case was to take the calculation in pretty much the way it
has been used with the code and so forth and look specifically at those
things that had to do with the unique aspects of upper plenum injection.
MR. NISSLEY: Correct.
DR. SEALE: I can understand how that might be attractive to
you, but on the other hand, you only get a kind of bobtailed result in
the sense that you may have a closer-to-best-estimate result on the
upper plenum injection part of the analysis, but it's embedded in an
overall analysis that's a bounding analysis. And so I'm not convinced
you wind up with a best estimate result in the fuller sense of the word.
MR. NISSLEY: I believe I would agree that there are some
conservative aspects of this model that are maintained.
DR. SEALE: Sort of sailed through in spite of the --
MR. NISSLEY: That are maintained so that when you do
estimate a 95th percentile PCT, there is a -- it's a conservative
estimate of that probability. We would agree with that, yes.
DR. SEALE: Okay. So we shouldn't try to convince ourselves
that it is a fully best estimate result. Okay.
DR. POWERS: What seems to me -- this is very significant,
what you've said here -- is that despite all these complexities that might
exist on counter-current flow at the top, what really counts is the
bottom-up reflood.
MR. NISSLEY: That's one piece of it. Another piece of it
is that the accumulators are still located in the cold leg, as is the
high pressure coolant injection. Now, the high pressure coolant
injection is only about 25 percent of low pressure injection in the
upper plenum. However, the accumulator flow is about eight to ten times
that and is sufficient to refill the downcomer to the loop level and
start off reflood before the upper plenum injection even becomes part of
So the early part of reflood and a lot of the transient is
really dominated by that accumulator performance, which is in the cold
leg as it was before.
DR. POWERS: Clever, those guys that put in accumulators.
DR. KRESS: My original thought on that, Dana, was that the
real difference is that when you inject into the upper plenum, there is
a possibility that you can carry some of that stuff out by the steam.
DR. POWERS: Yeah. You would never get it in at all.
DR. KRESS: You would never get it in down there.
DR. POWERS: Understood.
DR. KRESS: The question is how you deal with that issue,
how do you show that that's not a significant part of the problem.
DR. POWERS: It's different than worrying about whether you
get 90 or 95 percent in. If it were that sensitive, then it's a very
big deal. If it's whether you get 50 or 25 percent in, it's much less
sensitive on the physics presumably, and this idea that it's the
bottom-up flooding of the high-power regions I find very significant
because that's technology you have a better handle on.
DR. KRESS: And I think the value of the tests we talked
about were just to give us some assurance that that sure enough is the
way it seems to work in the test.
DR. POWERS: Good. Good. Because these counter-current
flow things in the chemical industry are a nightmare. But the
difference between 90 and 95 percent is the difference between profit
and going broke.
MR. NISSLEY: This figure shows the overall picture of the
reactor vessel in an upper plenum injection plant. I would like to
point out just a few general features, and then I'll go on to a
schematic that illustrates more or less the general flow pattern in a --
following a loss of coolant accident during the upper plenum injection
The core is located in this region below an upper core
plate, which is a large plate that has openings in it which we will
refer to as either the upper core plate or a perforated plate throughout
the presentation.
The upper plenum injection actually takes place at the same
axial location as the hot leg nozzles. It comes through the core barrel
as a jet, goes in, and impinges on a forest of structures in the upper
plenum which are the guide tubes which the control rods ride up and down
within, and also what are called support columns, which provide
structural support between the upper internals up here, the upper head,
and the upper core plate. So we have a jet coming in here, impinging on
these structures, breaking up into films, drops, whatnot. We will
show some parametric studies where we looked at how you model that
injection source, because it's a very complicated flow pattern,
actually -- a range of types of liquid: sheets, films, draining
films, and droplets -- and we've done some studies to try and see how
sensitive the behavior in the upper plenum is to how you assume that
injection is introduced.
This is a schematic illustrating the behavior during the
upper-plenum injection phase. As you can see, the accumulators at this
point are done injecting and have established a level in the downcomer well
above -- above the top of the core, and so you do have a significant
driving head from that water that was delivered by the accumulator.
Within the upper plenum you have subcooled liquid being
injected. You have steam upflow, so you have condensation going on.
You also have the potential for the upflowing steam to entrain droplets
out the hot leg, which could contribute to steam binding.
And what we're going to do is look at some of the
experiments, both separate-effects tests and integral-effects tests,
that look at -- really focused on the behavior of what's going on at the
top of the core and in the upper plenum here both in terms of drain to
the core and entrainment out the hot legs.
Step 3 of the CSAU process involves identification and
ranking of phenomena. I'll focus on some of the parameters that are
more highly ranked for upper-plenum-injection plants. The PIRT process
for a LOCA breaks the transient down into three phases -- blowdown,
refill, and reflood. And the columns that I have here are CSAU -- this
would be the three and four-loop work that was sponsored by the NRC.
And then the next two columns would be the Westinghouse application to
upper-plenum-injection plants, and also our previously approved rankings
for three and four-loop plants.
Again, as I walk through here, you can see a high number
indicates a very high ranking. What we have for the hot assembly
location is that it has the potential for being more highly
ranked in an upper-plenum-injection plant. The reason is that in the
upper plenum there are a variety of different structures. We talked
about support columns and guide tubes. These affect the vapor
velocities at the top of each assembly location, and some of them are
more restrictive from a CCFL point of view than are others. And so one
step of the UPI methodology is to identify the most limiting hot
assembly location, what structure it's located underneath, and carry
that through the analysis.
We can't preclude where the hot assembly is going to be
located, so that is a bounding assumption that we can't defend anything
other than that.
Entrainment and deentrainment is ranked highly. This is
intended to reflect several things. You have the jet coming in
impinging on the structures. You're going to break that up. Some of
it's going to deentrain then on the structures as the droplets and
sheets and films fall down. It also refers to the amount of entrainment
that gets carried out the hot legs and potentially leads to steam
binding. That is viewed as a more highly ranked parameter for
upper-plenum-injection plants.
Phase separation has to do with development of a pool in the
upper plenum. Countercurrent flow drain and fallback is highly ranked
because this is really the mechanism that controls how much water from
the upper plenum gets down into the core.
And condensation participates here in several different
ways. One is that it affects the subcooling of the water at the upper
core plate, and the subcooled countercurrent flow behavior tends to be
less limiting than saturated CCFL because of the ability to condense some
of the vapor upflow, so the velocities are decreased.
Another effect of the condensation is to decrease the amount
of steam available in the upper plenum to entrain liquid out the hot
legs.
So what we're going to do here is take these highly ranked
phenomena and go through and look at separate-effects tests and range
some of these parameters and get a feel for the sensitivity of the
results to variations in the parameters and also our ability to predict
the phenomena themselves.
In step 7 we developed the validation assessment matrix for
the code for the purposes of looking at the phenomena for upper-plenum
injection. I have these grouped in terms of separate-effects tests and
integral-effects tests. For subcooled countercurrent flow we really
have two situations we're interested in. One is what's going on at the
top nozzle right above the top of the fuel, sometimes referred to as the
tie plate. The General Electric CCFL tests that we're going to show
allow us to examine our ability to predict those phenomena.
We also have CCFL going on at the -- potentially at the
upper core plate. The upper-plenum test facility in this particular
test configuration can be viewed as a separate-effects test. The
facility has the capability to do integral-effects testing, but in this
configuration, you're really applying fixed boundary conditions to the
upper plenum, allowing you to consider this as a separate-effects test
that's looking at a limited number of phenomena.
For integral-effects tests, here we have feedback between
the heat generation in the core and what's going on in the upper plenum,
and we're going to look at some CCTF tests which were done in Japan.
And again the subcooled CCFL we can get useful information
from the GE tests and UPTF. In terms of entrainment and deentrainment,
the jet impingement effects and how much gets entrained out the hot leg,
we can look at that in UPTF. There's also available information to make
some assessments from the CCTF tests. And in terms of the amount of
condensation that takes place in the upper plenum, again we have data
available to make some assessments for UPTF as well as the CCTF integral
I want to start off with the GE CCFL tests. This is where
we went in and did our initial ranging of parameters. The GE CCFL tests
are set up as a part length BWR rod bundle that had a prototypical GE
tie plate. There was some discussion about the validity of these test
results to the PWR design. This is not in your package, but let me show
you this quickly.
This is the GE tie plate where the dark areas indicate the
area open for flow and for draining down into the rod bundle. And this
figure illustrates the top-nozzle design for the Westinghouse UPI plant
that we're talking about. You can see there is some difference in the
flow pattern and the general arrangement, but they're both fairly porous
plates, and if you look at some of the literature in terms of how plates
can be characterized, these plates do fall into the same class of
CHAIRMAN POWERS: I'm not really sure what I'm looking at here.
MR. NISSLEY: Okay. I'm sorry. The dark areas in both of
these figures are the area where flow can drain down into the rod
bundle. So it's the open part of the plate. The parts that aren't
colored in are solid, horizontal plate.
CHAIRMAN POWERS: And the circles are just fuel rods.
MR. NISSLEY: This is where the thimble tubes are. This is
where the control rods would ride up and down inside of these.
I'll show a tie plate from the CCTF test in a little while.
Back to the GE tests, the injection in four out of the five
tests we looked at was subcooled by about 115 degrees, which is a little
less than in a typical UPI plant, but it's the same general ball park.
In these tests -- the tests were performed by starting out with a drain
flow and no upflow of steam, and then steam was increased in order to
map out when CCFL would occur at the tie plate. In addition to the
measurements of drain flow as a function of steam upflow, there were
also measurements of the liquid temperature within the rod bundle of the
draining fluid that allows us to make some assessments of the
condensation models and how they are behaving.
Let me quickly show a schematic or a sketch of our model of
this test facility. The five tests which were performed are listed up
here. They were ranged in terms of the amount of the liquid injection
flow into the region above the tie plate. As you can see, in this case
it's in degrees C, four of these are subcooled by about 115 degrees and
one of them is subcooled by about 15 degrees. It's essentially a
saturated test.
And again we had a steam -- or initially started out with a
drain down through the bundle with no steam upflow, and then steam was
increased in a stepwise manner until we had the onset of flooding and we
could map out the flooding curve.
What these tests showed was that we had a tendency in the
subcooled tests to predict flooding at a lower steam flow rate than
observed in the experiment, and this is without doing anything to the
models that were in the code as it was licensed for three and four-loop
plants. What this shows, basically, is time-dependent behavior -- it
illustrates the test as it proceeded with time.
We started out with a fixed drain flow rate and went across,
increasing the steam injection rate, until CCFL was observed and
flooding was established. The triangles in this case are the
data. What we saw was that we predicted the onset of
flooding at a lower steam injection flow than was observed in the
experiment. So the models as is, without any kind of ranging, are
conservative for the subcooled CCFL tests.
DR. KRESS: When you measured the drain rate, did you
collect it in buckets at the bottom?
MR. NISSLEY: Yes, it was measured at the bottom of the test
section.
What we did was we went in and we ranged one at a time the
interfacial drag and the interfacial condensation to see the relative
contribution to these results from each of those models.
I'm going to switch now from one of the subcooled tests --
all four of those tests behave like the one I just showed -- to a
saturated test. And this is presented in a little bit different
fashion. Typically flooding curves are laid out with the parameter
being either a Kutateladze number or a Wallis number. With the plate
characteristics of the GE plate and also the Westinghouse plate, the
Kutateladze number is the appropriate formulation.
CHAIRMAN POWERS: I think I understand the Kutateladze
numbers. I'm thankful Mr. Wallis is out here, because I haven't got a
clue what a Wallis number is. It's got to be the ratio of two physical
phenomena to each other. What are those ratios?
MR. NISSLEY: Well, they're really a dimensionless velocity,
and for the Wallis number, it has the hydraulic diameter reflected in
it. A lot of the early work was done with very simple geometries like
small pipes, and the hydraulic diameter is important in certain types of
geometries, in particular small pipes, because what you can get is you
get like, as the liquid is draining downward on the inner pipe surface
and the vapor is flowing upward, you start to get to the point where
you're bridging the liquid film across the gap. And therefore the
hydraulic diameter is important.
If you go to a large-pipe geometry or to a thin plate or
larger hydraulic diameter plates such as these, that hydraulic diameter
is no longer important. And what the Kutateladze number is looking at is
really the fluid-to-fluid interactions, irrespective of the wall
effects, if you will.
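The two dimensionless groups being contrasted here have standard textbook definitions: the Wallis parameter is a dimensionless gas velocity carrying the hydraulic diameter, while the Kutateladze number replaces that length scale with one built from surface tension. A minimal sketch, using approximate saturated steam/water properties at atmospheric pressure (the property values and the example velocity are assumptions for illustration):

```python
from math import sqrt

G = 9.81  # m/s^2

def wallis_jstar(j_g, rho_g, rho_l, d_h):
    """Wallis parameter: dimensionless gas velocity, depends on hydraulic diameter d_h."""
    return j_g * sqrt(rho_g / (G * d_h * (rho_l - rho_g)))

def kutateladze(j_g, rho_g, rho_l, sigma):
    """Kutateladze number: uses surface tension sigma instead of a geometric length."""
    return j_g * sqrt(rho_g) / (G * sigma * (rho_l - rho_g)) ** 0.25

# Approximate saturated steam/water at ~1 atm
rho_l, rho_g, sigma = 958.0, 0.60, 0.059  # kg/m^3, kg/m^3, N/m
jg = 2.0  # assumed superficial steam velocity, m/s

small_pipe = wallis_jstar(jg, rho_g, rho_l, d_h=0.025)
large_plate = wallis_jstar(jg, rho_g, rho_l, d_h=0.25)
ku = kutateladze(jg, rho_g, rho_l, sigma)

# The Wallis parameter changes with geometry; the Kutateladze number does not,
# which is why it is preferred for large, porous plates.
print(round(small_pipe, 3), round(large_plate, 3), round(ku, 3))
```

The same steam velocity gives a different Wallis parameter for each hydraulic diameter but a single Kutateladze number, consistent with the point that wall effects drop out for large openings.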
For the saturated CCFL, again, if you look at these in terms
of the test behavior versus time, you're starting out here, you're going
up, increasing the steam flow until you hit flooding, and then you're
going over, maintaining the flooding, and then as you decrease the
steam, you go back down in the opposite direction.
For saturated countercurrent flow our models actually do a
pretty good job. If you look, we do hit the flooding curve right at
about just a little bit below, we're slightly conservative here.
We went through and we ranged interfacial drag. We used
several values, and all this is, is a multiplier on the interfacial drag
between the liquid and the vapor.
DR. KRESS: That's between droplets and vapor?
MR. NISSLEY: It's both between droplets and vapor and film
and vapor. We have a three-fluid model where we have both a liquid film
representation and a droplet representation. So this is applied to the
liquid film-vapor interface and the droplet-vapor interface.
We looked at a multiplier on drag of .5. We looked at
another value, .01, and what this showed is that this
allows us to actually bracket the data, where the predicted flooding
actually goes slightly beyond the flooding curve and the data.
So, the main two points of this particular slide are that
we're less conservative for the saturated CCFL
case with our nominal models, and that if you decrease interfacial drag, as
you would expect, the flooding becomes less limiting and you actually
bracket the data via that mechanism.
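The direction of that drag-ranging result can be seen from a toy force balance, not from the code's actual three-fluid closures: if flooding occurs when interfacial drag on the draining liquid just balances gravity, then scaling the drag by a multiplier C shifts the flooding vapor velocity by 1/sqrt(C), so reduced drag means flooding at a higher steam flow, i.e. less limiting. All coefficients below are illustrative assumptions.

```python
from math import sqrt

def flooding_velocity(drag_mult, k_drag=1.0, grav_term=1.0):
    """Toy flooding criterion: drag_mult * k_drag * v**2 = grav_term.

    Solves for the vapor velocity at which interfacial drag just holds
    up the draining liquid. k_drag and grav_term are placeholders, not
    physical closures.
    """
    return sqrt(grav_term / (drag_mult * k_drag))

nominal = flooding_velocity(1.0)
reduced = flooding_velocity(0.01)

# Cutting interfacial drag to 1% raises the flooding velocity tenfold:
# more steam flow is needed before CCFL limits the drain, so flooding
# becomes less limiting, matching the direction seen in the ranging study.
print(nominal, reduced)
```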
I'm going to switch back to a sub-cool test here, and one of
the things we saw from -- we've already shown that the sub-cooled
flooding predictions tend to be conservative.
We also looked at the fluid temperatures which were measured
in the draining liquid flow within the rod bundle, and within our normal
or what I'll call our nominal models, where we have a multiplier -- flat
multiplier of 1.0 -- in other words, no deviation at all from our normal
interfacial condensation -- what you can see is that, starting from the
injection temperature, which is about 38 degrees C, all the
condensation is taking place right in the vicinity of the plate,
immediately above and immediately below it. What's happening as a result
of that is that you're actually condensing some of the fluid above the
plate, and so you haven't decreased the vapor up-flow through the plate
that much.
As you decrease interfacial condensation, what you see is
that more and more of the condensation is taking place below the plate,
and that has the effect of reducing the vapor flow that's making it to
the plate and therefore is giving you less limiting CCFL predictions.
So, if I take that same sub-cooled test I showed you before
and I go in and I apply multipliers -- in this case, we went through and
we looked at multipliers separately and in combination -- what we found
is a good way to bracket the data was with a representation where we had
a reduction in interfacial condensation and interfacial drag multiplied
by .2, and so, what you could see in this case is we've come out and
we've actually -- we're on or slightly beyond the data here.
So, what we've done is we've used this as means to shift
that conservative bias and go up and actually bracket the data.
So, at this point in time, what we've used is the GE CCFL
tests. We've looked at interfacial drag by itself, we've looked at
interfacial condensation by itself, and we've looked at several
combinations of the two, and we've come up with the ranging as I showed
here of going to multipliers of .2 on both the interfacial drag and the
interfacial condensation.
Next I'm going to show what happens if you apply that in the
upper core plate for the UPTF test facility.
UPTF is a full-scale facility. It's actually a four-loop
German PWR. So, it is larger, in fact, than the upper plenum injection
plants.
I'd like to make several points here.
This inner region here is showing the location of open holes
and guide tubes where the squares are basically open geometries at the
core plate and the circles are where the guide tubes are located. So,
this is looking down on the upper core plate.
In these particular tests, there were boundary conditions
applied right below the upper core plate, there were three phases of the
test run where they ranged the amount of injection flow into the upper
plenum, and they also ranged the radial distribution of the injection.
In phase A, there was a uniform distribution of steam, so
you had essentially the same vapor up-flow applied as a boundary
condition at all of these locations.
In the second and third tests here, the inner region was
given a slightly higher steam injection rate, about 13 percent higher
than the average, and the outer region was given a lower injection rate.
This was to approximate the radial power distribution in the
core and the effects of that on the steam flow into the upper plenum.
The other things that were varied in these tests -- the
various injection rates into the upper plenum were applied based on
different assumptions of the decay heat model.
Phase B was looking at if you used the old Appendix K kind
of decay heat model, you would have had more steam generation, whereas
phases A and C were looking at a best estimate decay heat rate, and the
decay heat rate that was chosen to come up with these steam flow rates
was one about 30 seconds after the break occurred.
So, it was right at the beginning -- appropriate for right
at the beginning of reflood.
In these tests, they blocked off the -- one of the loops
here. The cold leg was blocked, and also the hot leg. They actually
injected the hot leg through the hot leg nozzle.
In an actual UPI plant, the flow would come through the
barrel azimuthally around the upper plenum here.
So, that was one difference in this test. It was a
limitation of the test.
But you did have the jet coming in and impinging on these
structures. We were measuring the amount of entrainment and the amount
of vapor going out of the hot leg. So, this does allow us to get a feel
for entrainment and de-entrainment effects as well as drain into the
core.
The basic flow behavior in all of these tests is that there
was one region, slightly in-board from where the injection took
place, where most of the drain-flow was going. Where this measurement
took place was down below the upper -- the top nozzle, and this is an
indication of sub-cooling.
So, if you're measuring sub-cooling below the tie-plates,
you know you're getting drain there.
One main point to make from this is you can see that the
drain is restricted to one region of the core, and if this was falling
the whole way down to the bottom of the core, as it would, all of these
other regions would more or less experience a bottom-up re-flood.
I'm going to show the effects of ranging, looking at several
figures of merit for the UPTF tests.
The first one we're going to look at is the drain rate to
the core. This really allows us to tell for a large-scale plate how
much total drain flow are you going to get down into the core, and
that's important because the UPI delivery is what's contributing to the
We're going to look at the entrainment rate out the hot
legs, which tells -- gives us some feel on the contribution to steam
binding.
It also gives us sort of a global measure of, as the jet
impinges on these structures, breaks up, and has condensation going on,
what the net effect of all of that is on how much entrainment gets
carried out the hot legs.
And finally, we're going to look at the amount of
condensation that takes place in the upper plenum. That gives us
another global indicator of the steam-water interactions in the upper
plenum, and there are some implications.
The amount of sub-cooling of the drain flow into the core
and also the amount of condensation that takes place does affect the
amount of entrainment to the hot legs.
In these tests, as in the plant, we model the injection as a
liquid film that is introduced in the cell in our model where the
injection physically occurs.
So, in this case, this is looking at the upper plenum
nodalization, and we are introducing the -- that injection flow as a
liquid film here.
Again, I mention you have sheets, films, droplets. The code
is only capable of introducing one field. So, what we've done is our
nominal modeling is to use the film representation, but we've also
looked at droplet representation to see the effect in another ranging
study, and I'll show that in a minute here.
DR. KRESS: When you say you introduced it as a field, it's
a flow rate coming in. Do you talk about a change in film thickness
distributed over that area per time?
MR. NISSLEY: It's showing up as a mass source, and
depending on how that affects the void fraction of the cell, the code
subsequently partitions it -- it's being introduced as a film,
but if there's excess beyond what the critical film thickness is, it
will turn some of that into droplets.
DR. KRESS: I see. So, it could be a droplet source, also.
MR. NISSLEY: If the film is thick enough, the code will
turn it into drops.
I'm going to try and pick up a little bit here. I think I'm
running a little bit late.
Quick summary -- there's a lot of information here,
but what we looked at here was -- the first column shows, for each of
the tests, what the measured figure of merit was, the next column shows
the prediction with a model as is, with no ranging of parameters, and
the final column shows what the effect is if we range interfacial
condensation and drag as we defined with the GE sub-cooled CCFL tests.
As you can see, in terms of the amount of drain rate into
the core, we tend to under-predict the drain rate, which is a slightly
conservative bias. We're delivering less liquid to the core than the
When we reduce interfacial condensation and drag, this
really doesn't have a big effect, and one of the reasons for that, if
you'll recall, what we saw in the GE test was it was the condensation
that goes on below the top nozzle that reduces the vapor flow at the
nozzle, and that mechanism is not really present in this test.
In terms of entrainment and de-entrainment, we also have a
bias with the nominal code here where we tend to over-predict the amount
of entrainment out the hot leg.
What this would tend to indicate is that we have a bias in
terms of the amount of steam binding which could occur, and again, you
could view that as a conservative bias.
When we range condensation and drag, by reducing the drag
we're making it harder for the vapor to carry the liquid out the hot
legs, and you can see that, in fact, we've actually bracketed two out of
three of the data points and have come, you know, very close to being in
the money for the third data point.
So, by reducing the interfacial drag, that's the real actor
here. We are decreasing the amount of over-prediction of hot leg
entrainment.
DR. KRESS: That entrainment -- it's almost surely all
droplets.
MR. NISSLEY: Yes.
DR. KRESS: So, it's only the droplet drag that matters.
Your model has, what, the drag that's in Stokes' law?
MR. TAKEUCHI: For the drag correlation, we have the model.
DR. KRESS: That is for films.
MR. TAKEUCHI: No, for the droplets.
DR. KRESS: For the droplets?
MR. TAKEUCHI: Yes.
DR. KRESS: That is a mobile --
MR. TAKEUCHI: Interfacial friction between the film -- no,
sorry, steam and droplets.
DR. KRESS: That is a mobile interface drag model.
MR. TAKEUCHI: That's right.
DR. KRESS: Okay.
MR. NISSLEY: But by changing the interfacial drag between
the film and the liquid or the film and the vapor, that will affect how
much entrainment you are getting, so it will affect your droplet source
term.
DR. KRESS: Because you are changing the terminal velocity?
MR. NISSLEY: Yes.
DR. KRESS: Exceeding the terminal velocity carries --
MR. NISSLEY: Yes. In terms of condensation here, with the
nominal modeling we are underpredicting condensation and of course then
by decreasing condensation here further we are increasing the deviation
from the data in this case, so this is not acting in the direction you
would really want it to, but in terms of the effect on the PWR
calculation, again we come back to having some conservative biases.
Underpredicting core drain tends to be a conservative bias.
Overpredicting hot leg entrainment tends to be a conservative bias,
although when we range the models it does tend to then move in the
direction of bracketing the data.
DR. KRESS: Does condensed steam all go into droplets in
that model? Do you just add mass to the droplets?
MR. NISSLEY: I think it depends on -- it looks at the
interfacial heat transfer between both the film and the vapor and the
interfacial heat transfer between the droplets and the vapor --
DR. KRESS: Partition --
MR. NISSLEY: Yes.
DR. KRESS: When you change XC you change both those things?
MR. NISSLEY: Yes.
DR. KRESS: Okay.
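The ranging scheme described in this exchange -- a single multiplier XC scaling the interfacial heat transfer at both the film-vapor and droplet-vapor interfaces -- can be sketched roughly as follows. This is an illustrative stand-in, not the actual vendor code; all coefficients, areas, and property values are hypothetical.

```python
# Illustrative sketch (not the Westinghouse code itself): the ranging
# multiplier XC scales the interfacial condensation rate computed from a
# heat-transfer-coefficient model, and it is applied to BOTH the
# film-vapor and droplet-vapor interfaces, as described in the exchange
# above. All numerical values here are hypothetical.

def condensation_rate(xc, h_film, area_film, h_drop, area_drop, dT_sub, h_fg):
    """Total condensation mass rate (kg/s) with ranging multiplier xc.

    Each interface contributes xc * h * A * dT / h_fg, i.e. the multiplier
    scales a rate process, not the thermodynamic equilibrium limit.
    """
    q_film = xc * h_film * area_film * dT_sub   # W, film-vapor interface
    q_drop = xc * h_drop * area_drop * dT_sub   # W, droplet-vapor interface
    return (q_film + q_drop) / h_fg             # kg/s condensed

# Ranging from the nominal model (xc = 1.0) down to the combined value .2:
nominal = condensation_rate(1.0, 2000.0, 1.5, 500.0, 0.3, 30.0, 2.26e6)
ranged  = condensation_rate(0.2, 2000.0, 1.5, 500.0, 0.3, 30.0, 2.26e6)
```

Because XC multiplies both interfacial terms, ranging it changes the total condensation rate proportionally without repartitioning it between film and droplets.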
MR. NISSLEY: This is a different kind of ranging study. We
talked about that assumption of introducing the injection as films. We
looked at the case of introducing it as droplets and the first three bar
charts here are looking at essentially a mist droplet injection, one
one-thousandth of a foot to .01 feet, which is about a tenth of an inch,
up to about a half an inch droplet.
What you can see as the effect of that is as you go to
smaller and smaller droplets you really start to overpredict the amount
of entrainment and if you go to droplets less than about a tenth of an
inch or so, the amount of entrainment that is out the hot legs is just
orders of magnitude higher than what the data shows.
So what this study tends to do is confirm that modeling the liquid injection as a film is comparable to introducing it as drops of a reasonable, fairly large size, but if you go to a very small droplet injection, your agreement, particularly with the entrainment, is very poor, so we have used this as justification for continuing to add the injection as a liquid film.
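The droplet-size sensitivity described above can be illustrated with a simple drag balance: a drop is swept out the hot leg when the vapor velocity exceeds its terminal velocity, which falls with drop size. This is a hedged sketch under a constant-drag-coefficient assumption, not the code's entrainment model; the property values and the vapor velocity are hypothetical.

```python
import math

# Hedged illustration of why mist-sized injection over-predicts hot-leg
# entrainment: a droplet is carried out with the steam when the vapor
# velocity exceeds its terminal velocity. Terminal velocity falls with
# droplet size, so very small drops are swept out while large drops fall
# back and drain. Constant-Cd drag balance with rough, hypothetical
# atmospheric-pressure properties -- not code inputs.

RHO_L, RHO_G, G, CD = 960.0, 0.6, 9.81, 0.44  # kg/m^3, kg/m^3, m/s^2, -

def terminal_velocity(d):
    """Terminal fall velocity (m/s) of a drop of diameter d (m)."""
    return math.sqrt(4.0 * G * d * (RHO_L - RHO_G) / (3.0 * CD * RHO_G))

def entrained(d, vapor_velocity):
    """True if the upward vapor flow can carry the drop out the hot leg."""
    return vapor_velocity > terminal_velocity(d)

inch = 0.0254
v_vapor = 15.0  # m/s, hypothetical upper-plenum vapor velocity
small = entrained(0.1 * inch, v_vapor)  # tenth-inch drop: swept out
large = entrained(0.5 * inch, v_vapor)  # half-inch drop: falls back
```

Under these assumed values the tenth-inch drop is entrained and the half-inch drop drains, consistent with the trend described in the testimony.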
DR. WALLIS: The condensation match at the bottom there, I
think we went into this before. You changed the coefficient. You
brought the coefficient down, used XC equals .2, whereas if you wanted
to match the data you would have actually raised it.
MR. NISSLEY: Well, over here, if we were just going in
DR. WALLIS: It did raise it?
MR. NISSLEY: That's correct. We did not raise it in the
upper plenum, the point there being that there would always be the
alternative of what I might call regional ranging where you did one kind
of ranging in the vicinity of the tieplate and the bundle and you did
another kind of ranging in the upper plenum.
DR. KRESS: I am a little confused. If you increase the
condensation you are cutting down on the steam velocity and you are
increasing the mass of the droplets which increases their terminal
velocity, so if you increase the condensation rate doesn't that
automatically decrease the amount of entrainment you get?
MR. NISSLEY: Yes. If you increased -- Dr. Wallis, I think
you were originally looking at this particular aspect?
DR. WALLIS: Those two, right.
MR. NISSLEY: Okay. Let me do them one at a time.
Here what we found was decreasing the condensation as based
on what we saw on the GE CCFL tests -- it actually takes you further
away from the data when you apply that in the upper plenum.
If you were to instead increase condensation in the upper
plenum this would come in better agreement with the data.
DR. KRESS: Because it cuts down on the entrainment.
MR. NISSLEY: Well, it would also reduce the amount of vapor
flow rate, which would tend to take you in the right direction as well.
DR. WALLIS: I see. It would increase the vapor flow rate
from the core, wouldn't it? You would have more condensate?
MR. NISSLEY: That is why I got into the discussion of
DR. WALLIS: That would make it worse in terms of reflooding
MR. NISSLEY: Yes. You would have more saturated -- the
liquid would be closer to saturated or at saturated instead of subcooled
so you do have that feedback between what is going on in the upper --
DR. WALLIS: See, I think that if you have one dial which
you turn to fit GE data and it goes in the wrong direction for the
condensation rate in the upper plenum, it indicates that maybe you need
MR. NISSLEY: That is what I was alluding to by regional ranging.
DR. WALLIS: Or something else.
MR. NISSLEY: Yes. Let me move on to CCTF. CCTF is an
integral test and I am going to try to pick up here through the next
couple of slides.
Here we have the upper plenum injection occurring but we
also have a heated core so we will be able to see some of the feedback
effects between what is going on in the upper plenum and what is going
on in the core.
First, I would like to comment on some of the things the
data shows itself regardless of what the code is predicting.
This is from the compendium of ECCS research where a test
with upper plenum injection was compared with a reference test with cold
leg injection. In this case we had the cold leg injection, the high
power region cladding temperature transients in the top figure and the
upper plenum injection clad temperature transients in the bottom. An
interesting point here is that for the high power regions you can see
that the turn-around in quench is a steady progression from the bottom
to the top which is indicative of a bottom-up reflood, and you can also
see that the actual profiles themselves are very similar, indicating
that the water is contributing, the water injected in the upper plenum
is contributing comparably in terms of core reflood as if it were
injected into the cold leg.
In contrast --
CHAIRMAN POWERS: Look at that slide. I guess it appears to
me that there is some cooling that must be taking place at the
3.05-meter region early, and that is because of your injection taking
place up above.
MR. NISSLEY: Right. This is a one-eleventh scale facility
and you do have, if you have flow down into an assembly not too far away
and there is vapor generation going on, you are going to have some
enhanced cooling from that. If I look at one of those cooler assemblies
that has downflow here I can see a very dramatic difference where I have
got the normal bottom-up reflood for a cold leg injection test, but for
the upper plenum injection test this is one of the assemblies where you
would have had strong downflow.
You can see you very quickly turn around that top elevation
and you almost have a top-down quench versus a bottom-up quench.
Another interesting observation from the data is here we
have two different CCTF tests, one of which was done with what I will
call full UPI flow. In other words, there was no single failure assumption.
The second one had a single failure assumption so the real
difference between these two tests is this one only has half of the UPI
flow as this one does. If you look at these temperature versus time
traces, you can see that there is really only a modest effect of cutting
the UPI flow in half, and in terms of what is going on in the upper
plenum, there is a dramatic difference.
I am sorry this isn't in your package but this is the void
DR. WALLIS: Did you sort out the negative void fraction
MR. NISSLEY: No, I didn't. What I think it is is that the
DP measurement is below the upper core plate. It is from the top nozzle
up to the hot leg and they are accounting for that elevation difference,
but I couldn't confirm that. That is my suspicion.
What you can see in the upper plenum is that very rapidly,
in a period of about 100 seconds, you are filling the upper plenum from
the upper core plate the whole way up to the hot leg, and in fact, the
data did show some indication of subcooled liquid in this case, whereas
in the single failure case, the water in the upper core plate was at
saturation and the actual level in the upper plenum was only about a
quarter to a third of what it was in the full flow test.
However, the effect of that on the core was much less
dramatic. At first that might seem to be quite a bit of a surprise, but
as we indicated earlier the upper plenum injection -- or I'm sorry, the
various ECC systems that contribute here, this shows the flow rates for
the single failure test.
The accumulator flow rates that initially fill the lower
plenum and the downcomer are on the order of 8 to 10 times the flow rate
of the low pressure injection into the upper plenum, so you have got
the downcomer filled, the lower plenum filled and reflood kicked off
with the accumulator before the upper plenum injection really had much
to bring to bear on the transient.
Now I would like to move into some of the predictions.
Some of the instrumentation was different in CCTF than UPTF.
In addition, it is a transient experiment rather than a steady state
experiment. And one of the things we could look at here is the upper
plenum pool depth, which gives us a general indication of the net effect
of how much flow is draining into the core, and also how much is being
entrained and de-entrained.
There were experimental measurements of entrainment into the
hot leg, but there is so much noise in the data you really can't make
much sense of it. So these data were well behaved and much easier to interpret.
You can also look at the amount of subcooling in the pool,
which is an indication of condensation. Okay. Before, we looked at vapor
in-flow versus vapor out-flow to get condensation. If we look at the
amount of subcooling in the pool, we can get a similar measure of condensation.
An additional figure of merit that we have available here is
the cladding temperatures. This is really to give you a feel for how
much of that core drain is participating in the reflood.
One difference in CCTF from either UPTF or the PWR is the
test was configured so that the injection actually took place in the inner
region of the upper plenum. So, again, we modeled the flow in the
nominal case as if it were coming in as a liquid film in the central
part of the upper plenum. We are going to show ranging now where we
look at variations in the interfacial condensation and interfacial drag,
and also variations in that injection modeling assumption.
DR. WALLIS: Do you see less asymmetry than you do with side
injection? This one, you make a pool, so it doesn't matter where it
MR. NISSLEY: You see less asymmetry in terms of the
measurements of pool depth and subcooling, but it is a smaller facility.
And, in addition, you do see a preferential breakthrough region. If you
look at the thermocouple measurements at the top of the core, you tend
to see that -- clusters one-third to one-half, once it establishes
drain, it tends to stay there. So that is one asymmetry that is
observed, is the drain pattern.
The key on these figures may be a little difficult, so I
will try and clarify them here. What we are looking at here is the pool
depth in the upper plenum. The squares indicate the data. The solid
line reflects the nominal as-is modeling without ranging anything. The
dashed line which follows it quite closely is modeling the injection as
droplets of about a tenth of an inch, instead of as a continuous liquid
film. What you can see is that there is very little difference in the prediction.
We have an inner and outer region, the data pretty much
shows the same pool depth. The code tends to preferentially give
somewhat higher levels in the outside than it does in the inside, but,
on average, it is pretty close to following the data.
When we range interfacial condensation and drag, we tend to
build up the pool to a deeper level, about 50 percent higher than the
data, and that is simply attributed to the reduction in drag. We are
not entraining as much liquid out.
DR. WALLIS: This is another case where you have ranged it
in a way which drives the prediction away from the data. Whereas, if you
wanted to fit the data, you would be ranging it the other way. And I
just don't quite understand this magic .2, as opposed to 2, let's say.
MR. NISSLEY: Well, I think -- I guess that comes back in
terms of the application of separate effects versus integral effects
tests. If you look at the UPTF tests, which are separate effects tests,
the nominal models do over-predict the hot leg entrainment. Here you
have got various -- you know, there's more things going on in this case,
being an integral effects test.
If I look at the pool subcooling --
CHAIRMAN POWERS: Let me ask you a question. When I look at
that previous slide, you have some data points there and then you have
the model curves. If you had done this test six times, how much scatter
in that data would I have anticipated?
MR. NISSLEY: Are you saying if you repeated the physical
experiment six times?
CHAIRMAN POWERS: That's right. Yes.
MR. NISSLEY: That is a good question, which I don't think I
CHAIRMAN POWERS: You don't have some idea on the magnitude
of the experimental error here?
MR. NISSLEY: Not offhand, I do not. I would make -- the
data here is derived from that void fraction plot that I showed earlier.
DR. WALLIS: Well, one thing you could do, is how much is
the void fraction data bouncing around? You just show us some points
here, but, presumably, it is a continuous reading.
MR. NISSLEY: It was simply derived from these.
DR. WALLIS: Derived from those.
MR. NISSLEY: It translated the void fraction into a pool depth.
DR. WALLIS: So which one is it we are looking at, which
MR. NISSLEY: We are looking at this one.
DR. WALLIS: That one. So someone takes some points off
MR. NISSLEY: Right. We picked points off of here and
translated them into the squares here, so this is a fairly continuous reading.
DR. WALLIS: It just doesn't look right. I mean that has
got a monotonic trend and the squares have a bounce. I don't want to
get into that sort of detail. We are asking --
MR. NISSLEY: This bounce here, I can't --
DR. WALLIS: Yeah, why is there a bounce there and not in the curve?
MR. NISSLEY: I can't answer that offhand.
CHAIRMAN POWERS: I mean I am sitting here, I am looking at
your -- actually, at the .2 curve, that ranging that you spoke of at the
beginning of your presentation, and it is higher than the data, but I
don't know whether that is significant. If the experimental error was
of a similar magnitude, then it is a no, never-mind. If it -- if the
experimental data could be replicated within the size of the squares,
then it seems to me you have got some sort of a logical dilemma here on
what you do with XC and XD. You have got to match one set of data by
making them high and another set of data by making them low. That gives
you a problem, I would think.
MR. NISSLEY: Yes, and I guess I would tend to agree with
that, but to put it in perspective, what a lot of this information is
showing is that you can have very substantial differences in what is
going on in the upper plenum, but what is going on in the core is not
that significantly affected.
I think very telling was the comparison of Run 72 and Run
76, just the data. That pool height was up to 35 inches -- or about 35
inches within a hundred seconds, and the core response was not much different.
DR. WALLIS: So I guess that is your case, isn't it, that it
doesn't matter that there is a lot of disagreement in the details, but
peak clad temperature is insensitive to anything.
MR. NISSLEY: I think a very important conclusion that we
are offering is that you can have substantial differences in what is
going on in the upper plenum and it is at most a second order effect in
the core. And we showed quite -- we showed a number of PWR calculations
at the last subcommittee meeting which tended to confirm that.
This is the subcooling in the upper plenum. The data for
Test 72 is essentially saturated, so I don't have a real data figure
here. Again, the difference between drops and continuous liquid is
fairly small. By reducing the condensation, I am evaporating less --
DR. WALLIS: Again, you are going the wrong direction.
MR. NISSLEY: That's correct.
DR. WALLIS: It doesn't match the data.
MR. NISSLEY: That's correct.
DR. WALLIS: So I just don't know why you show something
like that. What is the point of showing a .2 curve, what is the
MR. NISSLEY: Thank you for setting up my next slide.
CHAIRMAN POWERS: I get the message here. It is not fair,
he has seen this before.
MR. NISSLEY: What we are looking at here, I said the final
figure of merit we wanted to look at for CCTF was we actually have some
cladding heat-up data here to look at. The top figure shows the data
spread for the high power region, which, in this case, was pretty tight
between the various rods in the high power region. Our prediction has a
tendency to slightly over-predict the cladding temperature and the time of quench.
If I go in and take -- I just showed figures of what was
going on in the upper plenum, and I say, well, how does that affect what
is going on in the core? If I -- what I have in the bottom figure here
is the predicted cladding temperature, again, repeating the nominal case
here, and then showing the case where I have got droplets, where there
was very little difference in the upper plenum, versus the case where I
have got ranged condensation and drag, where I had a fairly -- a more
noticeable difference in the upper plenum, but, yet, when I look in
terms of the core cooling, it has very little effect.
CHAIRMAN POWERS: When I look at your upper plot, you have a
curve labeled prediction and the others, I take it, are the experimental
MR. NISSLEY: These are all thermocouple data.
CHAIRMAN POWERS: And I see in the thermocouple data very
smooth curves, despite the fact that you have got coolant running down
and steam running up, and droplets and things like that. Does that just
mean the thermocouples are relatively large and they are not very
MR. NISSLEY: I think there could be a --
CHAIRMAN POWERS: Maybe an average.
MR. NISSLEY: -- temperature lag there. The other thing
being, in these regions, this is a bottom-up quench, don't forget. This
is not --
DR. KRESS: It is mostly just steam going up, carrying the
entrained droplets --
MR. NISSLEY: Yeah.
CHAIRMAN POWERS: Well, let me not be obscure. I am asking
a question with respect to nothing that you are talking about. Is that
when we get to these higher burnup fuels that we are talking about later
today, we are talking about clads that are extremely sensitive to
thermal stresses, and if, in these processes, we were to induce wild
swings in the temperatures over short periods of time, those thermal
stresses get excited. This would say don't worry about thermal
stresses, and I am wondering if that is really true.
DR. KRESS: Actually, I think Mario Fontana would tell you
that these particular thermocouples are pretty rapid response
thermocouples in these tests. They are not your big, clunky
thermocouples that lagged a lot. They are pretty responsive, and they
were right -- right at the clad surface.
CHAIRMAN POWERS: Okay. So you are saying the temperatures
are really followed very closely in this.
DR. KRESS: The temperatures are pretty well followed by the thermocouples.
DR. WALLIS: Can I ask you about the philosophy of ranging?
There seems to be very little guidance about how to range and what is
good enough ranging. You could have said we will plot on this curve XC
equal zero and XC equals infinity, and, by George, it makes no
difference. That would have been more convincing.
Why is ranging over so limited range, one direction, why
isn't -- why don't you have sort of a mandate to range until it makes a
difference or something, over all conceivable values? We know
condensation is a very whimsical thing, so one might explore infinity.
MR. NISSLEY: That's a good question.
DR. WALLIS: That would be very convincing. That would be
MR. NISSLEY: I guess we come back to -- and, again, I think
in regard to whether this is consistent with -- aspects of this, I
think, are certainly consistent with CSAU. The devil is in the
details in terms of how you implement it.
One of the principal philosophies of CSAU is using separate
effects tests to go in and determine how to range the parameters. And
one alternative we could have applied here was to say let's go in and
use the GE CCFL test to say what is going on at the top nozzle and in
the upper portions of the rod bundle, and then apply -- go and use the
UPTF data to say, okay, now, what is going on in the upper plenum, and
use that -- recognizing that they are somewhat different processes going
on in the two regions, and come up with what I have referred to
previously as a regional ranging approach. We did not do that, however.
I think the results that we are showing here indicate that
that additional level of detail is not necessary. And I think one of
the key reasons for that is the very significant role that the
accumulator plays in getting reflood up and running. I think that CCTF
test with two times the injection flow is very telling. You have a very
different behavior in the upper plenum, but it doesn't mean much to the
If you were looking for the extra level of technical detail,
I think perhaps the regional ranging might be more technically defensible.
DR. WALLIS: I'm sorry, but I notice our Vice Chairman isn't
here. The phrase he likes to use is -- What if we are wrong? And the
way to be sure, to just sort of bound the regions in which you might be
wrong, is to say, well, let's explore a bigger range and find out.
It's so obvious to say, well, let's explore this over a
broader range, as broad as we can or until it makes a difference or
MR. NISSLEY: Well, this comes back to the question of best
estimate versus conservative.
DR. WALLIS: Certainly is part of best estimate.
MR. NISSLEY: That's true, but we've accepted several
conservative biases in here already, and that's part of our rationale
for not going to that additional detail.
DR. WALLIS: Being conservative about condensation is a
MR. NISSLEY: Perhaps the best defense I can offer is all
the evidence suggests that you can have significant differences up in
the upper plenum and it does not significantly affect the core cooling,
which is the intent of the regulation.
CHAIRMAN POWERS: My understanding of the XC and XD terms
was that you had a model of condensation rate and you just introduced a
parameter in that?
MR. NISSLEY: We simply introduced multipliers on the
existing models, yes.
CHAIRMAN POWERS: And that model presumably -- what I'm
wondering is doesn't the model constitute infinity? I mean it has a
heat -- presumably has a heat balance and a mass transfer balance on it
or something like that.
MR. NISSLEY: There's limits to the amount of condensation.
CHAIRMAN POWERS: And so, doesn't that represent infinity?
If the model gets you the maximum possible rate of condensation given
the limits of heat transfer and mass transfer -- apparently that's not
DR. WALLIS: No, it doesn't. .2 is not a fudge factor on
equilibrium. It's a fudge factor on a rate process defined by some heat
transfer coefficient.
MR. NISSLEY: That's correct.
DR. KRESS: .2 times HA delta-T.
CHAIRMAN POWERS: So, you could -- infinity is not infinity
in this case. I mean there is a molecular limit.
DR. WALLIS: Infinity drives everything to equilibrium.
It's a heat transfer coefficient. So, having it infinite means you
condense up to thermodynamic equilibrium, which I think is what you
were driving at, and zero means that there's no condensation at all.
CHAIRMAN POWERS: Yes, I guess I understood the zero. It
was the infinity is not infinity here. There is clearly a molecular
limit. And I thought maybe that that was built into your model, but
apparently you can go --
DR. KRESS: It just gets to equilibrium a lot quicker.
CHAIRMAN POWERS: Yes.
MR. NISSLEY: Once you get to equilibrium, that's it.
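The point of this exchange -- that XC multiplies a rate process, so even a very large multiplier only reaches the thermodynamic-equilibrium limit faster rather than condensing unbounded steam -- can be sketched numerically. A minimal explicit-Euler integration under hypothetical pool properties (not code values):

```python
# Sketch of the point being made: XC multiplies a rate (h*A*dT), so
# driving it toward infinity does not condense unbounded steam --
# condensation stops once the liquid reaches saturation (dT = 0), the
# thermodynamic-equilibrium limit. Larger XC just gets there faster.
# All numbers are hypothetical.

def condensed_mass(xc, steps=10000, dt=0.001):
    """Integrate condensation onto a subcooled pool until equilibrium."""
    m_liq, dT = 100.0, 30.0           # kg of pool liquid, K of subcooling
    cp, h_fg, hA = 4200.0, 2.26e6, 5.0e4  # J/kg-K, J/kg, W/K (hypothetical)
    total = 0.0
    for _ in range(steps):
        q = xc * hA * dT                  # W; the ranged rate process
        dm = q * dt / h_fg                # kg condensed this step
        total += dm
        # The latent heat warms the pool, eroding the subcooling.
        dT = max(0.0, dT - q * dt / (m_liq * cp))
        m_liq += dm
    return total

low, high = condensed_mass(0.2), condensed_mass(50.0)
# 'high' approaches the equilibrium-limited mass; it does not grow
# without bound as xc increases further.
```

With a low multiplier the pool is still subcooled when the integration ends; with a high multiplier the pool saturates early, and raising XC further condenses essentially no additional steam.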
CHAIRMAN POWERS: To be fair, you didn't actually describe
the details of your model.
MR. NISSLEY: No, we certainly didn't get into the model at that level of detail.
A few quick comments on the PWR sensitivity studies. I
wanted to focus today on the data that we've looked at in our
We did show a number of PWR sensitivity calculations to the
subcommittee. The predicted PWR flow pattern -- again, the accumulator
is sufficient to refill the lower plenum and get reflood up and running.
What we see in our calculations is, for the in-board
assemblies -- in other words, those at average or high power -- that we
have no draining into those assemblies at all; we only get draining on
the low-power assemblies out in the periphery.
DR. WALLIS: Now, you haven't done any predictions and I
guess there are no tests for what would happen if you had a flux
MR. NISSLEY: The question I should have offered at the last
-- or the answer I should have offered at the last meeting is UPTF phase
A did have uniform injection radially.
We've done numerous sensitivity studies, including the ones
I have listed here, where we've ranged interfacial condensation and
drag. We've introduced the injection as droplet flow.
We've also spatially distributed the flow, some of it being
in outer regions and some of it in inner regions, and in all cases,
while we did see differences in the behavior in the upper plenum, the
actual effect on the core cooling was, at most, second order, which is
very consistent with that set of CCTF parametric studies that we showed,
I mean the data studies that we showed.
This is just a complete list of the uncertainty contributors
that we consider in the overall methodology.
The point I want to make -- this is in bold font, but it's
probably -- might be a little hard to pick that out.
The only real difference that we're introducing here is
that, in the three- and four-loop plants, we ranged interfacial
condensation in the downcomer and lower plenum regions,
and what we showed in the course of licensing this was that it does not
have much of an effect on a UPI plant, and we have replaced that with
ranging of condensation and interfacial drag modeling, as I described.
One other point worth bringing up is --
DR. WALLIS: This is part of the uncertainty, it seems to
me, in the CSAU procedure.
MR. NISSLEY: Correct.
DR. WALLIS: And you're assuming that all condensation
coefficients between .2 and 1 are equally likely or something in your uncertainty analysis?
MR. NISSLEY: That's correct. We're assuming a uniform distribution.
DR. WALLIS: Best estimate is halfway between?
MR. NISSLEY: In that case it would come out to be .6.
DR. WALLIS: So, you're saying .6 is your best estimate.
MR. NISSLEY: In effect, yes.
DR. WALLIS: What's the evidence that it's the best?
MR. NISSLEY: I do not have that here.
DR. WALLIS: It happens to be the middle of the range that
you chose to investigate.
MR. MARKLEY: Yes.
DR. WALLIS: "Best" implies something.
MR. NISSLEY: That's true. This is somewhat analogous to,
in the three- and four-loop modeling, the nominal model.
There were two estimates of the condensation efficiency in
the downcomer from the same set of data, based on how you interpreted
the data, and our model was right on the upper set of data, but the
other way of interpreting the data gave something about 50-percent less
So, given that -- this I view as an analogous situation in
that we were comparing the data, we matched the one set, the best
information we had was here is the other end of it, and it's uniformly
distributed in between.
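The uniform treatment just described can be sketched as it might appear in a CSAU-style Monte Carlo sampling step. The distribution itself follows the exchange (uniform between .2 and 1.0, mean .6); everything else, including the function names, is hypothetical:

```python
import random

# Hedged sketch of the uncertainty treatment described above: the
# condensation multiplier XC is ranged with a uniform distribution
# between .2 and 1.0, whose mean is .6 (the "best estimate" in the
# exchange). In a CSAU-style Monte Carlo, each trial would draw one
# multiplier and rerun the plant calculation; here we only demonstrate
# the sampling itself.

XC_LOW, XC_HIGH = 0.2, 1.0

def sample_xc(rng):
    """Draw one condensation multiplier from the assumed uniform range."""
    return rng.uniform(XC_LOW, XC_HIGH)

def mean_of_draws(trials=20000, seed=0):
    """Sample mean of the multiplier; converges to (.2 + 1.0) / 2 = .6."""
    rng = random.Random(seed)
    return sum(sample_xc(rng) for _ in range(trials)) / trials

mean_xc = mean_of_draws()  # approaches .6 as trials grow
```

The uniform assumption is the simplest choice consistent with knowing only the two ends of the range, which is the rationale offered in the testimony.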
DR. WALLIS: You could do something much more sophisticated.
You could take those dots we were talking about, the Chairman was asking
you about, and you could say here are some curves that go through and
here's the average, something which explains the scatter.
MR. NISSLEY: We don't have that right now.
CHAIRMAN POWERS: Let me just ask some philosophical
questions here. I mean it seems to me that he's coming back with a
conclusion -- whether he takes .6 or .2, presumably, doesn't change the conclusion.
It really didn't affect any of the results that I saw. I
mean I saw no big dramatic changes. Is that roughly correct?
MR. NISSLEY: Right. When you look at the PWR calculations,
you see the same thing.
CHAIRMAN POWERS: Okay. So, why would one want to invest --
I know we're using the word "best," but "best" is in quotes here, and
why would you want to invest a large amount of effort in determining
what particular value you would use here?
MR. NISSLEY: That's an excellent point.
DR. WALLIS: I think you would, because someday someone may
say maybe we could get 20-percent more power out of this thing. We're
going to use our best-estimate code, as approved, to prove that it's
CHAIRMAN POWERS: But they would probably make him dance
through these exact same hoops.
DR. WALLIS: No, they wouldn't. They'd say it's already
been approved, they've danced once, there's no need to dance again.
CHAIRMAN POWERS: For the higher power, I think they would,
wouldn't they? I think they would. I mean your power is dictated by
the decay curve here, right?
DR. KRESS: They wouldn't make them re-validate the code;
they'd make them redo the calculation with the code.
CHAIRMAN POWERS: I see.
MR. NISSLEY: That's true.
CHAIRMAN POWERS: Okay. So, you've got to go look for your
.6 now. Okay. Fine.
Let me ask you one question about your list of uncertainties
there. I do notice that you have clad reaction rate. Did you look at break-away?
MR. NISSLEY: Break-away oxidation meaning?
CHAIRMAN POWERS: Something different than parabolic
MR. NISSLEY: No.
CHAIRMAN POWERS: That's just multipliers, then, in
Baker-Just, is all you're doing?
MR. NISSLEY: No, this is Cathcart-Powell per the reg guide.
CHAIRMAN POWERS: You dialed it around a little bit?
MR. NISSLEY: Yes. They're in the report. There's
estimates of uncertainty in an appendix.
CHAIRMAN POWERS: Break-away, of course, is a much more
DR. KRESS: Dr. Pawel would like for me to tell you that
that's not Powell, it's Pawel.
MR. NISSLEY: Give him my apologies, please. With all due
respect, it's Nissley, not Nestle.
MR. NISSLEY: In conclusion --
DR. SHACK: -- you will make the very best?
MR. NISSLEY: I'm accused of being from the family
What we focused on here today was to review the PIRT and
review the application of the code to relevant data that looked at those
The assessment results did indicate with the nominal models
there are some biases; it's not a dead-on best estimate code.
The core drain tends to be under-predicted. This is seen in
both the GE tests and the UPTF tests.
The hot leg entrainment tends to be over-predicted, as seen
in the UPTF tests, and the condensation in the upper plenum tends to be
under-predicted, and as pointed out, further decreasing the condensation
moves you further away from that.
In terms of ranging of parameters, we've looked at separate
effects tests and have ranged individually interfacial drag and
interfacial condensation and have selected a combination of the two
where we range them from the nominal value to a combined value of .2.
We've also looked at how we model the injection in terms of
whether it's a film or a droplet or whether it's spatially distributed,
and from those studies, we've concluded that modeling the injection as a
liquid film is an appropriate way of modeling the injection.
DR. WALLIS: We had talked about XC, which for some reason
seems to be always ranged with XD. Could we talk about XD a little bit?
If XD were infinity, then the steam would allow no liquid to
get in. Is that true?
MR. NISSLEY: Yes, I would expect so.
DR. WALLIS: So, XD cannot be infinity. That would be bad
news. Some value between 1 and infinity where it gets bad.
MR. NISSLEY: Right.
DR. WALLIS: And you don't know how much that is. Is it 1.1
or 2, 5?
MR. NISSLEY: Based on that parameter, no, I can't answer that.
DR. WALLIS: I think it would be good to investigate that.
MR. NISSLEY: I think, from the UPTF tests, where we had
clear, steady-state solutions, where you could see the net effect of
several of these processes, including the assumption of how you're
modeling the injection and the net result of how much of it goes out the
hot legs, what we were showing was that we were definitely
over-predicting the hot leg entrainment.
DR. WALLIS: Now, I would ask another question, because it's
fair that you don't show us any XD bigger than 1. Is that because you
didn't calculate it or you didn't want to show it?
MR. NISSLEY: We didn't even consider it because of the UPTF
DR. WALLIS: You never considered it? Just taboo?
It gets back to my philosophy of ranging.
MR. NISSLEY: I don't have -- can't readily find the table,
but I had a summary table that indicated for all the three phases of
UPTF we overpredicted hot-leg entrainment by I believe 20 to 60 percent.
DR. WALLIS: Nothing in CSAU which says thou shalt range out
to the point where you begin to see something you don't like, and then
you should ask seriously is that likely -- what's the probability of
MR. NISSLEY: I don't think there's a step in there that
DR. WALLIS: I don't think it is in there, but maybe --
CHAIRMAN POWERS: I agree with you that we often see these
studies, explorations of particularly complex phenomena, that people
never go look and find out what happens when you fall off the cliff. I
mean, how far do you have to go?
DR. WALLIS: Where is the cliff? How far is the cliff?
CHAIRMAN POWERS: We're content to say we're not near it,
and they back off that.
MR. NISSLEY: We were content to say that in the UPTF tests
we overpredicted entrainment in every phase, and we were beyond what the
DR. WALLIS: But there were not many tests, and you
certainly underpredicted condensation. So, you know, it wasn't as if
you were doing a great job of being right on the data. So --
CHAIRMAN POWERS: I guess really the question -- I think you
may be a little hard on yourself, because you don't understand the
experimental error, the replicate error, on those experiments. You
might have been just fine on the experimental error. You were close in
all of the calculations. What I don't understand is how close is close
enough, because of the question you raised.
DR. WALLIS: You also asked -- your question about these
experiments' repeatability cannot be answered by appealing to
experimental error. Repeatability is another question. It could well
be that there's something whimsical, a butterfly flaps in Mexico and
that changes where the water comes down in the PWR.
CHAIRMAN POWERS: I agree with you, and that's why I'm
not -- I think he may be a little hard on himself. What I'm still
puzzled about is how good does he have to be for you guys to say okay,
now you have a validated code, you can use it for a power uprate.
DR. WALLIS: That's the key question.
CHAIRMAN POWERS: And you're the one that has to answer it.
DR. FONTANA: I think that none of the large-scale tests
were run to the point where the flow did not break through to the lower
plenum; is that correct?
MR. NISSLEY: That's correct.
DR. FONTANA: Okay.
MR. NISSLEY: I would like to make a point -- unfortunately
our utility representative or utility sponsor is stuck in Milwaukee,
couldn't get here.
I'd like to come back to that operating comment, and early
on this was a question of why utilities are doing this. Wisconsin
Electric is the utility that we're performing this analysis for, and
they have two major goals. One of them is they want to increase the
diameter of the fuel rod in the core, for two reasons. One, to get more
uranium in there, and decrease the number of fuel assemblies, and in
their mind perhaps more importantly is they have spent-fuel issues.
And the second is to implement a power uprate. So all of
the analysis work that we've presented here already does incorporate the
DR. WALLIS: That's a bit problematic, because, I mean,
you're comparing with very limited tests, very limited steam flow,
presumably power uprate gives you higher steam flows than you have now.
So you may get into some region where you begin to get closer to this
cliff. We don't have much evidence about where it is. So you're going
to use something which seemed to work nicely for existing systems and
you're going to apply it to something which we don't know what it's
going to be, but it's going to be something that challenges a bit more,
perhaps, the safety features of the plant. You don't quite know where
the cliff --
MR. NISSLEY: I can't give an airtight response to that. I
would comment that the UPTF test phase B had a steam-injection rate
which was approximately 20 percent higher than the others, and the power
rating it was scaled from is comparable to the one this plant's
DR. WALLIS: Then of course the question about whether
injecting steam across the base of some assembly is quite the same as
generating the steam in the assembly. It's a different boundary
condition for the flooding, and it may be that the asymmetry of it all
comes on one side, is promoted more by one or other of those boundary
conditions. So again really a nasty lawyer could say that there's
something really different about UPTF and the reactor.
DR. KRESS: We're running a little bit behind, and I think
we're scheduled to hear from the staff, so I think we probably ought to
DR. FONTANA: Let me just ask one more quick question, just
Did I hear you properly saying that the effect of a
20-percent power upgrade is already within the scope of your analyses
and not an extension of the analyses? Is that correct?
MR. NISSLEY: Let me clarify what I was trying to say. The
three UPTF phases, one of them, the steam flow rates were based on an
assumed decay heat so long after the break occurred. And Phase D had
like an ANS 71 plus 20 percent, whereas the others were based on an ANS
79. So there was -- in that one phase of the three UPTF tests there was
the equivalent of a 20-percent uprate.
DR. FONTANA: Okay. Yes.
DR. WALLIS: Do you have a chair, Tom?
DR. KRESS: I have a chair. I'll be willing to turn it back
DR. WALLIS: I feel it's important for the full Committee to
hear the staff's point of view.
DR. KRESS: That's what I thought.
DR. WALLIS: Because they have to make some --
DR. SEALE: I would agree.
DR. KRESS: And I think we need to let them take the time
that they need to do that.
DR. WALLIS: In order for the staff to present.
DR. KRESS: When Dana says this ran too long, it'll be your
fault instead of mine.
CHAIRMAN POWERS: I will remove that concern from your mind.
I found the presentation quite interesting and very well done.
MR. ORR: Okay. I'm Frank Orr from NRR.
I'll try to make it fast anyway. I don't think there's
anything too complex in what I'm doing here.
Okay. Here's a slide where I present -- this is what was
the general description of the review, what we covered in our review,
minimally. We covered more than that, but we limited it to the
differences, but it turns out we went beyond the differences. Processes
used to develop this methodology.
We reviewed the development and assessment by Westinghouse
to assure that they did it by CSAU. In the three and
four-loop methodology we had committed to doing that, and we did. We
used CSAU as our guidance in performing the review, so it was not only
developed and assessed, but performed in accordance with CSAU. The
computer code that was used was the same code that was used for the
three and four-loop methodology. Again, that code was approved for
that. This one doesn't change that. They did not really change the
code, they added a couple of multipliers to be ranged.
CHAIRMAN POWERS: Did you go through, when you did three and
four-loop analysis, go through a verification of the computer code?
MR. ORR: Yes. Yes. That was all part of the process, the
CSAU process, that we ended up implementing in that methodology.
But also this code has a bigger history. They use the same
code a different, what would you call it, whatever it is, modification
or whatever it is, series, but it's the same code, and everything
they've been using for ten years is part of the SECY-83-472 methodology,
which is a quasi-best-estimate methodology itself. And it represents
four of the five reviews we've done of methodologies for this plant --
for this design. So we've seen a lot of this, we're familiar with the
code. It's pretty well established. And they're using the same one
Many of the issues that we've discussed here have existed in
the other implementations and also in the three and four-loop
methodology. The three and four-loop methodology is not as sensitive to
some of these as this is.
We reviewed the overall calculational approach. Now this is
where the difference between previous UPI methodologies and the present
one exist, in that we've introduced the statistical overall package that
we did not have in the SECY methodology. That is the same package as
was used in the three and four-loop methodology -- let's see if I can
find the slide here -- I always like to put this one up -- no one likes
it except me, but I like it, because it's simple.
This is a simple stick figure. It doesn't include the
statistical correction factor that we've got, but it does show one
thing, and that is right here, that is the global response surface or
whatever it is is the area that -- the sole change in the methodology
for this between the three and four-loop methodology as far as the
quantification of the peak clad temperature is concerned.
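The role of the global response surface in that stick figure can be illustrated with a toy sketch. Everything numerical here is invented: a quadratic surface is fit exactly through three hypothetical code runs for a single ranged parameter, and the cheap surface, rather than the code itself, is then sampled many times to build a PCT distribution.

```python
import random

# Hypothetical code runs: (parameter value, calculated PCT in degF).
runs = [(0.2, 1900.0), (0.6, 1750.0), (1.0, 1700.0)]

def fit_quadratic(pts):
    """Fit an exact quadratic through three points
    (Lagrange interpolation written out)."""
    (x0, y0), (x1, y1), (x2, y2) = pts
    def surface(x):
        l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
        l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
        l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
        return y0 * l0 + y1 * l1 + y2 * l2
    return surface

surface = fit_quadratic(runs)

# Sample the cheap surface instead of the expensive code.
rng = random.Random(1)
pcts = sorted(surface(rng.uniform(0.2, 1.0)) for _ in range(1000))
pct95 = pcts[int(0.95 * len(pcts))]
print(f"95th-percentile PCT from the surface: {pct95:.0f} F")
```

The design choice the surface buys is exactly the one discussed here: thousands of statistical samples become affordable because each one is a polynomial evaluation, not a full code run.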
DR. WALLIS: Could you tell us where a coefficient of .2 in
condensation appears in this methodology?
MR. ORR: It doesn't appear in the overall process. I'm
talking about processes at this stage.
DR. WALLIS: Well, where would it appear in this document?
Which one of these branches would one find variations in the coefficient
MR. ORR: Right in there someplace.
It would be in the ranging.
DR. WALLIS: It's in the ranging. The ranging is in there
MR. ORR: Yes. After the -- right at about where the dotted
line is, that's about where you're introducing the ranging, in those
DR. WALLIS: How do you decide what range -- ranging should
MR. ORR: Okay. My understanding of what they did and what
we reviewed too is that they ranged over the ranges that they had
covered by the -- sufficient to bracket the data they had given us.
DR. WALLIS: How about ranging over what is reasonably
expected to occur in a PWR?
MR. ORR: We felt that was reasonable when we reviewed it.
DR. WALLIS: So you think they made the bridge nicely
between .2 --
MR. ORR: I think as reasonably achievable. We exercised
our engineering judgment on that, and maybe we were wrong, but as far as
I know, I think we're right. It's the same thing with all the other
requirements. I've also had to look --
DR. WALLIS: You may be wrong with the coefficient of .2, or
MR. ORR: I mean, I don't -- at the subcommittee meeting
other data was mentioned. I had not -- was not aware of that data. I
was not -- I'm not so certain that -- that data is obscure to me, so I
do not know whether that would be relevant data or not relevant data or
anything like that, but --
DR. WALLIS: Well, suppose that Novak Zuber takes his data
that he knows about and uses it and finds out that compared with the
code we now know that the coefficient should be 2 to fit Novak's data.
MR. ORR: Well --
DR. WALLIS: What does that tell you?
MR. ORR: There are provisions within regulation for
bringing that up and making appropriate modification, but right now I
have no better information than was presented to me during the review.
And I think it's an appropriate implementation of that data.
I can't act on anything better than what is presented to me,
and as a regulator I have followed what is written in this book and
in -- is it NUREG/CR-5249 -- whatever the one that defines or pretty much
gives a description of the CSAU process.
DR. WALLIS: I guess you never asked the question, you never
sort of said well, it might be bigger than one?
MR. ORR: We considered that.
DR. WALLIS: How am I going to put to rest the idea that it
could be viewed as --
MR. ORR: We considered it, but again we -- our review was
limited to what was presented to us and what the differences were.
DR. WALLIS: Isn't that part of the problem, that you only
review what is presented to you, you don't step out of the box and say I
think it might be bigger than one?
MR. ORR: If I did that, I would be -- you know, I think it
might be -- it's a whole big thing. I don't know that it would be
bigger than one, you know? There are a lot of things that are
imaginable. You saw some droplet sizes when the droplet size
distributions were shown. It is imaginable that very, very small
droplets could exist but they were ruled out and with some reason, so
anyways I'll skip beyond all that.
I had the other one. Ultimately I think our review boiled
down to the types of technical issues that are in front of us. We did
look at things like the PIRT and all the rest of the stuff in CSAU.
We noticed that the -- let's put this up here. The main
thing in here that I am saying is that the biggest -- what they have
done to this model, what differentiates it from the others, is they
removed a couple of items for good reasons from the response or the global
response surface, the one that I put up there. They replaced them with
the interfacial drag and the interfacial condensation, ranging of those.
It was an appropriate thing to do but it did not change any of my pet
stick figure at all.
As far as the process is concerned there, that hasn't
changed, so the whole review comes down to the technical issues, and the
principal technical issues are the ones that have to do with behavior
above the core, because that is where your injection water is coming from.
Things like the downcomer condensation and the C sub D are
more important because the injection water was coming from there.
It turns out that we have seen all this other discussion
about the technical things that we have covered. The points have been
brought up. We considered those. We still came to the conclusion that
what they have done is adequate to meet the various requirements of the
regulations for a realistic model -- notice I haven't said best
estimate. I avoided that again, and all it says is to come up with a
determination, an ability to determine the 95 percent of the -- well, it
doesn't even say 95 percent, it says high probability -- but 95 percent
of the time you are not going to see the criteria in 50.46 be to -- or
We feel that that is true. In our judgment it is true. That
is why we feel that the methodology is acceptable.
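One common way of giving "high probability" a concrete footing (not necessarily the one used in this methodology) is Wilks' nonparametric formula: with n independent code runs, the largest result bounds the 95th-percentile outcome with 95 percent confidence once 0.95**n falls below 0.05, which first happens at n = 59.

```python
def wilks_sample_size(probability=0.95, confidence=0.95):
    """Smallest n such that the maximum of n independent runs bounds
    the `probability` quantile with `confidence` (first-order Wilks):
    find the least n with 1 - probability**n >= confidence."""
    n = 1
    while 1.0 - probability ** n < confidence:
        n += 1
    return n

print(wilks_sample_size())           # the classic 95/95 answer, 59 runs
print(wilks_sample_size(0.95, 0.99)) # tighter confidence costs more runs
```

Because the bound is distribution-free, it needs no assumption about the shape of the PCT population, only that the runs are independent samples from it.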
DR. WALLIS: Frank, have you seen the evaluations from our consultants?
MR. ORR: From which consultants?
DR. WALLIS: Dr. Zuber and Dr. Schrock.
MR. ORR: I have seen what they said and I am not saying I
disagree with them. As a technical person, as a purist, but as a
regulator and what it takes to satisfy the regulations, and to assure
the health and safety of the public, I think that our conclusions are
the appropriate conclusions. I haven't seen anything that would say
that our conclusions are inappropriate, especially given the precedent
set in previous reviews, and so I guess that's the bottom line.
DR. WALLIS: So I guess the reason you differ from them is
you are using different criteria. You are pointing at peak clad
temperature as being the criterion --
MR. ORR: I am pointing at --
DR. WALLIS: -- so any of this, I don't need to worry about
MR. ORR: I am pointing at satisfying that regulation.
DR. WALLIS: I am saying we look at all this stuff and, you
know, they have other criteria in the back of their minds on what is
best and what is a good estimate and what is realistic and so on --
MR. ORR: They are saying something should be very, very
realistic. I realize that in that model there's already things like
TMIN that nobody knows anything about where we have just adjusted down
as far as we could adjust just to get it conservative to make sure we
have at least 95 percent of the time we will be under a true --
DR. WALLIS: So I think what you are telling me is that some
manufacturer or utility could come here with a model which is utterly
unrealistic but peak clad temperature is insensitive to all these things
that varied from zero to infinity, therefore it doesn't matter?
MR. ORR: I would not say that because we have had such
things happen and I have raised strenuous objections to it. This is not
utterly unrealistic. In fact, I would argue that it is the most
realistic model, in a whole different ballpark than any of the other
models out there. In fact, we have got some coming in that we are going
to look at.
I am already biased, and I will admit I am biased, having
gone through all these exercises with this model, to thinking this is
probably better than the ones that are coming in.
DR. WALLIS: Did you look at the SER from November 1977?
MR. ORR: What's that? 1977?
DR. WALLIS: Well, I looked back. The words are so deja vu.
I just wondered if you had looked back at the history of this which goes
back to 1977.
MR. ORR: I don't know what specific words you are saying.
DR. WALLIS: In those days Westinghouse came up with a
scenario and the Staff made estimates which made them concerned so they
turned it down. I don't know if you looked back at that history at all.
MR. ORR: I was around then. I was involved in some of the
reviews then. I don't know what you are talking about though. It's out
of context. I mean --
DR. WALLIS: Okay, fine. Doesn't matter.
CHAIRMAN POWERS: Let me ask you, you mentioned and you put
a slide on peak clad temperature. Do you also look at the maximum
extent of clad oxidation?
MR. ORR: That -- it spills out of the way they handle it.
First of all, they take the event. They back out the event that gives
limiting peak clad temperature. They trace back that event -- I suppose
the power history and everything involved in it -- and they back out
what the oxidation would have been calculated to be as the core responds
to that. Check with Mitch, but I think that is done more as just a
follow-through calculation rather than a best estimate and variable.
Mitch, is that true, or --
MR. NISSLEY: Yes. Let me clarify that. To do the
oxidation calculation, we select the transient that exceeds the 95th
percentile probability, PCT, and an important element of oxidation is
the amount of time you are at the elevated temperature and we look at
the data here and another figure of merit would be the time at
temperature, which can be represented by the time to quench and what we
came up with was a -- from the review of the comparisons with data a
time scale stretching, if you will, that exceeds two sigma of what our
comparisons with data show.
So we have a transient that is greater than 95th percentile,
PCT, and we have a time scale stretching that is beyond two sigma.
CHAIRMAN POWERS: And then you simply apply parabolic
kinetics to that?
MR. NISSLEY: That's correct.
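Parabolic kinetics means the oxidation weight gain w obeys w**2 = K(T)*t, with K an Arrhenius function of clad temperature. The sketch below integrates that law incrementally over a temperature history; the A and Q values are the classic Baker-Just constants quoted from memory, so treat them as illustrative rather than authoritative.

```python
import math

R = 1.987    # cal/(mol K)
A = 33.3e6   # (mg/cm^2)^2 per s -- Baker-Just value, quoted from memory
Q = 45500.0  # cal/mol           -- Baker-Just value, quoted from memory

def rate_constant(temp_k):
    """Arrhenius rate constant K(T) for the parabolic law w^2 = K t."""
    return A * math.exp(-Q / (R * temp_k))

def integrate_oxidation(history, dt=1.0, w0=1e-6):
    """Integrate parabolic kinetics over a time-varying clad temperature
    history (a list of temperatures in K, one per dt seconds).
    Returns the final weight gain in mg Zr reacted per cm^2."""
    w = w0
    for temp_k in history:
        # parabolic growth step: w_new^2 = w_old^2 + K(T) * dt
        w = math.sqrt(w * w + rate_constant(temp_k) * dt)
    return w

# Hypothetical transient: 100 s near peak clad temperature, then quench.
history = [1477.0] * 100 + [600.0] * 100   # kelvin (~1204 C peak)
print(f"weight gain: {integrate_oxidation(history):.1f} mg/cm^2")
```

The "time-scale stretching" mentioned earlier would simply mean holding the history at elevated temperature longer before the quench step, which the exponential temperature dependence makes the dominant effect.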
CHAIRMAN POWERS: And do we have any reason to think that
there would be transients where break-away could occur?
MR. NISSLEY: I am not familiar with break-away oxidation.
If you could -- is this a burn-up related issue or --
CHAIRMAN POWERS: Well, yes, it certainly -- certainly my
instincts on break-away get excited with burnup. It's simply spallation
of the oxide so that you instead of getting parabolics you get kind of a
step-wise oxidation kinetics, and the thicker your oxide the more likely
that is to happen and if it gets embrittled or you get hydride
precipitation, then break-away becomes more and more possible.
MR. NISSLEY: I guess in response to that the most up-to-date
information we would have would be what was presented at the reactor
fuel subcommittee meeting last April where we took one of these
transients that was beyond the 95th percentile and applied the kind of
burn-down credits that you get for high burn-up fuel and demonstrated
that the temperatures are quite low.
CHAIRMAN POWERS: Okay.
MR. ORR: I think you are going to have to look in your
thing for the other slide. In this area we are talking about a lot of
technical things but when I am doing a review also I am under other
pressures and other mandates and you might want to just look at the
schedule we are under. It's the last slide in that package.
We are already sliding quite a bit. Again, concerns about
health and safety override everything else, but on the other hand -- or
complete flaws in the model that would invalidate the model. I don't
see any. But otherwise the schedule that we've got here is the type of
schedule we continue to be working on and we see a lot of pressure from
the other side to meet our schedules.
DR. WALLIS: The SER, it says early March '99. Is that
MR. ORR: It seems like that is already shot.
Now we are hoping for the end of March. That may be a
problem too. In this schedule you have got to be able to accommodate
Westinghouse's ability to give us a final document. On the
three and four-loop methodology, it took a year and a half to produce
that final document, one that meets our ultimate documentation needs.
This they said that they can probably do in about two
months, being a smaller document and only being an addendum.
We need time for our project managers to produce our safety
evaluation. We also need our project managers to have time to process
the technical specifications, et cetera, needed by the utility's license
in order to reference this model, incorporate it into the licensing
basis. Within all that time I am thinking that it is getting optimistic
even now that we can meet a May timeframe, so I am guessing now that it
will probably be, even if everything goes optimally, late June.
Still, I think that meets the users' needs. Beyond that,
the same people that have confidence that we have done a good review
will also come down and say, well if you have done a good review, why
haven't you gotten this out of here? But that I don't think is the
concern of this committee. I think the concern of this committee is the
adequacy of the model and I think that 10 CFR 50.46 is a safety
regulation. Compliance with it I see as definitive of what safety
represents within licensing space.
To the best that I can see, 10 CFR 50.46 has been met and
the CSAU methodology has been met. Therefore, Reg Guide 1.157 has been
met and by satisfying all these you are satisfying GDC-35, which spawns
all of those regulations.
DR. WALLIS: The ACRS role is not in your schedule here, and
I've understood all along that we were sort of observing and commenting.
We're not signing off on anything.
MR. ORR: I guess that -- what ACRS does with it I think is
fully ACRS's prerogatives, and regardless of what is done, be assured
that I would not turn a deaf ear or blind eye to whatever you said one
way or the other. It would certainly be assessed and considered. Right
now, I do not know of anything that would compromise our approval of
MR. WERMIEL: This is Jerry Wermiel. I'm the new chief of the
Reactor Systems Branch.
Dr. Wallis, there shouldn't be any question. We are not
going to proceed with an acceptance of the topical report in a licensing
action until we have a letter from the ACRS that says it's okay to
proceed or, if the ACRS letter has conditions in it, that we can address
those conditions and properly resolve them.
DR. WALLIS: This is a new one to me, because you know, we
asked these questions before, and it wasn't clear to me ever that we
were required to put a stamp on anything.
MR. WERMIEL: You're not required.
DR. WALLIS: If we say you guys did this, go ahead and do
it, we'll advise the Commission as we see fit.
MR. WERMIEL: That's correct.
DR. WALLIS: That's what I think we will do.
MR. WERMIEL: Exactly.
Now, what I'm saying is -- and I didn't mean to say that it
was a requirement at all, but if there are conditions that we believe
are important to our acceptance, that the ACRS raises that we believe
affect our acceptance, we certainly will consider them.
That's why we're not going to proceed until we're clear that
any concerns the ACRS have that may impact our acceptance are properly
DR. WALLIS: Thank you.
Is there anything else?
CHAIRMAN POWERS: Well, I guess I'd like to hear the answer
to the dilemma you posed, which is he's got a -- we saw a variety of
curves, of calculations compared to some data -- and don't go away,
because I think you have to answer the question.
MR. ORR: Okay.
CHAIRMAN POWERS: I know Graham's not going to answer the
MR. ORR: Okay.
CHAIRMAN POWERS: He's good at asking questions.
MR. ORR: That's good. That's his job.
CHAIRMAN POWERS: We've got a variety of curves that come
close to the data. He's got an analysis that says, even if I don't come
close to the data for this, in the overall analysis, I still keep my
peak clad temperature low, and by implication, I keep my total clad
oxidation within the bounds prescribed in the regulations, and so, I'm
sitting here very happy and say, boy, this looks good to me, it's an
unimportant phenomenon, the complexities don't matter in this thing, and
then Dr. Wallis says yes, but once the code gets approved, then it gets
used willy-nilly and we don't go back and re-look and see if these
complexities have an impact on these new applications of things, and so,
the question comes up, how good does the code have to be in order to be
MR. ORR: That comes into the licensing process. When
someone tries to do a power up-rate or something, we have to reassess
whether the methods they've used to -- the analytical methods they've
used to justify that up-rate still apply.
Now, we do that in consideration of this. I would say that
I can't guarantee I won't be -- I will be doing the review of anything,
but it would be very reasonable for the reviewer of that up-rate that
looks at the applicability of this methodology to that up-rated
condition to assure that all of the bounding parameters for the up-rate
still will fall within the range that is covered by what we've reviewed.
I know, in a number of times, people have come to me and
asked me about such things, and I have said but that doesn't fall within
Power up-rates usually -- you run into analytical problems
more often with over-pressure protection than you do with LOCA analyses,
CHAIRMAN POWERS: Let's take one that's a little more willy
than nilly. Let's say a guy comes in and he says I want to take my fuel
up to 75 gigawatt-days per ton.
MR. ORR: Okay.
CHAIRMAN POWERS: Okay. He's got a lot of other things to
do, but among those things he has to do is he has to look at the upper
plenum injection. Okay.
Now, what do you do?
MR. ORR: We also look at what they've got. If you're going
to go to a higher burn-up, then you're starting to worry about oxidation
CHAIRMAN POWERS: And he said I ran my computer code and
it's just fine.
MR. ORR: Yes. We still look at what models were used to
justify the existing burn-up, for instance, and to assess whether those
models are applicable for that higher burn-up.
DR. WALLIS: If there's more steam produced, then the cliff
that we talked about at some value of condensation coefficient or drag
or something gets closer, and you don't know where it is, you don't know
how close it's coming.
There's some burn-up where a condensation coefficient of 1.1
becomes critical. There's some burn-up where 1.2 or 2 or -- there's got
to be some steam generation where it matters.
MR. ORR: Okay. But is it higher than what it's been ranged
at right now? If it is, that's a good question for a reviewer to ask.
But right now, as far as we know, the data that they've provided is
reasonably prototypic, and they've bracketed it.
Why would I say that they can't use this for existing
plants? I don't have any regulatory basis for saying no on that.
I grant you that, theoretically, it would be nice to have
the end-points -- well, we do have end-points right now of what has been
demonstrated. It's reasonable to ask if someone came in with something
that's beyond that range.
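The applicability check Mr. Orr describes amounts to bookkeeping: compare the proposed condition against the ranges actually reviewed. The parameter names and range values below are invented for illustration.

```python
# Hypothetical reviewed ranges (invented values for illustration).
REVIEWED_RANGES = {
    "core_power_mwt": (1000.0, 1650.0),
    "condensation_multiplier": (0.2, 1.0),
    "interfacial_drag_multiplier": (0.2, 1.0),
}

def check_applicability(condition):
    """Return the parameters in `condition` that fall outside the
    reviewed ranges; an empty list means the methodology still applies
    without further justification."""
    out_of_range = []
    for name, value in condition.items():
        low, high = REVIEWED_RANGES[name]
        if not (low <= value <= high):
            out_of_range.append(name)
    return out_of_range

# A hypothetical uprate request that exceeds the reviewed power range.
uprate = {"core_power_mwt": 1800.0, "condensation_multiplier": 0.5}
problems = check_applicability(uprate)
print("needs re-review:", problems)
```

Anything the check flags is precisely the "good question for a reviewer to ask": the demonstrated end-points no longer bracket the proposed condition.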
DR. WALLIS: But they didn't. There's nothing in what they
calculated which says, if we had 25-percent more burn-up and we still
used the same condensation coefficient, we'd be in trouble with peak
clad temperature. There's nothing in what they submitted.
MR. ORR: Well, there are things that would be affected,
like you say, the decay heat, the amount of steam generated.
DR. WALLIS: So, you're going to use a number like .6?
MR. ORR: I don't know. You know the problem when you get
to it and you ask enough questions and you pare it down to where the
real problem is.
CHAIRMAN POWERS: I think you've given me my answer. I
think you've said now that you can't use the code willy-nilly; you've
got to justify to me why you used this code.
MR. ORR: In any licensing action, you look at the affected
things and you see how much of the stuff continues to be justified as
useable and how many items in that justification are not still justified
for use, and like I said, the frequent one where I've found problems
with models in power up-rates, for instance, is not LOCAs, it's
That doesn't mean it wouldn't be LOCAs. I'm just saying the
ones that my history -- the history of a lot of those reviews has been
is that people fail to look at over-pressure protection and they use
reference analyses that are not within the range that the original
analyses -- their up-rate doesn't fall within the range of those
CHAIRMAN POWERS: Would you ask questions about the
MR. ORR: Would I?
CHAIRMAN POWERS: Yes.
MR. ORR: I think we have in certain circumstances,
especially considering that issue that came up about a year-and-a-half
ago or two years ago. We've been very concerned about oxidation,
especially higher burn-up oxidation.
DR. KRESS: If, for all these changes, you go back and ask
if the code is still applicable for these changes, what then is the
purpose of having an approved code?
MR. ORR: Well, in a lot of cases, most cases in fact,
they'll be able to show that it still is applicable and the ranges that
we've assumed in the ranging here are still applicable. That's the
overwhelming preponderance of the ones that we see.
DR. KRESS: It makes this showing of still applicable easier?
MR. ORR: It makes it easier. That's the whole reason why
you have a generic model and then you have a referencing plant. It's so
that it simplifies the review so you don't go through four or five years
on a three- or four-loop plant to approve a plant-specific model.
You've got one generic model and you've got a plant-specific
model and it might take you two or three months to review that
CHAIRMAN POWERS: I'm going to have to ask the Chairman if
we've got more to go here.
DR. WALLIS: Yes, I was going to ask that question, but
every time I try to ask it, someone seemed to have another question.
CHAIRMAN POWERS: It's been very interesting and, I have to
say, very well-presented material. I've enjoyed both speakers on this
You'll give it back to me?
DR. WALLIS: I think you've taken it, sir.
CHAIRMAN POWERS: I've done my best, I'll put it that way.
It's just we have an action-packed afternoon agenda. I would like to
recess now until 10 minutes of two.
[Whereupon, at 12:50 p.m., the meeting was recessed, to
reconvene at 1:50 p.m., this same day.]
A F T E R N O O N  S E S S I O N
CHAIRMAN POWERS: The next topic is the Phase 1 standard for
a high quality PRA, and I am going to turn to the member with a limited
knowledge on the area of probabilistic risk assessment, and I am sure he
will struggle along to keep -- get us informed and up to speed on what
the fundamental issues are here.
DR. APOSTOLAKIS: Thank you, Mr. Chairman, for your kind remarks.
DR. APOSTOLAKIS: Well, as we all know, this agency is in
the middle of developing a new approach to regulations, risk-informed,
performance-based. The effort has been intense the last three years or
so, and every single time we produce a regulatory guide or we review a
new effort by the staff, the question of the quality of the underlying
PRA always comes up.
CHAIRMAN POWERS: And never gets answered.
DR. APOSTOLAKIS: In fact, I saw that in her speech at the
Regulatory Information Conference this morning, Commissioner Dicus said,
and I quote, "Because the use of PRAs is a vital part of the process,
there must be some standard for them. They must have functionality and
credibility." So this is the issue we are facing now. How do we
establish a standard that will help people do PRAs that will have
functionality and credibility, at least in the eyes of the decision
makers?
There have been efforts in the past to collect methods for
doing PRA and so on, and the PRA procedures guide comes to mind, of
maybe 15 -- 17 years ago, but that was different. That was not a
standard, that was just a collection of methods that people were using
at the time.
The IPE exercise showed that it is very difficult to tell
whether differences in results are due to differences in analytical
methods alone, or a combination of analytical methods and design
differences. However, as I understand it, when the staff was reviewing
some of those submittals, they turned them back with specific questions
about specific approaches, and they were found to be unacceptable. That
was not the rule.
So there is clearly a need for a standard. It is not an
easy thing to write a standard like this one because you really don't
want to impede the progress that we have on the methods and developing
new tools, analytical tools and so on. However, there are certain
minimum requirements that a PRA must satisfy to be declared an adequate
or a good PRA. So this effort, under the auspices of the American
Society of Mechanical Engineers, is to address this, exactly this issue.
And today is the first time that this committee will have an
opportunity to find out what the whole effort is about. Although we
have already read the draft standard, I think it will be
very, very helpful to have this exchange here and presentation and so
on. So without further ado, I will turn it over to you.
MR. BREWER: Thank you.
DR. APOSTOLAKIS: Please identify yourselves first.
MR. BREWER: Dr. Bernsen is the Chairman of the ASME
Committee on Nuclear Risk Management and he will lead off.
MR. BERNSEN: Right. I am Sid Bernsen, Chairman of the
Committee and with me here today we have Ron Simard, who is Chairman of
the project team that has been developing this document, and to support
Ron today, we have Mary Drouin and Duncan Brewer who are members of the
project team.
You gave half of my talk, but that's all right. In the
interest of getting to the meat of what we want to discuss today, I may
shorten some of my remarks. You will have copies of the slides. Some
of you on the PRA Subcommittee at ACRS have probably seen this. I
don't plan to go over it in detail, but it gives you some idea of how
long ASME has been around, the size of the organization, and the size of
the support staff that ASME has.
With regard to codes and standards development, it all falls
under a Council on Codes and Standards and we have more than 600
published codes and standards, and lots of committees, 4,000 volunteers,
including a lot of people who are supported by the various government
agencies, and there is substantial staff support for the activity.
Within the nuclear arena, we have -- the Board on Nuclear
Codes and Standards manages the activities of a number of committees,
subcommittees, the Committee on Qualification of Mechanical Equipment,
and the two subcommittees of the boiler code, BPV stands for Boiler and
Pressure Vessel Code, on design and the Subcommittee on Inservice
Inspection, and then we have the Committee on Operation and Maintenance
-- I think some of you have heard them present their cases using
risk-informed methodology -- Quality Assurance and so on, and the last
of these is the Committee on Nuclear Risk Management.
Now, I forgot to do something that I should have started
with, and all of the NRC staff people understand this, but I have got to
tell you we are representing ASME Nuclear Codes and Standards today, but
the comments we make are not necessarily the position of the Society,
they are our own individual observations.
The Committee on Nuclear Risk Management has a charter to
develop and maintain standards and guides on risk management techniques,
supporting PRA applications and performance-based applications. We
really want to focus on those things that are needed to support our
other codes and standards as well as -- if, in fact, they support the
community as well, meaning that it is pretty hard to define how far this
goes, because when you use a PRA to make decisions in the area of
quality assurance, or inspection, or testing, what equipment is
important and so on, you really find that you need to look at a large
scope of a PRA in some cases. And under the Committee there is a
project team right now developing the standard.
I think the most important element I wanted to cover is
that, in parallel, or even before we started this activity on the PRA
standard, ASME had undertaken an effort to redesign their process for
developing codes and standards. We tried to find a way that would
expedite the process, provide for more public participation earlier on,
and streamline the way we did things. And I would say, personally, it
is a good thing we did because we elected to use that process in order
to develop this particular standard.
And, in contrast, probably a number of you are familiar with
traditional ways that things like this get developed. You form a group
of people and they chew on it, and chew on it, vote on it, and,
eventually, they come out with a product which then goes to the next
level of hierarchy within their Standards Committee and they do the same
thing again. And, of course, each time you develop more advocates for
the thing that has been developed, and it is hard as heck to get them to
listen to the other point of view. Eventually, it gets out of the
committee and then it goes to the board and the public for more
comments, and the process, we have found, can take almost forever.
So we decided that one of the things we wanted to do was
require the use of a project approach where you have a team that is given
an assignment, and they are charged with the job of following it all the
way through, so that there is somebody that is always in charge of
pushing the thing along and resolving comments along the way.
The other thing we said was it is very important that you
get a product out quickly and you get review and comments from a broad
constituency -- broad segment of the Society, in general. You get that
input before you really establish your position, so you want to get a
draft out early, and then you want to submit it to broad review. That
means, of course, that the public is going to see these things at an
earlier stage of their development than in the past, where they didn't
get out until they had been carefully massaged by the committee. So
that in doing this particular standard, we have tried to get it out for
general and wide public review at an early stage.
And then after the comments are received, they all need to
be addressed and, if necessary, and I will show you a timeline later on,
then another review cycle would be undertaken on those substantive
changes that are made to the document in response to the comments. Then
after that, it is submitted to the committee for a vote and the board
action is one of oversight and not a rehashing of the document after it
has been approved by the Standards Committee. Then, of course, we do
intend to submit it to ANSI for acceptance as an American National
Standard.
Now, I guess, and I know I am taking some of Ron's stuff,
but we have sent this document out to well over 100 people worldwide
requesting review, and, of course, it is available on the Internet as
well, so that we are expecting a lot of comments. In my opinion, the
more we get, the better, because that will just make the document much
more robust. We will understand the different points of view. We will
be able to address those and come up with something that has real
strength and real value behind it.
But at the same time, I think we all need to prepare
ourselves, and I have been talking to the work group and others about
it, that you are going to get opposing views and we are going to have to
figure out how to resolve those things, but we are going to do it in a
public process. We are going to do it in a documented process, and I
think the product is going to show the value of that approach.
The time-line -- this all started late in '97, when we and
NRC had some discussions and it was agreed to develop this standard, we
formed a project team -- well, we had approval from ASME Council on
Codes and Standards to undertake this effort.
We formed a project team in early '98, with an expectation
of getting it all done in a year. Well, we haven't made that schedule,
but I think that's indicative of the process. We are interested in
quality first, and schedule's important, but it can't be a driver.
We formed the committee. We have an initial membership.
One of the requirements for our standards committees is that there's a
balance of interests of the various groups involved and we can talk
about what balance means; we may have a slide on that.
Where we are now is we're here today; we want to get your
questions, comments, feedback. We're going to have a public meeting on
the 16th, with May 1 the close of the review and comment period.
The project team will then get together, and by mid-July, we
hope to -- let's see, yes -- decide -- I suspect we will have a second
cycle, so that, if we're realistic, probably sometime in the fall, in
October, we should be able to submit this for ballot.
CHAIRMAN POWERS: Let me ask you a question about this
balance among the project people. I don't know who is on your panel,
but can you give me some idea of what kind of balance?
MR. BERNSEN: There's no requirement for balance on the
project team. We looked for and obtained, I think, a very good strong
group -- I don't know whether we have a list of the membership of the
team here.
The project team -- there isn't a requirement for balance on
the project team. This is an idea of the membership of the team, but
the committee itself has to meet certain criteria.
No more than a third of the membership can represent one
interest group, and we want representation from regulatory, from
industry, from owners, from manufacturers.
CHAIRMAN POWERS: Will I see any representatives of what I
would call the public interest here?
MR. BERNSEN: I don't think we have -- as far as I know, we
don't have someone that you would probably identify as public interest.
CHAIRMAN POWERS: I have seen in the past a report coming
out of Greenpeace that was relatively critical of the PRAs of the time.
Does that get a voice in what the PRAs of the future should look at?
MR. BERNSEN: I guess my answer to that would be the
mechanism that exists now would only be through the fact that there is
-- there are public announcements of the product at various stages in
the process, and the public comments need to be addressed.
Both ASME requirements as well as ANSI requirements provide
for that, so that you have that, but I would have to say that, as a
pro-active measure, we haven't sought that type of participation or
input at this stage.
CHAIRMAN POWERS: That was a relatively thoroughgoing report
out of Greenpeace on this. I mean they obviously have someone that has
an interest that's looked at this.
MR. BERNSEN: It would be useful to look at it.
DR. MILLER: Well, balance is not unique to this process.
Balance has been tradition. Is that right?
MR. BERNSEN: That's right. This is the traditional balance
that applies to all of our codes and standards work.
DR. MILLER: In the past, have you had membership, as Dr.
Powers says, of the public? I think the standards committees I've been
on have had membership of the public.
MR. SIMARD: Not as a rule, no.
MR. BERNSEN: Generally not.
CHAIRMAN POWERS: One of the features of a PRA is that
there's an attempt to estimate some sort of risk.
Now, there have -- there's been a tradition within the
community to choose things like core damage frequency. Sometimes a
quantity gets used that I have to admit I have absolutely no
understanding of at all, called large early release fraction.
It's not evident to me that that's the exhaustive set of
risk measures that one would want to get out of one of these
assessments, and I'm wondering if it isn't the public that ought to have
some sort of input on what are the appropriate risk measures to be
calculating with these PRAs.
MR. BERNSEN: I could go a little bit -- give you my own
personal opinion. That is that, as you will hear, this particular
proposal, this particular product just goes as far as core damage and
large early release fraction and hasn't gone beyond that into the full
level two and level three area.
When you get to the level three area, you're starting to --
trying to estimate what the impact is on the public. I think it's
certainly appropriate to do that. But at this stage, we're still in the
technical area where we're talking about the calculations to that level.
But it's not that we're trying to avoid, you know, input
from everybody. It's just that it doesn't seem to be that critical at
this level of development of the document.
What I want to do --
DR. SEALE: I was going to comment.
I've heard expressions from, actually, people who are on
your list there that represent the community of nuclear practitioners of
one sort of another indicating that there, in fact, might be other risk
measures besides the ones that have been spoken of here which are of
greater importance or greater interest to them in making the kinds of
assessments that have practical value in the day-to-day decision-making
process of, let's say, non-catastrophic but nevertheless events whose --
well, things that can use the guidance of a risk assessment to tell you
the best way to do the job.
That is, end states which are more directed toward the
decisions that are involved and how are we going to get through this
refueling outage or how do we take care of this kind of on-line
maintenance activity and so forth, that suggests that you're really --
and the standard may want to direct your attention more to general
principles of how you manipulate information, how you define hierarchies
of relationships between parameters and so forth, and leave the actual
specification of an end state as an exercise for the user.
MR. BERNSEN: Well, see, that's the kind of comment -- we're
looking for substantive comments and suggestions on how we address these
things in a meaningful way, and so, we want that kind of dialogue.
DR. SEALE: Okay.
MR. BERNSEN: What I'm going to do now is -- I've already
run beyond my planned schedule anyway. I'm going to turn it over to
Ron. That doesn't mean I might not step in later but give Ron Simard a
chance to walk you through --
DR. APOSTOLAKIS: Let me interrupt for a second. The
schedule shows three hours for this, but the three hours were not
intended to be spent with you.
Can we spend only two hours with you and then -- because we
would like to have a discussion among ourselves, with you present, if
possible, regarding our letter and how we want to do it.
MR. BERNSEN: We only planned on about two hours.
DR. APOSTOLAKIS: Very good. Okay.
DR. KRESS: Before you leave this comment that Dr. Powers
made on other risk measures that might be important, when I read your
draft standard, I got the impression you were focusing on core damage
frequency and frequency of fission product release. It seems like both
of those are things that are common to all the risk metrics, whichever
one you want to choose, and that, by focusing on those two, you've
encompassed just about any risk measure one wants to adopt.
MR. SIMARD: My name's Ron Simard. I'm an ASME volunteer
and a member of the Board on Nuclear Codes and Standards, and Sid
referred to a time I guess about 14 months ago when we were approached
by the NRC, and if you think back to that time, remember about 14 months
ago there were a number of risk-informed applications before the NRC,
proposals to modify in-service testing or in-service inspection
programs, for example, that are based upon ASME work products.
One of the needs that was identified at that time was for
some kind of standard, something that would give people an assurance,
both on the utility side and from the point of view of the NRC reviewer
who had this application on his desk, that the dependence upon PRA
insights for that particular application was justified. In other words,
for that particular application, given its complexity and all, the PRA
that was used was robust enough, it had all the right stuff in it that
you needed if you were going to rely upon the insights. At the same
time, from our point of view we had these products out there,
risk-informed code cases. We were encouraging all the committees that
report to us on the Board to more risk-inform their activities, so there
was value to us.
So what we set out to do was to produce a standard that
would help to standardize and reach agreement on the level of PRA
quality and robustness and completeness that was necessary for any given
complex or simple application, such that the amount of review that
was needed by the NRC would be minimized. So the project team that Sid
referred to was charged to develop a standard that would do that, and to
use this redesign process that he mentioned.
Now our deliverables include two things. Remember the
hierarchy. The hierarchy is that there's a committee that reports to
the Board on Nuclear Codes and Standards. The committee meets the usual
requirements of ANSI for consensus and diversity of interests, and
that's the group that is going to formally vote upon this draft when we
give it to them. So what we owe them is we owe them a draft that in our
opinion is defensible and reflects a broad range of comment and input
from the public as Sid pointed out at a time when it's most meaningful
and can influence our thinking.
The second thing we owe them is a summary of the comments we
received so that they can see what were the comments and what did we do
about them.
CHAIRMAN POWERS: I think you must be very brave. Because
if my boss attempted to give me a charter like that, I would hand it
right back to him. "What," I would ask him, "is an acceptable level of
quality? And confidence in whom?"
I mean, how do you answer those, how do you define those,
how do you engineer a system to those kinds of standards?
MR. SIMARD: We deal with questions like that all the time.
Let me just remind you, you'll hear us use the word "consensus" a lot.
The ASME develops consensus standards. So this is the -- in the
definition of consensus, what you do is you take a question like the one
you just proposed, sir, and you expose it to a diversity of opinions,
and you do your best not to achieve unanimity, because you never will,
but at least to honor the diversity of views and to reach a point where
you've satisfied at least a majority if not everybody.
So it's hard to answer your question specifically, but
generally I think what you've put your finger on is the nature of the
consensus process. If you go to number 11 now -- so what we did
when we formed the team that's going to produce the draft and solicit
the public comment, is we tried to draw upon the people who we thought
would be best equipped to help us answer that. So in parentheses you
will see a number of utility organizations that at that time had been
involved in risk-informed applications. And in addition we made a point
of adding someone from each of the four NSSS owners groups who had been
active not only in their various risk-informed applications but also
what was called the PSA certification process that you may recall was
really just getting started I guess about 14 months ago. It was still a
BWR owners group activity, and it was still in the early stages.
We also tried to draw upon expertise in specific areas of
PRA development and use, and of course the NRC. So our hope was that
this group of people could at least make a reasonable stab at answering
the question you posed, and then reach the point where we are today,
which is where we now put it out and expose it to a broad range of
comment and opinion.
CHAIRMAN POWERS: Are you relying on an expert judgment
process here to tell you whether you reach adequacy or not? I guess
what makes me uncomfortable is I'm not sure that this is the group of
people that has to be satisfied. What I forecast when I think about
evolutions that use PRA in the regulatory process and then how we design
nuclear plants is shock on the part of the public looking in on the
subject, because they will see things that from an apparent perspective
are being allowed to no longer come under close regulatory scrutiny
because someone has taken an elaborate and incomprehensibly complex
technology of cut sets and probabilities and said gee, fire protection
seals aren't very important here, so we won't regulate them or we don't
have to spend a lot of time on them.
And yet the public says gee, fire protection seals, they
sound kind of important to me and they used to regulate them pretty
tightly, and yet these guys are coming along with this arcane
technology, they talk about probabilities and cut sets and things like
that, and I don't understand that, but I sure understand fire protection
seals. And why aren't they regulating that anymore?
So what is making me uncomfortable is I'm not clear that
you're seeking the opinions of the people who are likely to be affected
by all this, or think they're affected.
MR. SIMARD: Well, you know, I think one of the challenges
for all of us who get involved in this activity is to make clear
especially to people who aren't that familiar that this is an arcane and
kind of daunting technology, but -- and the point here may be subtle, it
may be some of us in this room recognize it, but it is hard to
communicate outside this room.
And the point is that we are trying to develop something
that will support a risk-informed application rather than something
risk-based, and to those of us, you know, within these walls, the
distinction is that risk-informed means that yes, you rely upon the
insights from this complex analysis to some extent, but you still go
back to your engineering judgment, your years of operating experience
and so forth, and the dependence that you place upon this technology in
making your decisions is somewhat limited.
Now as --
CHAIRMAN POWERS: So you're seeing this as just another
layer of regulation then.
MR. SIMARD: Let me ask either Sid or Duncan or perhaps
Mary, members of the project team and, you know, people who have been
familiar with the use of PRA insights and applications if maybe we can
make a meaningful distinction here.
Sid, do you want to --
MR. BERNSEN: Let me just give you a personal reaction
first, and I'll let the experts talk afterward. But first of all, what
we're talking about is there is in fact and has been going on for years
use of PRA results in regulatory decisions and safety decisions. We
can't deny that. It's been going on.
Even before we had PRA, we were developing safety criteria
and rules and regulations based upon expert judgment, which made some
things important and some things too important and some things
unimportant. That's been going on since the beginning of the safety
considerations for nuclear plants. Nothing new. This tool gives us a
lot more insight. We've been taking advantage of the insight.
The thing we're talking about today is not how you use it in
the regulatory process, but what kind of tool should you have and how do
you factor that into your decision making process. I don't think that
we can possibly in two hours discuss the important philosophical
questions you raise. I'm not denying they're important, but
nevertheless, the thing is that we're not talking about anything that's
all of a sudden new. We're talking about something that's evolutionary.
MR. SIMARD: Turn to slide number 12.
One point I would like to emphasize today is that -- I think
it was around February 1st, this was announced on the ASME web-site, and
e-mail notices were sent to well over 150 people and organizations that
it was available, and we are seeking their public comment, and one of
the things that they were asked to do is, in addition to the standard,
they were asked to take this.
I'm holding up a copy of a white paper that is on the
web-site with the standard, and the white paper attempts to explain our
approach here, and in particular, it explains the PRA that we describe
in here and how it's meant to be used.
It emphasizes that the real heart of this standard is a
process, a process by which you've got a specific risk-informed
application, a proposal to make some modification, for example, in the
way you're going to use an ASME product, but it's a specific
application, it could be relatively simple, it could be relatively
complex.
The heart of this standard, as the white paper explains, is
a process whereby you do three things.
First of all, you look at the reference PRA that's in here
to determine which of the technical elements would be necessary to
support that particular application. Maybe you don't need a human
reliability analysis, for example.
Second, you then compare your plant PRA to that reference,
and finally, if there are differences, you evaluate the significance of
any differences between your plant PRA and the one that's in this
standard, and then you're kicked out to other means, like use of expert
judgement or, you know, bounding analyses or whatever, but our
expectation is that almost all of these things are going -- to some
significant extent, almost all these applications will require that
augmentation, so that the use of the PRA is not the primary factor you're
relying on.
The other point we tried to make in this white paper is that
the reference PRA in this standard is meant to be used only in the
context of that process I just described to you.
It does not -- the PRA in here is not the project team's
view of what a plant PRA ought to look like. It is not our intent to
put out a description of the PRA that every plant in the U.S. ought to
have.
It was our attempt -- again, using the admitted level of
subjectivity that's inherent in this, it was our intent to try to define
a PRA that would give a realistic estimate of base-line risk,
recognizing that there is tremendous reliance still upon engineering
judgement and so forth.
So, the last point I would make on this view-graph that's up
here now is that we didn't want to simply put this admittedly complex
standard out on the street for comment even with this white paper and
just sit back and wait for the comments.
So, we have arranged for a public meeting to be held in this
building next Tuesday, March 16th. The NRC was kind enough to let us
borrow the -- to let ASME borrow the auditorium here, and the purpose of
that meeting is to give reviewers a chance to come in and ask members of
the project team why we took the particular approach we did, and without
getting into, you know, technical details and so forth --
MR. BERNSEN: Which slide are you on?
MR. SIMARD: I'm still on number 12.
MR. BERNSEN: Twelve. Okay.
MR. SIMARD: Turning now to number 13, just briefly, the
other thing we wanted to do is bite off a reasonable-size chunk.
So, in this -- our first effort here is to address a level
one PRA, and as Dr. Powers mentioned, the metric here is core damage
frequency, but we did get into a level two PRA, large early-release
frequency, along with some specifications of what information you would
carry over from your level one to your level two PRA, and we limited
ourselves in this go-around to internal events at full power operation,
and the intent was that, later, the scope could be expanded, for
example, to pick up shut-down or to complete the level two requirements,
and that this standard could either incorporate or perhaps reference
work in this area that was done by others.
DR. KRESS: Do you have a bullet somewhere that says
something about uncertainties, including uncertainties?
MS. DROUIN: Uncertainties are covered right now in the
standard.
DR. KRESS: It's part of the current standard.
MS. DROUIN: Yes, it's part of the current standard.
DR. KRESS: As I read the current standard, it looked to me
like it did include a full level two.
When it talked about the simplified LERF, it sort of
dismissed it with one line that says see NUREG something or other, and
then it proceeded to talk about fission product release, transport, and
source term. Did I read that wrong?
MS. DROUIN: What it does, when you get into that part of
the standard, there's two options.
You can either do the limited LERF, and then we reference
NUREG/CR-6595, but we also wrote details, if someone elected not to do a
limited LERF, then what the standard would be for doing a full LERF
that's not, you know, the simplified approach that --
DR. KRESS: But it's still a full level two, right?
MS. DROUIN: Not quite.
DR. KRESS: Where does it fall short of being a full level
two?
MS. DROUIN: It does have some stuff in there on source
term, but it doesn't really get into, I would say, a lot of details
there that you would normally see.
DR. KRESS: I'll go back and look at it.
MR. SIMARD: Dr. Apostolakis, let me ask your advice.
At this point, again, looking at the amount of time you'd
like to spend, I can go over -- I think there are five view-graphs that
summarize the process here, and I want to end on a view-graph that
reinforces what we are looking for right now in this comment period.
DR. APOSTOLAKIS: I think you should go ahead. I think this
is important to understand the process.
MR. SIMARD: I think one of the lessons we've learned
already is we need to rearrange this. It's in a very linear fashion.
It starts off with the technical requirements, documentation, peer
review, configuration control, the process comes last.
The process needs to be first, and I would hope that we're
able to focus people on the process in here, because we need feedback
now on whether it's useable. How burdensome is it? How practical is it
both from the utility and from the NRC reviewer perspective?
So, the process, as we have laid it out in this current
draft, has seven steps to it.
In the first three steps, based upon the particular design
change or operational change that you are considering in this
application, you would identify the systems, structures, and components
that are affected and the PRA scope and the results that are necessary
to evaluate that application.
In step four, then, you would then look at the plant PRA to
see if it has the necessary scope and results, and then you have a
decision.
If they're missing, you could always choose to upgrade your
PRA so it includes the elements that are missing, or you get kicked out,
outside of the standard, to the more traditional methods, bounding
analysis, sensitivity analysis, the use of expert panels, and you'll see
that again now as we go to step five.
At this point now, now that you've identified the systems,
structures, and components that are affected by the proposed change,
this is where you check to see if they're modeled in the PRA, and again,
you have a decision.
You could always choose, if they are not, to update the PRA,
or again, there are other methods you use to supplement it.
Now, in step six, again, for this particular application,
you look at the reference PRA that's in this standard to determine if
the scope and level is adequate to support the application, and again,
if there are some elements that you see missing from here, within your
application to the NRC, you do other things to make up for them.
Now, finally, step seven is where you're now ready to
compare -- now that you have evaluated the reference PRA and its
adequacy to support the application, you compare your plant PRA to it,
and you now determine if the requirements for the various attributes of
these -- of the human reliability analysis, systems analysis, and so
forth, are met, and if they're not, is that significant?
If it's significant -- you know, if it's not significant,
you proceed. If it is significant, again, you go outside PRA space and
you use other methods.
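The seven steps just described can be sketched as a simple decision flow. This is a hypothetical illustration only: the function name, the dictionary fields, and the set-based adequacy checks are stand-ins for the actual criteria spelled out in the draft standard.

```python
# Illustrative sketch of the seven-step application process described above.
# All names and data structures are hypothetical, not from the draft standard.

def evaluate_application(application, plant_pra, reference_pra):
    """Return 'proceed' if the plant PRA supports the application, or
    'use_other_methods' (bounding analysis, sensitivity analysis,
    expert panels) when a gap kicks the user outside the standard."""
    # Steps 1-3: identify the affected SSCs and the PRA scope and
    # results needed to evaluate this application.
    affected_sscs = application["affected_sscs"]
    needed_scope = application["needed_scope"]

    # Step 4: does the plant PRA have the necessary scope and results?
    if not needed_scope <= plant_pra["scope"]:
        return "use_other_methods"  # or choose to upgrade the PRA

    # Step 5: are the affected SSCs actually modeled in the plant PRA?
    if not affected_sscs <= plant_pra["modeled_sscs"]:
        return "use_other_methods"  # or choose to update the PRA

    # Step 6: is the reference PRA's scope and level adequate to
    # support the application?
    if not needed_scope <= reference_pra["scope"]:
        return "use_other_methods"  # supplement within the submittal

    # Step 7: compare the plant PRA's attributes against the reference
    # PRA; exceptions are acceptable only if they are not significant.
    exceptions = reference_pra["requirements"] - plant_pra["requirements_met"]
    if any(req in application["significant_requirements"] for req in exceptions):
        return "use_other_methods"
    return "proceed"
```

In this sketch, any gap at steps four through six kicks the application out to the traditional methods, exactly as the presenter describes, and step seven accepts exceptions only when they are not significant for the application.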
So, what we are looking for, in particular, during this
period, during this 90-day review and comment period, is feedback on the
practicality of this approach.
My hope is that someone will actually test-drive it on an
application and give us feedback on exactly how many exceptions does he
or she identify between his plant PRA and this one and how much work is
it to do these determinations of significance?
DR. APOSTOLAKIS: Can we go back to slide 19? Step six is a
little bothersome. One would hope that what the standard contains is necessary.
Now, whether it's sufficient for a particular application,
that may be open to question, but if you allow people to determine
whether the scope and level of standard are necessary and sufficient,
you may inadvertently be giving them license to dismiss the standard altogether.
It seems to me that, if you decide to put something in the
standard, that is necessary. You are not claiming it's sufficient.
That depends on the application, but necessity, it seems to me, you
should be claiming.
In other words, I'm already in step five; I have already
completed the first five steps. I have decided which particular SSCs
and so on should be included.
Well, the moment I say that, then the standard should contain
specific minimum requirements for the correct handling of these SSCs and
processes, because if you say that it's not -- it may not even be
necessary, then you're opening up the process, you know, and saying now,
you, Mr. User, decide whether you want to use this or not, and I don't
think that's the idea, right?
Sufficiency I can buy, because you may -- you know, there
may be other things.
DR. BONACA: Yes, I totally agree with that, the necessity.
On the issue of sufficiency, also, it seems to me that if
you find you don't have enough detail in the standard, then you can
augment that, but what if you find there is a flaw in the standard?
Is there any mechanism by which the ASME would be involved
in gathering this information, trying to upgrade the standards?
What I find here is that this is one of the key issues,
okay, the standard and how do you play with it, and there is only a very
small section, probably one paragraph or two, dedicated to that. So I
completely agree on the issue of necessary, but I think the thought process
here requires much more expansion.
MR. BERNSEN: I am glad you -- let me answer the first part.
I am glad you asked that question because it provides an opportunity to
address something I hadn't mentioned before, that is that one of the
things that ASME recognizes, and through our experience we have found it
is true, it is not enough to just issue a code or standard, but one
needs to provide support and maintenance of this thing. I mean we have,
with the boiler code, for example, numerous code cases which deal with
specific requirements for exception. We have an annual amendment
process on most of our standards. These can be handled even more
frequently in the early stages.
We have an interpretation process. On all of our standards
we get requests, inquiries on how to interpret them. In a new area, a
new standard like this, we anticipate a lot of that dialogue. We do --
that's why we form a standing committee, we give it staff support. We
have a procedure and a process for dealing with these things, and our
objective is to turn inquiries around very quickly.
So, yeah, we are just at the starting stage. The use of
this thing is going to require a lot of interaction with the community
as time goes on, and we are anticipating that, we are geared up for it,
and that is one of the things that we are expecting.
DR. SEALE: It almost sounds like the variable here is
not the standard but the particular PRA that you are gauging against the
requirements of the standard. And so you determine if the scope and
level of the PRA for your plant are necessary and sufficient based on
the requirements in the standard for your application, and then you make
your decision, yea or nay, depending on what comes out of that.
But the variable is the PRA that you are going to be
working with, not the requirements of the standard.
DR. APOSTOLAKIS: No, but even in the standard, though, I
can see how the requirements may not be sufficient.
DR. SEALE: It may be incomplete or something.
DR. APOSTOLAKIS: But they have to be necessary, though.
DR. SEALE: Yeah. Yeah.
MS. DROUIN: But not necessary for every single application.
DR. APOSTOLAKIS: No, but you have decided -- this is step
six, this is not step one. In a particular application, for example,
you have decided that you need to do a fault tree for a particular
system. Then everything you guys say about fault tree analysis is a
must. You can't pick and choose. The moment you decide you need to do
a fault tree analysis, then the standard tells you what the minimum
requirements for that are. Now, that may not be sufficient because you
may have to do a few other things, depending on the application, but if
you start from scratch and say now the analyst will decide which
requirements of the standard are applicable, then, boy. Duncan, I will --
MR. BREWER: This is Duncan Brewer, Duke Power Company, and
I am a member of the project team and a member of the committee.
DR. APOSTOLAKIS: And a nice guy all around, right.
MR. BREWER: Okay. If you look at the actual text, the
emphasis is on sufficient rather than necessary. The idea would be that
-- let's say I am doing a fire analysis, exactly the one that Dr. Powers
was talking about. Well, right now, the ASME standard has no
information at all about how to do a fire PRA, so, in that case, it
would not be sufficient, the standard would not be sufficient in order
to handle that scope.
Now, I think there might be some cases where you might be
able to ask the question of whether or not the requirements of the
standard are necessary to perform an application as well, but the
emphasis in the actual text is on sufficient rather than necessary.
DR. APOSTOLAKIS: Well, -- go ahead.
DR. BONACA: The point I was going to make is that you are
asking very specifically guidance on shall versus should, and you have
shall all over the place for that. You know, you are establishing
requirements. And, you know, I didn't see that shall misplaced. I
mean, typically, I saw it as a necessity.
DR. APOSTOLAKIS: Yeah.
DR. BONACA: So there are a lot of elements of necessity that
you are pointing out there.
DR. APOSTOLAKIS: Well, I always thought this was supposed
to contain a minimum set of requirements for a quality analysis. That,
in my mind, is necessary, not sufficient.
MS. DROUIN: And that is a correct statement. When you go
back and you look at the requirements in Section 3, which gets into the
technical aspects of the PRA, those are the requirements for doing a
quality PRA that would produce a realistic estimation of core damage
frequency. But there could be an application that does not necessarily
require you to produce a realistic estimation of core damage frequency.
And as an example, if you take your first element of your Level I PRA
initiating events, and if you look at that, there are shalls and the
shalls are, you know, you shall look at your LOCAs, and your transients,
Now, I would argue if someone submitted a PRA, independent
of an application, for example, and they had not modeled or considered
or evaluated, whichever word you want to put in there, any of the
support systems in their plan as a potential initiator, then I would
question the quality of that PRA in terms of producing a realistic
estimation of core damage frequency. However, there could be an
application under which -- I can't think of one off the top of my head
-- where perhaps they didn't model a loss of, I don't know, instrument
air, and that is not significant or has no impact in terms of the application.
DR. APOSTOLAKIS: But this is step 3, Mary. This is step 3.
Identify the PRA scope and results needed to evaluate the application.
And I agree with you. But in step 6, you have already done that. You
have already decided that you need an analysis of the support systems.
Then there should be no question that what a standard requires for this
analysis is necessary.
See, step 3 is what you just described. So you may decide,
step 3, that I don't need the support systems, that's fine, then it is
out. And I agree with you.
I think we are spending too much time on -- yes, go ahead,
MS. DROUIN: I think it was just a confusion, and a matter
of trying to present this in a linear fashion when it is really not done that way.
DR. APOSTOLAKIS: I understand that, but this is step 6.
MS. DROUIN: And I think we are more into a semantic problem.
DR. APOSTOLAKIS: My comment refers to step 6. I have
already decided what I need to do for this application.
MS. DROUIN: Okay. I appreciate that. And all I am trying
to add to that is that what the intent is is that, again, I think we are
into a semantic problem, is that what we are trying to say is that you
have identified it, and in step 6 is where you are actually making the
determination of the significance of it.
DR. APOSTOLAKIS: Yeah, but I don't think that the
requirements of the standard should be so open. In other words, the
user should decide whether a particular requirement is necessary and/or
sufficient. I can see how they can have flexibility when it comes to
sufficiency, but the necessity, it seems to me, you know, that is the
whole idea of having a standard.
In other words, let's say you are doing again a fault tree
analysis, a system analysis, you must do some common cause failure
analysis, right. That is not up for questioning. Okay. Now, if you do
it, though, that is not sufficient, or may not be sufficient for your
particular application. Okay. You may not be using the right method.
You may be making other assumptions that are not in the standard, are
not optimistic and so on. But it is necessary for you to do it, and
consider certain things.
So, Duncan, I think it is really necessary requirements they
are imposing, not sufficient.
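The common-cause failure requirement just cited is a good example of why a "shall" in the standard is a necessity. A minimal beta-factor sketch (all values hypothetical, not from any plant PRA) shows how omitting common-cause failure can understate a two-redundant-train system's failure probability by roughly two orders of magnitude:

```python
# Hypothetical two-redundant-train system, beta-factor common-cause model.
# q and beta are illustrative assumptions only.

q = 1.0e-3    # assumed failure probability of a single train on demand
beta = 0.1    # assumed fraction of train failures that are common-cause

q_independent = (1 - beta) * q   # per-train independent failure part
q_ccf = beta * q                 # common-cause part: fails both trains at once

p_fail_naive = q * q                    # no CCF modeled: 1.0e-6
p_fail_ccf = q_independent**2 + q_ccf   # ~1.01e-4, dominated by the CCF term
```

Here the common-cause term dominates the two-train result by about a factor of 100, which is why the requirement, once the scope decision is made, cannot be left to the analyst's discretion.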
MR. BREWER: Well, let me try one more thing. For example,
we have performed tech spec changes where we have simply shown the
impact of the tech spec change on the system reliability. We didn't
have to show the impact on core damage frequency or LERF because it was
easy to demonstrate the very small effect on the system reliability.
Therefore, not all aspects of the standard section 3 would be required.
If we had not performed human reliability analysis, but yet
we were still able to show that the change had very little impact on
system reliability, then those parts of the standard would not be necessary.
DR. APOSTOLAKIS: Right.
MR. BREWER: And it wouldn't be necessary that we be able to
say that we were meeting those parts.
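The screening argument Mr. Brewer describes can be made concrete with a small, hypothetical calculation: estimate the change in train unavailability from an allowed-outage-time (AOT) extension and compare it against a system-level acceptance criterion, without propagating it to core damage frequency or LERF. The model and every number below are illustrative assumptions, not plant data.

```python
# Hypothetical screening calculation for a tech spec AOT extension.
# Bounding assumption: the full allowed outage time is consumed at
# every maintenance outage. All numbers are made up for illustration.

HOURS_PER_YEAR = 8760.0

def maintenance_unavailability(outages_per_year, aot_hours):
    """Average unavailability contribution from maintenance outages."""
    return outages_per_year * aot_hours / HOURS_PER_YEAR

base = maintenance_unavailability(1, 72)       # current 72-hour AOT
proposed = maintenance_unavailability(1, 168)  # proposed 168-hour AOT
delta = proposed - base  # increase in train unavailability, ~1.1e-2

# The screening argument: if delta is small against the acceptance
# criterion for the affected system, the change can be justified at
# the system-reliability level, without a full CDF/LERF recalculation.
```

The point of the sketch is the shape of the argument, not the numbers: when the system-level effect is demonstrably small, the parts of the standard governing the downstream analyses never come into play.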
DR. APOSTOLAKIS: And I agree with you and that is, again, step 3.
MR. SIMARD: But I think this is a valid comment.
DR. APOSTOLAKIS: You decided that the scope of the analysis
you need for this particular thing is this.
MR. SIMARD: Duncan, I think this is a valid -- if we were
-- even -- and I hope we will get this in writing. But, you know, even
if we were to decide, oh, well, there's a semantical difference here, I
mean that tells us that we need to change the words and make it clearer.
So, at a minimum -- at a minimum, we need to respond to the point he raises.
DR. APOSTOLAKIS: Yeah, I think both the comments from Mary
and Duncan had to do with whether a particular set of requirements is
needed or not. And what I am saying is that is decided in step 3.
Given that I have to do something then that is in the standard, I have
no choice anymore. I have to comply with what the standard says.
MR. BERNSEN: I was going to say we appreciate written
comments on the standard, but you don't have to give us written comments
on the viewgraphs, they are not standards.
DR. APOSTOLAKIS: I looked at the remaining slides, and I
would say that unless you feel there is one that really we must see and
discuss, maybe we can skip several of them.
MR. SIMARD: Yes, may I just conclude with the last slide?
DR. APOSTOLAKIS: Certainly.
MR. SIMARD: Because one of the points that we are trying to
make every chance we get, we'll certainly make it repeatedly at the
public meeting next week, is that this is not the traditional process,
as Dr. Bernsen pointed out earlier. People are used to seeing a draft
like this when it's fairly well along in the process and there has been
a lot -- and it's rather hard to change.
What we are trying to do is make clear to everybody who sees
this that it's early. We are looking for substantive comments, we're
looking for constructive comments. I mean, sure, we'll accept somebody
who says this, I really, really hate this and it's totally off the mark,
but it would be much more useful to us to have if not suggested word
changes at least a suggested approach. So the message we are trying to
get out is there really is an opportunity here.
It's early in the process. Public input now is meaningful.
We'll use it. We need it. And in particular we recognize that things
have changed since we started out here 14 months ago. The NRC has
responded I think at a remarkable pace to process the risk-informed
applications and to develop significantly its regulatory approach, so
that's another area where we are especially looking for comments,
comments on whether or not the approach you see in this draft comports
with the situation today.
And with that I'll close.
DR. UHRIG: Does the ASME have other standard -- PRA
standards in other fields?
Aerospace, for instance.
MR. EISENBERG: Not at the present time.
DR. UHRIG: So this is really the first ASME standard on PRA.
MR. EISENBERG: Except that we have applications in
operation and maintenance and in-service testing and inspection in the
form of code cases which have been the basis for pilot-plant studies and
DR. UHRIG: This is from the nuclear industry.
DR. APOSTOLAKIS: No, even for other industries.
MR. EISENBERG: At least the published documents from ASME
are in the nuclear industry. We have efforts going forward in the
nonnuclear areas, but we don't have anything published --
DR. UHRIG: There is really no precedent of ASME involvement
in PRA prior to this.
MR. EISENBERG: In PRA as an entity, in a quality PRA, no.
DR. UHRIG: Okay. Thank you.
CHAIRMAN POWERS: I want to just ask a question, and there
may be a transparent answer to it, but you spoke of PRA for internal
events but you explicitly excluded fire. I just wondered why.
Was it a case you thought the techniques were not well
enough developed to be useful or didn't know anything about them or --
MS. DROUIN: It's not that we didn't know anything about
them, but in the midst of this past year it was recognizing that there
was a lot of work going on in the fire area, and just biting off what we
bit off was a lot.
CHAIRMAN POWERS: What is the lot of work going on in the
MS. DROUIN: There is research work going on in terms of
development of fire methods.
MR. CUNNINGHAM: Mark Cunningham of the staff.
There is research going on, but there's also the work that's
going on by NFPA in developing a standard for fire protection that the
committee was briefed on a couple of weeks ago. A part of that
standard's development is some sort of a specification on the standard
for a PRA.
CHAIRMAN POWERS: So you just didn't want to get into a
conflict between two standards. I understand that. Probably a good idea.
MR. BERNSEN: I wanted to point out that in undertaking this
effort we recognized that there are other standards-developing
organizations that have a strong interest and involvement in this, and
ASME has not been interested in taking the whole thing and doing the whole job itself.
We recognize that there are pieces and parts of this that
might be more appropriately done by others, and among the things that
we're doing is setting up a high-level coordinating group to replace
things like NSB and so on to work out, sort out the needs and who's best
qualified and appropriate to do them, and of course we've had dialogue
with ANS, and I should also say very good support from ANS in terms of
people at liaison in developing this part, and they have an interest in
developing some other activities too that we're going to cooperate with
The thing is we want to make sure that there is a structured
approach here that's easy for the user to handle. We want to avoid
redundancy and duplication, be consistent, and utilize the talent where it's
most available and the resources are available. And we certainly hope
that like NFPA would take the lead in the fire area and all we'd have to
do is reference it. Because as you will see in the document they've
turned out --
CHAIRMAN POWERS: Let's hope that your standard fares better
at the hands of this committee than the NFPA did.
MR. BERNSEN: Well, I mean, my point is that there are lots
of references in this document, and we certainly don't want to reinvent
the wheel. If something exists that's usable and can be referenced,
incorporated, that's a better way to do it. And we in fact, you know,
have a group looking at what the future needs are and who is best
qualified to do them and make recommendations in that area. So it's
going to be a standards community effort and not only an ASME effort.
DR. APOSTOLAKIS: Okay. Maybe we should move ahead now and
start with --
DR. WALLIS: Could we go back to the first question about --
I'm still bothered by the word "quality." I mean, most of these
standards refer to sort of scope and completeness and thou shalt analyze
A, B, C, D, E. It's very difficult to know that it's being done well.
We get this in other fields, too, where someone goes through the
motions. And students go through the motions, have answered all the
problems, but they still get a D because something's missing in the
Now I don't -- are we in a position to really assure the
quality of a PRA?
DR. APOSTOLAKIS: There will be a discussion I hope of the
scope, have a section --
DR. WALLIS: Scope's all right. Scope's okay. But there's
nothing in what I read here on my part that assured me that it was going
to be of good quality.
DR. APOSTOLAKIS: Because I think they deliberately avoided
specifying the method for doing something, and I think that was a wise
decision. A criticism of efforts like this has been that they tend to
impede progress in methodology development, so they have stayed away
from it. So yes, indeed, by complying with the standard, that doesn't
mean you're going to get an A in your PRA. Your PRA may still be very
bad, because, you know, the standard says I should do a common cause
failure analysis and I write one line, you know, and I did it. Then you --
So this is supposed in my opinion, and of course please
correct me if I'm wrong, it's supposed to specify -- it was intended to
specify minimum requirements, and I didn't see that word used too
liberally. It should be used. Minimum requirements. You have to do
this, but if you do it, that doesn't mean you're home free. There's
still more to do, because the methods are --
DR. WALLIS: This would save a lot of work on the part of
the NRC. Now what I think I'm hearing is that it's still the NRC is the
arbiter of quality.
DR. APOSTOLAKIS: Yes.
DR. WALLIS: That makes me a little bit nervous.
DR. APOSTOLAKIS: By law they are.
DR. WALLIS: Because I'm not sure that they have really been
tested in their understanding of quality, particularly of a PRA, and in
some other --
DR. APOSTOLAKIS: The NRC?
DR. WALLIS: Yes.
DR. APOSTOLAKIS: Oh, they have pioneered it. I don't know
why you say that.
DR. WALLIS: They define that what they decide is the
decision. It doesn't mean it's quality.
DR. APOSTOLAKIS: Well, there is such a thing as a community
judgment, and depending again on the application, the NRC may decide to
have an internal review and -- or if it's a bigger issue, they go
outside and they form panels and so on in the same way that they handle
every other issue. It's a very open process, I think. But you're
absolutely right that the methods are not here, but I don't think they should be.
MR. SIMARD: And just one observation that goes somewhat to
your question, it may not answer it completely, but if you assume that
the technical elements that are contained in section 3 of this standard
do provide quality, then there is a considerable emphasis here on peer
review, and there's another section in here, section 6, on peer review
requirements whose purpose is to establish that the elements that have
been identified earlier in the standard have been properly interpreted,
implemented, and that any exceptions have been identified. So again,
starting with the assumption, however, that the technical elements
described in section 3 here do represent what you and I might reasonably
consider a quality PRA --
DR. WALLIS: The concern I had too is not only do you have
to recognize quality when you see it, but you have to somehow know that
the capability exists to do it. There are all kinds of wonderful things
here that they're being asked to analyze. I'm not sure that they know
how to do it. And have the requirements outrun the capability to
actually meet them?
DR. UHRIG: Are these not pretty much based on what the
state of the art is now, PRA? The basis for the standards?
MS. DROUIN: Yes.
DR. WALLIS: So there is no problem with outrunning the
quality of the state of the art.
DR. APOSTOLAKIS: What did you say?
DR. WALLIS: Outrunning the quality of the state of the art.
You can list requirements, thou shalt do A, B, and C, D, hoping that
people will know how to do it.
DR. APOSTOLAKIS: No, I think that's a valid point, and as
we go along, maybe you should point out some examples, because asking
people to do something that the state of the art does not allow is not
really a wise thing to do.
So shall we move.
DR. MILLER: But as Ron pointed out, Graham, the peer review
panel is intended to be kind of a quality check by those who should be
able to recognize quality. I mean, that's the way I read that chapter.
Now, again, quality is in the eye of the beholder.
DR. APOSTOLAKIS: Like in a fire PRA, if I put in the
standard, whenever it's written, make sure that smoke is included, then
I know that can't be done. So I'm killing them right there. So this
kind of thing I think we should try to avoid.
CHAIRMAN POWERS: No, you're not killing them, you're
encouraging the development of --
DR. APOSTOLAKIS: Should be included. So what is next now?
Who is the next speaker?
MR. SIMARD: We are done.
DR. APOSTOLAKIS: You are done?
MR. SIMARD: Well, you skipped all our slides.
DR. APOSTOLAKIS: There will be no presentation on the body
of the document?
MR. SIMARD: In the interest of time, what we skipped over
were some slides that described at a fairly high level the content. So,
for example, there were several slides that talked about the fact that
there was a section on peer review requirements here.
DR. APOSTOLAKIS: Well why don't we go down, then, the table
of contents and see whether the members have questions?
MR. SIMARD: For example, beginning with number 23 -- and
the numbers are in the lower right-hand corner of our view-graphs --
that's the table of contents. It's on two slides here.
Now, the slides that follow that, for each one of those
sections, give a fairly brief description of what is in there in terms
of the information sources, the assumptions and limitations, the fact
that they are in there, but it does not get into exactly what
information sources or what assumptions were used, because our fear was
that would take literally more time than we had.
DR. APOSTOLAKIS: Well, I -- let me ask the members.
Shall we go down the list and see what questions we have
that perhaps the lady and gentlemen can help us answer?
CHAIRMAN POWERS: I think we need to have a validation of
any criticisms or comments that we want to make. So, I think we've got
to go through those things that we have anticipated as areas of
confusion or difficulty.
DR. APOSTOLAKIS: Okay. So, the answer to my question is yes.
Okay. Well, yes, you have a nice list there. Do you want
to take a break first to get ready for this, or are you ready for this?
Shall we take a 10-minute break? Okay. Let's take a break.
CHAIRMAN POWERS: Let's come back into order.
DR. APOSTOLAKIS: Before we start the discussion, I
understand Mr. Bradley of NEI would like to say a few words.
MR. BRADLEY: Thank you. Biff Bradley of NEI.
I just wanted to spend a few minutes -- I noticed the agenda
discussed industry programs for PRA certification, and I wanted to just
briefly mention the status of that and how we believe it relates to the
As of now, all four of the NSSS owners groups have committed
to and funded efforts to apply the peer review process to their fleets
of plants, and by the end of 2001, essentially every unit in the country
-- right now, there are about five units that haven't signed on, but
essentially, the vast majority of all the operating plants will have
undergone the industry-developed peer review process.
Also, the four owners groups have gotten together and
adapted the BWR process and, through a fairly significant series of
technical interactions, have improved the overall process and adapted it
to all the types of plants.
So, there will essentially be a uniform industry document
that describes the peer review process and will be applied across the industry.
This has been used already successfully to expedite the
reviews of some recent tech spec improvements, such as the Perry diesel
generator AOT extension, and we believe that this is an excellent method
to continue getting expeditious reviews of licensee submittals over the
next couple of years.
With regard to the standard, given the significant
investment the industry is putting into this certification effort and
the fact that it appears to be effective at getting applications
approved -- I mean we're having a pretty good track record now of
getting fairly quick turn-arounds on licensee applications -- one of the
comments, I'll just let you know, that we're going to make as an
industry is that we would really like to see the final standard
explicitly reference the certification or peer review process as it
exists and has been improved by the industry as a means to meet a good
part of the standard, and what we would like to see is that whatever
remains needs to be -- fill in whatever gap is believed to exist
between what the peer review process does now and what ultimate
level of quality you're looking for for regulatory applications, rather
than reinventing the process.
So, I just wanted to mention that, but the industry has put
their money on the line and funded these efforts, and like I say, by the
end of 2001, essentially all the plants will be through it.
So, we want to build on that and not -- you know, when the
standard does come out, we'd like to be able to certainly use that
effort to meet as much of it as we can and we'd like to see the standard reference it.
DR. KRESS: You see the two, then, as being complementary.
MR. BRADLEY: I would certainly hope that they are.
DR. KRESS: Since your peer reviewers would take this
standard and see if the PRA actually meets it, would be one way they
could be complementary.
MR. BRADLEY: Right.
Now, we talked about the issue and how do you really know
that you have quality even if you have a standard describing what needs
to be there, and the peer review process --
DR. KRESS: That would be a function of the peer review.
MR. BRADLEY: That's how you do that.
There are some issues with regard to differences in the
technical criteria that are in the cert process versus the standard, but
we'd like to see those comported to come into -- you know, by doing one,
that we've met the other to the extent possible.
So, that's going to be the thrust of the industry comments
next week and forward.
DR. MILLER: Biff, in the standard, of course, you know,
there's a chapter on peer review requirements. Would you see the
certification process as supplementing that or replacing that or being
consistent with that?
MR. BRADLEY: As Ron mentioned, you know, the standard is an
early draft form now, and our intent would certainly be that, in its
final form, we would like to see the two come into one, to the extent
So, yes, the answer is we would like to use that process to
meet the standard.
DR. BARTON: It sounds like the certification process may be
complete before the standard is out.
MR. BRADLEY: Right.
DR. MILLER: And the certification process is certainly
going to be far more elaborate than the peer review chapter in this
standard. I would hope that it would at least be consistent and not in conflict.
MR. BRADLEY: Okay. Thank you.
DR. APOSTOLAKIS: Thank you.
Okay. General requirements -- any comments from any member?
I have a comment.
The scope, 1.1, says "This standard sets forth the criteria
and methods for developing and applying PRA methodology to commercial
nuclear power plants," page two.
I think "and methods" should be deleted. That comes back to
the comment Professor Wallis made earlier. Methods are not part of the standard.
By the way, a matter of process here, our guests were not
prepared -- are not prepared to answer detailed specific questions.
There were many other people involved in the actual writing of the document.
So, I suppose what we're going to do in the remaining time
is raise issues of a fairly general nature and see whether our guests
have any comments to make, but you don't necessarily have to comment on
these things, but at least you will have the transcript to use later,
but also, the committee will deliberate among itself today and tomorrow
and write a letter, in which, of course, we will document our concerns.
So, you will get a written document from us. Actually, we
will send it to the EDO, I guess, but of course, you will get a copy.
So, don't feel that you're under any obligation to respond
to every single comment that is being made.
Now, if you wish to comment, that's fine, in case you feel
we are wrong or we misread this. So, I suggest that "and methods" be deleted.
Now, this is not just words, we're not going to do that, but
I think it's important in the scope to state that you are setting forth minimum requirements.
In fact, as I said earlier, I like these words "minimum
requirements," and I remember, a year-and-a-half ago, when we were
discussing it at lunch with some of the NRC people as to what the
standard was going to be all about, that they told me -- and I was --
you know, I thought it was a good idea, that the standard will not
specify methods, it will specify minimum requirements for quality. So,
let's use these words also here somewhere.
MS. DROUIN: I just want to answer that.
When we were writing this, you used to read -- in earlier
versions, you used to see the word "minimal," "at least" -- Duncan, you
might help me out here, but what we were directed by the person in our
writing group who is familiar with the protocol, I guess, for writing
ASME standards is that, by definition, it's minimal, so you don't have
to use the word "minimal."
So, we went through and took that word out everywhere.
DR. APOSTOLAKIS: Well, I think it's important to specify
that it's minimal, because it comes back to the issue of necessary versus sufficient.
DR. SEALE: Your public is not a group of people who are
necessarily well-versed in the expectation on all standards. They are
people who are going to be using PRA techniques.
So use the word for their benefit.
MR. BERNSEN: Thank you. We certainly need a written
comment on that. We have had a policy against minimum requirements in
the sense of saying that if you have additional requirements they should
be contained in the code or standard. You are establishing a set of
requirements that are necessary to meet the document but as I say we
need the comment and we'll address the comment and maybe we can find
some other words to do the same thing.
We understand what you are saying but at the same time our
counsel has said, look, you can't say minimum requirements because if
there are other requirements you want to incorporate them in your standard.
DR. WALLIS: I have a problem with minimal and quality in
the same document or described in the same document. Minimal -- when we
describe what students have to do to get grades, which we often point
out to them after we have awarded them -- now minimal understanding of
course material is something like a "D" --
DR. APOSTOLAKIS: For graduate courses it is a "C."
DR. WALLIS: That is what you are looking for, and so a good
understanding is probably a "C" -- I don't know if this is what your
standards are, but to get something that is high quality, you have to
have outstanding, excellent, all kinds of stuff like that -- that is the
DR. APOSTOLAKIS: That is correct.
DR. WALLIS: What are you aiming for here?
DR. APOSTOLAKIS: Not the latter, so I think the discussion
on the scope should elaborate. I don't know if you want to use the word
"minimal" but you should definitely make it clear that you are not
specifying methods and therefore by complying with the standard you
don't necessarily have a quality PRA.
The methods you use to satisfy the standard requirements
will play a crucial role in that determination. Yes, Duncan?
MR. BREWER: In a few cases in the technical requirements
section, methods are actually specified --
DR. APOSTOLAKIS: We will come to that.
MR. BREWER: Okay, and I didn't know whether or not you feel
as though the methods should not be specified.
DR. APOSTOLAKIS: Yes, we will come to that. We will come to
MR. BREWER: Apparently you do.
DR. APOSTOLAKIS: Definitions. Any comments from my
CHAIRMAN POWERS: At least in connection with those parts
that I paid closest attention to, which is the expert judgment part of
it, I found the definitions here not very useful. When I looked at the
definitions prior -- after having looked at the expert judgment I went
back to the definitions because there are a variety of terms that arise
in the Section 3.4 on expert judgment that are not defined in there, and
they appear in the definitions, and where they are found I eventually
found out is they are in a non-mandatory appendix.
That just struck me as just poor standard writing, quite
frankly. You have a non-mandatory appendix that defines the terms that
arise in what I presume to be the mandatory section on expert judgment.
That is not right.
Similarly, you have technical facilitators and technical
DR. WALLIS: And technical integrators --
CHAIRMAN POWERS: -- and you have subject experts and
proponents and opponents and things like that, and they don't show up in
here, but I find them in a nonmandatory appendix. What do I do if I
don't want to use that nonmandatory appendix, and let me assure you that
I do not, how do I go about getting the definitions?
DR. MILLER: That was kind of my -- what are the criteria for
MR. BERNSEN: In fact, we have written some criteria. The
criteria, very simply, are these: if the dictionary provides an adequate
definition, you don't need to define the term. If it is unique, then you define it. You try
to use definitions that are commonly used in other standards.
You don't define terms that aren't used in the standard.
Your comment is well-taken. I am sure we are going to get
that from various sectors.
DR. MILLER: Because I just found a whole --
CHAIRMAN POWERS: Yes, there are things like Level A, B, C
and D -- what in the world is that?
MR. BERNSEN: Okay.
CHAIRMAN POWERS: But the ones that weren't in there were
kind of surprising too.
MR. SIMARD: Just one comment. You will catch us in a lot
of examples I think of what you call poor standard writing here because
we had to do sort of a risk assessment.
We had to balance our desire to get this out and get
comments on our approach, methodology, are we on the right track, with
our knowledge of the fact that we had still more work to do. I mean you
have caught us right there in something that we will obviously fix.
MR. BERNSEN: As I said, the guys and gals are going to have
to have thick skin and not be defensive, because this
stuff, we pushed it out early for review and that is a very valid,
worthwhile set of comments.
DR. APOSTOLAKIS: Now there are some -- I guess you will
give me detailed comments on the definition so we can --
DR. MILLER: I'll give you a list of them --
MR. BARTON: We already gave you some. Mike should have
given you a packet.
DR. APOSTOLAKIS: I have what you gave me.
CHAIRMAN POWERS: You should have four or five pages from
DR. APOSTOLAKIS: On the definitions?
CHAIRMAN POWERS: Well, it brings up the definition problem
that I can't find it.
DR. KRESS: Are you still on definitions?
DR. APOSTOLAKIS: Yes, I am on definitions. Yes, go ahead.
DR. KRESS: Well, I just wondered why they thought it was
necessary to define a severe accident in a standard on PRA which only
deals with -- and in particular why you related it to design basis
accident, which seemed a little strange to me, but that just struck me
DR. APOSTOLAKIS: Yes, there are some definitions that
probably don't belong here. For example, I don't know that Monte Carlo
uncertainty propagation is something that requires a definition. I mean
either you have a section on it or nothing.
This is not a definition. It is trying desperately in 15
lines to describe simulation, but there are also a couple of things that
are not correct; the definition of a failure rate, for example, is incorrect. It
is not the expected number of failures per unit time in a given time
interval. It is a conditional quantity -- given that the item was good
at the beginning of the interval -- but we are getting into too much detail
now, so you will get comments on the details.
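[The conditional-versus-unconditional distinction drawn above can be sketched numerically. A minimal illustration, with a purely illustrative failure rate and interval: for exponential lifetimes, the failure rate -- the failure probability per unit time among items still good at the start of the interval -- stays constant at lambda, while the unconditional expected number of failures per unit time across the whole original population decays with time.]

```python
import random

random.seed(1)
lam, t0, dt, n = 0.5, 3.0, 0.1, 200_000  # illustrative rate, interval start/width

# Draw exponential lifetimes for n identical, non-repairable components.
lives = [random.expovariate(lam) for _ in range(n)]

at_risk = sum(1 for x in lives if x > t0)            # still good at start of interval
failed = sum(1 for x in lives if t0 < x <= t0 + dt)  # failures within the interval

uncond = failed / n / dt        # expected failures per unit time, whole population
cond = failed / at_risk / dt    # failure rate: conditional on surviving to t0

print(round(uncond, 3), round(cond, 3))  # conditional stays near lam = 0.5
```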
Does anyone have a comment that is a bit more general?
MR. BONACA: I have some specific ones and one general
comment, I believe is that in many definitions including for example
accident sequence or station blackout, the definitions seem to be
focused only on those particular sequences that end up with core damage
or breach of containment integrity. I believe that the definitions
should be more general to accommodate also those other sequences which
have the success -- you know, more general.
I don't know -- for accident sequence here, I think -- you
know, it specifies -- this is just an example, that's why I bring it
up -- it's more general than just one, that is a combination of events
leading to essentially core damage or breached containment integrity or
both. Well, there are a lot of other accident sequences that do not end
up with containment failure, so that is a very narrow definition, and I
find that in many places, so my sense is that in looking at those
definitions, they should be expanded to include success paths.
DR. APOSTOLAKIS: Do you really need a section on
MR. EISENBERG: I think normally -- we normally have a
glossary in all of our standards.
DR. APOSTOLAKIS: Then you really have to spend time on
For example, look at the definition of "importance
measure" -- "a mathematical expression that defines a quantity of
CHAIRMAN POWERS: As in pi, right?
DR. APOSTOLAKIS: So I am sure somebody will clean it up.
MR. BONACA: One last comment. In several places I found
where I believe there should have been the expression "operating," it
was "operable," and there is a difference there. For example, a mission
time of an entire 24 hours implies containment spray is required to be operating for
24 hours, not just operable for 24 hours.
So that distinction, I think is important.
DR. APOSTOLAKIS: By the way, station blackout is what, loss
of off-site and on-site AC power?
DR. BARTON: Right.
DR. APOSTOLAKIS: But here it says both of those things and
failure of timely recovery of off-site power.
DR. BONACA: That's not right.
DR. APOSTOLAKIS: That is not part of --
DR. BONACA: Right. Because you have success, you include
those in success sequences when you recover.
DR. APOSTOLAKIS: Okay. Anything else on definitions? I
mean I think they got the message and they will get detailed comments.
As you see, we have read it line by line.
Okay. We are moving on to Chapter 3, but we have to do it
piece by piece. So, scope, I don't know that anybody wants to say
anything about scope.
Plant familiarization. Is anybody unhappy with the plant familiarization section?
DR. BONACA: Well, I have a comment. Simply that under plant
familiarization there is no reference to observing crew performance of
time-critical sequences on simulators, and I believe that that
should be a significant source of information, and I know it is.
Somewhere, particularly on page, you know, 18, where you have examples.
That is an important example to include, somewhere in this chapter I
DR. APOSTOLAKIS: In fact, I thought there were no comments
on the scope, but I do have a comment on the scope. Page 17. Can we
all go to page 17? This diagram, if you look at the first big box,
Level I analysis, technical elements, initiating event analysis,
sequence development, success criteria. That feeds into internal
flooding analysis. Why is internal flooding analysis so important? It
must be an oversight or something. Your silence is deafening.
MR. SIMARD: Our official response is oops.
DR. APOSTOLAKIS: Okay. Okay. Fine.
Now, again, on page 18, it seems to me that familiarizing
oneself with an object of study is kind of something that anybody with
any brains would do. You don't do a PRA on a plant that you are not
familiar with. So I was a little struck by this detailed advice on how to
familiarize ourselves with a plant.
CHAIRMAN POWERS: Well, as a counterpoint, George, I will
have to say that within a community I deal with, I have certainly seen
people attempt to take PRAs in which they have used the as-designed
facility and, in fact, had not attempted to familiarize themselves with
the facility that actually existed as a piece of hardware.
DR. APOSTOLAKIS: Well, sure, but I mean people do do stupid
things. Anyway, okay, fine. But I am really -- I am not sure the word
"shall" should be used so liberally here. Like on 18, 3.2.1,
information sources to be identified and used. The information shall
include the applicable design, operation, maintenance and engineering
characteristics of the plant. I mean, again, in my mind, this is
something you do anyway, but do you really want to say "shall" here, and
all of this applies?
Okay. Fine. Fine. Everybody seems to think that this is
DR. KRESS: I think "shall" is the right word.
CHAIRMAN POWERS: Like you say, I am very familiar with
cases where the opposite was done, so.
DR. APOSTOLAKIS: And, again, there is a provision somewhere
else that if I am dealing with an issue that has nothing to do with
operations, I will not have to do this. In other words, if I am doing a
structural analysis of the containment, --
DR. SHACK: Applicable is the key word here, George.
DR. APOSTOLAKIS: Where is that, do you see the word
DR. SHACK: Should include the applicable design, operation,
maintenance and engineering.
DR. BARTON: The third line of 3.2.1.
DR. APOSTOLAKIS: Okay. Initiating event analysis. Any
DR. APOSTOLAKIS: I was wondering why you avoided listing
the union of initiating event lists in the PRAs, the IPEs that have been
published. Why not help people that way? I mean if I were to do a PRA
now, that is what I would do, I would go out, pick up two or three IPEs,
look at the initiating events they considered and start then screening
them whether they apply to my plant. That would be of tremendous value
instead of describing methods for finding the initiating events.
In fact, for PWRs, BWRs, it seems to me you have a standard
list now of about 20-25 of them, and then you look for the three or four
that are plant-specific, right, Duncan? Is there any reason why you
don't have them here? The union. So you go out -- and Mary probably
has it in her office already.
CHAIRMAN POWERS: Probably in the back of her head.
MR. EISENBERG: May I ask where that is published? I mean
DR. APOSTOLAKIS: She has reviewed all the IPEs.
MS. DROUIN: Well, I mean there is one list that was
published in NUREG/CR-4550. Everyone pretty much uses, when you talk about the
union, it is from the same set of references which was compiled and is
DR. APOSTOLAKIS: I am sure the author of this section,
given a day, can put together a very nice list, or even half a day.
MR. SIMARD: If I may make a comment, since you brought up
the subject of references.
DR. APOSTOLAKIS: Yes.
MR. SIMARD: Some of you may have comments on references,
either at the end of this section or other words. I should point out
that we still have some cleanup to do here. For example, on the
references, the three references at the end of this section on
initiating events analysis, we still need to check whether they are
acceptable to ASME as references. In other words, are they reasonably
available to the public and so forth? I thought I would
offer that in case you have spotted it in other places.
DR. APOSTOLAKIS: Yes, I was wondering about Reference 2,
for example. How easily can one get the South Texas project IPE? I
MR. SIMARD: We still need to check that.
DR. APOSTOLAKIS: I understand. I am glad you said that.
But, well, does anyone have a more general comment on references?
Because I do.
CHAIRMAN POWERS: Well, I will certainly come up with some
specifics in connection with the expert judgment.
DR. APOSTOLAKIS: Yeah, it seems to me that the issue of
references is important, it is not just an afterthought, because people
will look at it and say, okay, this is acceptable to the NRC then. I
mean up to their standard -- I'm sorry -- since they have it here, so I
might as well do my best to comply with what NUREG/CR-2300, or whatever
reference is cited, is doing; therefore, I think we should be very
careful what references we put here. And I noticed that the quality
varies from section to section, obviously, depending on the authors of
the section and how handy it was for them to find references.
I don't know how to handle that. As I was telling Mary
earlier, perhaps you should have a team of two or three knowledgeable
analysts take the standard and look at the references only, for all
sections, and make sure there is a certain degree of consistency and
make sure that -- because, even though the standard is not specifying
methods, the moment you put references there, you know, you are sending
a strong message.
MR. EISENBERG: The standard way of handling references, at
least in standards writing according to ANSI, is that if it is in a
footnote, it is information. If it is a reference within the body of the
document, then it becomes a requirement. So we need to look at it in
DR. APOSTOLAKIS: Okay. There were very few footnotes, by
the way, they are all at the end. So this is draft 1, I assume.
Questions from anyone? This is now 3.3.2.
DR. BONACA: Here you get into some methods.
DR. APOSTOLAKIS: Here you get into methods, that's correct.
DR. BONACA: I don't have a problem with that if you say
that, you know -- and you're saying that these are accepted methods,
there are other ways to do it, or may be, but you have to explain how
you do it. So, I didn't have a problem.
DR. APOSTOLAKIS: I don't have a problem either, Mario, when
a method or particular way of doing something is widely accepted, as
everybody feels, you know, and I think, in this particular section,
indeed, using conditional split fractions or fault tree linking is
really what people are looking at. I mean it's not that you're
So, for this particular section, I think it's appropriate.
CHAIRMAN POWERS: The problem with that is that if you, in
fact, have any evolution of the technology, what does the guy do? If
this is what people are doing now and somebody comes along from, say,
MIT with a brilliant idea on how to do PRAs in a different fashion --
DR. APOSTOLAKIS: An unlikely event.
CHAIRMAN POWERS: I took the most obscure example I could
DR. APOSTOLAKIS: You're right, and I think it depends again
on the way things are presented. In this particular case, I would be
hard-pressed to argue that they should not cite these references,
because, you know, virtually 100 percent of the PRAs are done one way or the
other of these two.
Now, you have to allow the possibility that somebody has
another approach, maybe have a paragraph there or use the right words,
but when methods such as these are so widely used, in fact, it would be
a disservice to the user of the standard not to mention it.
DR. BONACA: Particularly when you put the word that says
acceptable approaches that involve this include -- which means you're
not excluding others, okay? So, I don't have a problem with that.
The only comment I have is that you do have in the
appendices, in the non-mandatory appendices, four or five pages of --
some of them are really important information that I would view
important in the standard rather than the appendix, and vice versa, I
found something in the standard, like, you know, section 126.96.36.199, that
is not as good as section 188.8.131.52.1 in the appendix. I would almost
flip them over.
So, some review of the two of them I think is appropriate.
Most of all, there is important information in the appendix, and that's
not in the standard. I would pull it out of the appendix into the
MS. DROUIN: Are you going to provide specific comments of
where you think that --
DR. APOSTOLAKIS: Are you going to provide specific comments
to me, gentlemen?
DR. BONACA: Yes, we will.
DR. APOSTOLAKIS: Okay.
Then you will get them.
Now, one of the sections on page 22 talks about sequence end
states, and this is where perhaps Dr. Seale's comment comes into the
This is now focused exclusively on plant damage
states. You may want to add a sentence at the end saying that, you
know, one may define an end state -- page 22, at the bottom, on the
So, you may want to add something there to the effect that,
for some applications, one may want to define his or her own end states,
although this is really a PRA standard, I mean the way PRA is understood
by most people.
So, I don't know -- I mean one decision you have to make is
whether you want to please everybody.
I know where you're coming from, Bob, but I mean when you
talk about PRA and level one, you really mean the plant damage states.
DR. SEALE: That is a problem. When you start talking about
using PRA to do maintenance scheduling, then it's no longer Tom's severe
accident assessment capability or tool, I should say.
DR. APOSTOLAKIS: Okay. Any other comments on accident
DR. APOSTOLAKIS: Okay. Success criteria.
DR. FONTANA: I do have a couple of comments. It looks real
good to me.
At the beginning, though, it says that plant success
criteria will be specified by the following high-level functions, core heat
removal and containment heat removal, and should there not be something
there with respect to neutronics, like the ability to shut the plant down?
Similarly, on page 23, you say that PRAs will have a sound
technical basis, should be based on scenario-specific thermal
hydraulic analyses, which is fine, but again, there are other analyses,
I think, needed in addition to thermal hydraulics.
DR. APOSTOLAKIS: I don't understand that. What do you
mean? Oh, in addition.
DR. FONTANA: Yes.
CHAIRMAN POWERS: There may be neutronic aspects of these
transients that have been overlooked in PRAs.
DR. SEALE: Yes.
DR. FONTANA: There's a tendency to not want to think about
CHAIRMAN POWERS: That's right.
At our fuel subcommittee meeting a few years ago, I guess,
Dave Diamond talked -- and he subsequently published a NUREG that
describes where he thinks -- he doesn't know, but with his experience,
he says, gee, there's a neutronic aspect to this problem that's largely
Normally, it's not in a PRA; it's not in the codes that
support the PRA.
DR. FONTANA: There should be some thought, I think, to
You get into level two. In discussing how you get to LERF
-- and you refer somewhere else to a NUREG -- is that the work that
MS. DROUIN: Yes.
DR. FONTANA: Okay.
The other thing, should you mention the -- is it necessary
to mention the use of severe accident codes for some of these accident
sequences, such as -- I know you haven't named any codes but on things
like MELCOR and MAAP. The severe accident codes I don't think are
mentioned here. It may be somewhere else, but I don't see them in this
DR. WALLIS: They're mentioned on page 58, for example.
DR. FONTANA: I didn't read page 58.
DR. APOSTOLAKIS: So, what you're saying is that, even in
this section, there should be some mention of these codes?
DR. FONTANA: When you start talking about level two, I
MS. DROUIN: This part of level one here. We're under level
DR. FONTANA: Look on page 24.
DR. APOSTOLAKIS: It sends you to section 3.3.4. In all
fairness, there is a line here that says definition criteria are
specified in paragraph 3.3.4.
DR. FONTANA: When you get to human actions -- and again,
this is discussed in greater detail in other places, but I like the part
where you say what has to be done is identify what the human needs to do
and the time that he's got to do it, and it seems to be a little short
on psycho-babble, which I think is a good idea.
DR. APOSTOLAKIS: Wait a minute now. Which section are you
DR. FONTANA: Look on page 25.
DR. APOSTOLAKIS: Twenty-five is not on human actions. It's
still thermal hydraulics, accident sequences. That's why there is no
psycho-babble there. The readers of this section will not understand
DR. FONTANA: The bottom of the left-hand column.
DR. BARTON: Left-hand column, last paragraph.
DR. APOSTOLAKIS: Yes. There is a separate section on human
Are you finished, Mario? Because the other Mario wants to
DR. FONTANA: The younger.
DR. APOSTOLAKIS: This is Mario squared.
DR. BONACA: All right. So the square is speaking up.
There is just an inconsistency it seems to me between the
definition of success criteria provided at the top of page 23, where it
essentially says that in certain cases you can use bounding
thermohydraulic analysis from the plant SARs, and a statement on page 28,
section 3.3.4.3, where it specifically says that you shall use realistic
criteria. And I believe that you need realistic criteria rather than
bounding, except for some specific use where you have very limited
I mean, the SAR accident analysis is absurd. That's not
analysis. They are just extreme cases that are pushed to give you
certain results. So, I mean, I don't know, I think that's two different
statements here, opposing statements. You see what I'm talking about,
top of page 23 that's bounding thermohydraulic analysis from the SAR
versus section 3.3.4.3, where it is prescribed, using the word "shall,"
that you have to have realistic criteria.
DR. SHACK: Yes, but it says bounding shall be used when
detailed analyses are not practical. I don't see any conflict between
DR. BONACA: Yes, but I see a conflict with the use of the
word "shall" in section 3.3.4.3. See, that's the only -- so, I mean,
either you accommodate and you explain under what conditions.
Review those two and you'll see there is an inconsistency
there that has to be resolved one way or the other.
DR. APOSTOLAKIS: Okay. So which section are we doing now,
DR. BONACA: Yes.
DR. APOSTOLAKIS: One comment on system analysis,
unavailability and unreliability, on page 31. One common mistake in
PRAs is that people find the average unavailability of one train of a
redundant system, and then they multiply or square it to get the
unavailability of the system, which is incorrect. Now, depending on the
degree of redundancy, they are underestimating the
unavailability by 33 percent or higher. And this is something that's
easy to do with code. That's why people do it.
But the truth of the matter is that there is another
dependency that comes into the picture because you are averaging over
time from one test to another, and, for example, the unavailability of a
two-train system is lambda tau squared over 3, not over 4, which would be
the product of the two unavailabilities.
I think there should be a warning here somewhere that this
is a potential pitfall. In fact I was talking to some of the people
from Idaho who are working on the SAPHIRE code, and they were already
aware of it and said you take the SAPHIRE cut sets and then you do the
calculations by hand. You don't have the code do it, because the code
cannot do it. It depends very much on the redundancy. So since it's
such a common mistake, it seems to me some warning here would be good.
And all the formulas are in the fault tree handbook actually, so it's
not -- so anything else? Anyone else? No.
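[The pitfall described above can be checked with a few lines of arithmetic. A minimal numerical sketch, with an illustrative failure rate and test interval, using the rare-event approximation and assuming both standby trains are on the same test schedule: squaring the average single-train unavailability gives (lambda tau)-squared over 4, while time-averaging the squared instantaneous unavailability gives (lambda tau)-squared over 3.]

```python
lam, tau = 1e-4, 720.0  # illustrative standby failure rate (/hr), test interval (hr)
n = 100_000             # time steps for a simple numerical time-average

# Instantaneous unavailability of one standby train (rare-event approximation):
# q(t) = lam * t, reset to zero at each test.
q = [lam * (i + 0.5) * tau / n for i in range(n)]

q_avg = sum(q) / n                  # average single-train unavailability ~ lam*tau/2

wrong = q_avg ** 2                  # squaring the average ~ (lam*tau)**2 / 4
right = sum(x * x for x in q) / n   # averaging the square  ~ (lam*tau)**2 / 3

print(right / wrong)  # ratio ~ 4/3: the shortcut underestimates by about 25 percent
```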
Then we move on to 3.3.5. By the way, the definitions of
dependent failures in chapter 2 and in here are not consistent,
but that's fine, we'll put that in the -- 3.3.5 is data analysis. The
only high-level comment I have is that there is a reference to
frequentist methods without really specifying when these frequentist
methods can be used for data analysis.
As far as I know, all the PRAs -- none of the PRAs has used
frequentist methods. In fact, on page 35, second paragraph on the
left-hand side column it says: Uncertainty shall be characterized by a
probability distribution on the parameter value using the subjectivistic
MS. DROUIN: George, where are you? What subsection?
DR. APOSTOLAKIS: Page 35.
DR. BARTON: 3.3.5.
DR. APOSTOLAKIS: Left-hand column, second paragraph:
Uncertainty shall be characterized using the subjectivist approach to
probability, which means a Bayesian thing.
And then if you go down to 184.108.40.206.3, verification of Bayesian
data analysis results, it says when the Bayesian approach is selected.
Well, you just told them they have to select it. And then on the
right-hand column it says verification of frequentist data analysis
Where did that come from? That's completely inconsistent.
It says when traditional frequentist methods are used. But you just
told them to use Bayesian methods. And frequentist data analysis methods
have failed miserably in PRAs. They have never been used. They may
have been used in other contexts. They may have been used to do other
things, some hypothesis testing here and there. But in terms of the
data that we're talking about here, they have never been used. They
will never be used. Okay?
So I don't want this to become again like the PRA procedures
guide where to avoid the controversy, let's put everything in there.
Okay? But I like the paragraph on the left-hand column, using the
MR. BERNSEN: And that's not just because of the reference
that that paragraph cites, I assume.
DR. APOSTOLAKIS: The reference exists because of my belief.
It's not my belief because of the reference.
Other than that, this is a good chapter. I mean, the data
analysis chapter is very well written except for that unfortunate
intervention by dark forces to talk about frequentist methods.
In fact, look at page 40, page 40, 220.127.116.11.5,
plant-specific: Estimates of equipment reliability parameters shall be
based on Bayesian updating. Okay? This is the voice of reason.
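[The Bayesian updating endorsed here is, in the simplest conjugate case, a two-line calculation. A minimal sketch with purely illustrative prior and plant numbers: a gamma prior for the failure rate, updated with Poisson evidence from plant-specific experience.]

```python
# Conjugate Bayesian update of a failure rate (gamma prior, Poisson evidence).
# Generic-prior and plant-evidence numbers below are purely illustrative.
alpha0, beta0 = 1.5, 10_000.0   # prior: alpha0 "events" in beta0 hours (generic data)
events, hours = 2, 30_000.0     # plant-specific evidence: 2 failures in 30,000 h

alpha1, beta1 = alpha0 + events, beta0 + hours  # posterior gamma parameters

prior_mean = alpha0 / beta0
post_mean = alpha1 / beta1      # posterior mean failure rate (/hr)
mle = events / hours            # frequentist point estimate, for contrast

print(prior_mean, post_mean, mle)  # posterior sits between prior mean and MLE
```

The update simply adds observed events to the prior's pseudo-events and observed hours to its pseudo-exposure, which is why the posterior mean lands between the generic prior and the plant-specific point estimate.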
Okay. Let's move on then to the next one, human reliability
DR. MILLER: What's the number?
DR. APOSTOLAKIS: It starts on page 43. I thought it was
You have it, Mario? Good.
There is one possible omission. On page 46, where you talk
about the HRA, in the middle of the left-hand column: The HRA shall
address both the diagnosis and execution portion and so on.
One tool that has proven to be very useful in the situation
of possible misdiagnosis is the confusion matrix that several PRAs have
developed. And some sort of mention of it or a similar tool would be
useful here. That really clarified a lot of things when it was first
introduced, as you know, in the early-to-mid eighties.
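[For readers unfamiliar with the tool, a confusion matrix in this context tabulates the probability that the crew diagnoses event j given that event i actually occurred. A minimal sketch; the event labels and all probabilities below are entirely hypothetical.]

```python
# Illustrative HRA "confusion matrix": P(crew diagnoses column event | actual
# row event). Events and probabilities are hypothetical, for structure only.
events = ["SGTR", "SLB", "LOCA"]
confusion = [
    [0.90, 0.08, 0.02],   # actual SGTR: mostly diagnosed correctly
    [0.10, 0.85, 0.05],   # actual steam line break sometimes read as SGTR
    [0.03, 0.02, 0.95],   # actual small LOCA
]

for row in confusion:     # each row is a probability distribution over diagnoses
    assert abs(sum(row) - 1.0) < 1e-9

# Total misdiagnosis probability for each actual event: one minus the diagonal.
p_misdiag = {e: 1.0 - confusion[i][i] for i, e in enumerate(events)}
print(p_misdiag)
```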
There's one more comment coming up.
At the bottom of page 45, you are saying -- the left-hand
column -- "Identification of recovery-related human actions shall be
limited" -- "shall be limited to those actions for which some procedural
guidance is provided or for which operators receive frequent training."
On page 49, which deals with internal floods, it says "Use
of extraordinary recovery actions that are not proceduralized shall be
justified in the analysis."
Those two statements are not quite consistent, from page 45 to page 49.
You see that? On the left-hand column, one short sentence
above the last paragraph.
MS. DROUIN: Yes.
DR. APOSTOLAKIS: Okay.
And I don't see why you should limit the recovery-related
human actions to those that are proceduralized or part of the frequent
training. I mean if you look at major incidents, the operators, most of
them, have behaved extraordinarily well, and I think, if the PRA
identifies a way that their operators can save the particular situation,
I don't see why you should tell them don't put that in the PRA because
it's not proceduralized.
DR. BARTON: George, I think you need to stay with what the
industry is geared to doing, which is to do things in accordance with
procedures, and if you start doing things that aren't in
accordance with procedures, the NRC doesn't look on that too kindly, and
you kind of get violations for doing things like that, but I think the
reference to doing things per procedure or that you're trained on is
probably applicable to the standard.
DR. APOSTOLAKIS: I would raise that question, which I think
is a good question, when someone submits to me an HRA and they say,
well, the operators will do this and this and that, and then the staff
comes back and says but they are not trained as part of procedures, but
I wouldn't put it in the standard. I appreciate the problem, but I
don't think the standards should exclude -- in fact, they may find
something that they will decide to make part of their training, because
it makes sense in the PRA, but if you forbid them from doing it, then
they may never get there.
MS. DROUIN: All I was going to point out is that this is an
excellent example of where -- because as you notice, this is Rev. 10
we're on, and we went through and wrote it and then read it and had
comments, you know, we would see things ourselves that we thought, well,
we may not want to limit it to that much, and then we might not have
gotten the shalls and shoulds and mays matched up perfectly, you know,
and I think this is an example.
DR. APOSTOLAKIS: And I fully agree.
MS. DROUIN: And this is a real good example of that,
because I do know we had conversation -- we said, well, you know, there
could be very valid reasons for taking advantage of actions that are not
proceduralized, and so, we should be able to give them that opportunity,
but we didn't get all the wording.
But I just kind of wanted to point out where you'll see a
lot sometimes of these disconnects.
DR. APOSTOLAKIS: Okay.
MS. DROUIN: So, when you find them for us, we're very
DR. APOSTOLAKIS: As I thought you would be.
Again, I don't disagree with you, John. I think that
question should be raised, but it should be raised when the study is
evaluated, not the standard.
Okay. The next one is 3.3.7, which is what? Flood?
Internal flooding? Any comments from any colleagues?
DR. FONTANA: That looked okay to me.
DR. APOSTOLAKIS: It looked okay to you. Therefore, we move
DR. SEALE: Well, the only thing about it, it struck me that
it's almost like an unusual thing -- I mean it's kind of a special breed
of cat that's thrown in here in the midst of all of these other things
that have much more general coverage, and I wonder if it wouldn't be
appropriate to put place-holders in here in the form of blank sections
that reflect the things of a similar ilk that you might want to plan on
adding to the standard down the road, things like fire analysis and
seismic analysis, in particular.
DR. APOSTOLAKIS: Are you saying it's too specific?
DR. SEALE: It's a different breed of cat, and all of a
sudden it's here, and I don't think that it's necessarily specific, and
as a matter of fact, you could add shutdown to that list, too, if you
wanted to, and I can see, if it is so unusual compared to everything
else that you need a rather detailed, specific list, that's fine, and
I'll bet you you'll need the same thing for fire, and I'll bet you Bob
Budnitz can write you the same thing for seismic right now, and when
somebody gets Dana Powers loose enough to do it, he could probably write
you the list for shutdown.
So, it's specific, but it's -- and maybe that's one of the
points you need to make.
DR. APOSTOLAKIS: I think it's tradition, Bob, that internal
flooding is part of the level two PRA --
DR. SEALE: Yes, I know.
DR. APOSTOLAKIS: -- for some reason, and internal fires are not --
DR. SEALE: I know, and that's part of -- you know, maybe
that's a tradition whose time has come.
DR. APOSTOLAKIS: Okay. Next one is quantification, and on
page 50 there is something that is very interesting, and I think it's
related to some comments that Professor Wallis will make.
"The quantification shall include an assessment of parameter
uncertainty and should also include an assessment of modeling
uncertainty."
Now, how on earth would one do that? How would one assess
modeling uncertainty? Do the level two stuff that Sandia did for the
phenomena? Invite experts and go through the whole exercise?
DR. WALLIS: Well, maybe ACRS should insist that these models
contain ways to assess the uncertainties in them.
DR. APOSTOLAKIS: Well, again it comes back to the earlier
comment, though, whether this standard should be used to advance the
state of the art or should ask people to do things that can't be done
with the present state of the --
DR. WALLIS: That's the big uncertainty, the big thing that
makes all the difference in the world, could well be the model
uncertainty.
DR. APOSTOLAKIS: That's very true.
So, it's not obvious how one does this. It may deserve its
own section, because there is work in progress here and there, and as I
say, what Sandia did, for example, in 1150 is a way of handling model
uncertainty, but my God, you don't want to have to do that for every
issue where model uncertainty is an issue, right?
DR. SEALE: Well, there is a section in here on evaluation
DR. APOSTOLAKIS: Where is that?
DR. SEALE: 18.104.22.168.3, page 51.
DR. APOSTOLAKIS: Yes. But it doesn't -- it says, "When
model uncertainty is calculated, all sources of reliable information
should be incorporated," but really, how are you going to do that? See,
that's the real issue.
CHAIRMAN POWERS: I think you hit upon one of the
difficulties, I think, you have with language in this document.
You frequently say take into account all things, take into
account all mechanisms. There's no way to know that you have done that,
and I think you need to be very careful about that "all" term.
Either define it as saying all of them that you can think of
or say all means something different than "all" from the dictionary.
MS. DROUIN: Just a note of clarification. If you find the
word "all" there, it is by mistake. We have tried on several occasions
to go through and do a word search and remove the word "all." I'm not
saying we caught them all.
CHAIRMAN POWERS: I would look very closely at 3.5. You
didn't catch any of them.
MS. DROUIN: But the intent is not to have that word in the standard.
DR. APOSTOLAKIS: Talk to Nathan about this model
uncertainty. He may give you more information.
Page 51 --
DR. FONTANA: Can we go backwards for a minute?
DR. APOSTOLAKIS: Which page?
DR. FONTANA: Page 49 on flooding.
DR. APOSTOLAKIS: We're going backwards.
DR. FONTANA: Yes.
DR. APOSTOLAKIS: Okay. That's floods again.
DR. FONTANA: On the right-hand column, right in the middle
of the page, it says "An area that has small or modest flood -- if time
to damage of safe shut-down equipment is greater than two hours for the
worst flooding initiator."
Is that some kind of agreed-on number? Is there a basis for
that, or is that just a good idea?
MS. DROUIN: There are numbers that we used all through the
standard, and in some cases, we did a better job in flagging "here's our
suggested number," and in some cases, we didn't actually put that word
in there. But usually where we put a value in, it's because that's what
is normally used out in the industry.
DR. FONTANA: That's the case here?
MS. DROUIN: Uh-huh.
DR. FONTANA: Yes. Okay. Thank you.
DR. APOSTOLAKIS: Okay. On page 51, just about 22.214.171.124.3,
exception, if only point estimate quantification is completed, that
point estimate shall be the mean. You mean one is free to make -- to do
a moment propagation and find the mean value in a rigorous way? Is that
what you mean? Instead of doing Monte Carlo, for example. Or one does
a point estimate and then one declares that point estimate is a mean
value, which would be wrong.
MS. DROUIN: No.
DR. APOSTOLAKIS: That's not what you mean. So it seems to
me that should be clarified a little bit.
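The distinction being drawn here -- rigorously propagating the parameter
distributions to obtain a mean, versus declaring a convenient point estimate
to be the mean -- can be sketched with invented numbers (the basic events,
medians, and error factors below are illustrative, not taken from the
standard):

```python
import random
import math

random.seed(0)

# Hypothetical top event: frequency F = A*B + C, where each basic-event value
# is lognormally distributed (median m, error factor EF = 95th/50th percentile).
def sample_lognormal(median, ef, n):
    sigma = math.log(ef) / 1.645          # since EF = exp(1.645 * sigma)
    return [median * math.exp(random.gauss(0.0, sigma)) for _ in range(n)]

n = 100_000
a = sample_lognormal(1e-3, 10, n)
b = sample_lognormal(1e-2, 3, n)
c = sample_lognormal(1e-6, 10, n)

# "Point estimate" built by plugging in the medians -- this is NOT the mean.
point_estimate = 1e-3 * 1e-2 + 1e-6

# Monte Carlo mean, actually propagating the parameter uncertainty.
mc_mean = sum(x * y + z for x, y, z in zip(a, b, c)) / n

print(point_estimate, mc_mean)   # the propagated mean is roughly 3x larger
```

Declaring the median-based point estimate to be "the mean" would understate
the result by a factor of about three in this toy case, which is the error
the clarified wording would rule out.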
And then we just saw that there was a line here that there
should be an assessment of model uncertainty, and then on page 51,
it says if model uncertainty is not evaluated, there will be sensitivity
studies. I guess that's part of an assessment, although again, I have
the same problem my colleagues have -- what does one do with the results
of sensitivity studies?
DR. POWERS: When you think about sensitivity studies, are
you thinking about one-parameter-at-a-time sensitivity studies?
DR. APOSTOLAKIS: See, that's a problem.
DR. POWERS: Or is it -- is a one parameter at a time
sensitivity study sufficient to meet the objectives of your work here,
or does that take more? And how do you know that it takes more?
MS. DROUIN: Are you wanting an answer, or are you just
making a statement here?
DR. POWERS: I'd sure like an answer.
MS. DROUIN: Oh.
DR. POWERS: I mean, you must have thought about something
when you said sensitivity studies, and --
MS. DROUIN: Where we were coming from in sensitivity
studies was more from not trying to impose an 1150 type of uncertainty
analysis on the PRA person, on the PRA.
DR. POWERS: Okay. So that means that you've somehow made a
judgment that an uncertainty analysis is not essential to have a PRA of
adequate quality for the uses that you want here? How did you do that?
MS. DROUIN: I'm sorry, say that again.
DR. POWERS: You somehow decided that an uncertainty
analysis is not essential to have a PRA of adequate quality.
MS. DROUIN: You mean a formal uncertainty analysis.
DR. POWERS: That's right. Yes.
MS. DROUIN: Yes. And Duncan, you know, you can jump in any time.
MR. BREWER: Let me think about it. I will.
DR. POWERS: Okay. And how did you decide that? I mean, it
seems to me that it has become very, very clear that a PRA number
floating in space by itself is not a useful quantity to us, okay. A PRA
number needs to have something wrapped around it that gives us some idea
of what the uncertainty is.
MS. DROUIN: I don't disagree with that statement.
DR. POWERS: But you guys have come up and decided that
there's -- that having this wrap around this number is not an essential
thing to have a PRA of adequate quality.
MS. DROUIN: No, all we're trying to say is that achieving
that, trying to do it through -- and again, we were just trying to stay
away from enforcing a detailed 1150 type uncertainty analysis, which I
don't think you need to go to that level. Now, maybe we went to the
other extreme.
DR. POWERS: Well, it seems to me that there's a principle
here that needs to be articulated in the standard somewhere, and that is
to say give me a point estimate. I will admit that in deriving the
point estimate, there's probably a lot of valuable information that I
can use in the PRA. But that bottom-line number that you get to
floating out in space by itself is not very useful.
MS. DROUIN: Okay. But remember that they only have -- the
standard is written not to require the formal uncertainty analysis on model
uncertainties. They still have to do it on the parameter uncertainties.
DR. POWERS: Okay.
MS. DROUIN: So you do have some wrapping there.
DR. POWERS: What I'm saying is that you need to give people
a little bit of perspective because you're going to have a variety of
people coming back that say, boy, it sure is easy to do a sensitivity
study because I dial a number and out comes the result on this thing,
and I plot it. That may be adequate for many applications, and you need
to give them some sort of guidance that says, I have gone through,
thought about applications, and here sensitivity studies are good, and
here, you've got to do something a little more.
And maybe that's an empty set out there, okay? Maybe you
end up saying, I would like to see an uncertainty study, but if you
can't do it, I haven't found a single application where the
sensitivity study can't in some way, if you fold and mutilate it
enough, be made adequate for me. But I think you need -- because we know
for a fact that there is a large segment of the community that is
intimidated by uncertainty analyses, for reasons that are beyond me
right now, but they are, and so they try to replace that with
sensitivity studies.
So I think you need to think about your objective, which is
words that have something to do with adequate quality or ensure adequate
quality and what it needs to do with regard to this uncertainty and
sensitivity analysis, and give them some guidance.
MS. DROUIN: Well, my question, though, that I would throw
back at you is, I don't disagree with what you've said, but what I would
like to know is where what we have written does not capture that,
because I go back, and it's been a while since I've read this part, but
I'm just scanning it right now, and I'm pulling things, you know, out of
context maybe, but you know, it says the sensitivity analyses shall look
at key assumptions or parameters, both individual or in logical
combinations, and then it goes on for a couple more paragraphs to
explain what you should be doing there. Now, maybe it doesn't get to
the level you want, but I would like to know where we fall short of --
DR. POWERS: Well, in that particular case --
MS. DROUIN: Because I don't disagree with what you're saying.
DR. POWERS: In that particular case, it seems to me it's
the "or logical combinations." I think the single parameter at a time
sensitivity studies are very, very suspect, to my mind.
MS. DROUIN: They can be, yes.
DR. POWERS: And you said either that or this logical
combination. I think you need to give some guidance that says, you've
got to do logical combinations if you're going to do these kinds of
applications or something like that, or maybe because you're in the
standards-writing business and you can gloss over this, you can say, you've
got to -- you should do logical combinations unless you can justify not
doing them.
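The concern about one-parameter-at-a-time sensitivity studies can be
illustrated with a toy model (entirely hypothetical function and numbers):
when two modeling assumptions matter only in combination, perturbing them
one at a time shows no effect at all.

```python
# Toy risk model (hypothetical): a result that depends on two modeling
# assumptions, x and y, only through their interaction.
def cdf_estimate(x, y):
    return 1e-5 + 1e-4 * x * y   # base value plus an interaction term

base = cdf_estimate(0, 0)

# One-parameter-at-a-time sensitivity: perturb each assumption alone.
one_at_a_time = max(abs(cdf_estimate(1, 0) - base),
                    abs(cdf_estimate(0, 1) - base))

# Logical combination: perturb both assumptions together.
combined = abs(cdf_estimate(1, 1) - base)

print(one_at_a_time, combined)   # OAT sees no effect; the combination sees ~1e-4
```

This is the case the guidance would need to flag: a study that only dials
one number at a time can report "insensitive" for assumptions whose joint
variation dominates the result.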
DR. APOSTOLAKIS: A good place where something -- the
presentation could change is page 51 on the right-hand column where you
talk about model uncertainty.
The first full paragraph starts by saying when model
uncertainty and epistemic uncertainty is calculated, then you go on and
so on, then the next paragraph says, if model uncertainty is not
evaluated, the sensitivity of the results to model boundary conditions
and so on shall be evaluated.
I would rephrase that and say, one way of evaluating model
uncertainty would be to do sensitivity studies and then again make a
statement of uncertainty. In other words, don't do the sensitivity
studies and then sit back and say, well, I did it and that's it. I
mean, there are many ways of handling model uncertainty. One is to use
all the information you have here, do deterministic calculations, and do
a NUREG 1150 evaluation using experts.
Now, I think very properly, you said that you don't want to
impose that on people; that's a major effort. If you want to do
sensitivity studies, you know, one parameter at a time, two parameters
at a time, that's fine, but then again, at the end, you have to tell us
what your evaluation of the model uncertainty is.
MS. DROUIN: Yes.
DR. APOSTOLAKIS: So the burden is on the analyst. If he
chooses to do it with sensitivity studies, well, he or she must defend
it rather than give it as an alternative. Instead of doing uncertainty
analysis, you can do sensitivity.
DR. KRESS: I agree with you completely.
DR. APOSTOLAKIS: And then you can't use the results of the --
DR. KRESS: And I think a PRA standard that does not call
for a rather sophisticated uncertainty analysis probably will not meet
the quality that you're going to need for a risk-informed regulatory
system. I think you're going to have to deal with uncertainty in an
entirely sophisticated manner, and I don't think sensitivity parameters
one at a time does it for you.
DR. APOSTOLAKIS: Mary, in the level 1 PRA -- I mean, Duncan
is probably more qualified to comment on this -- you really don't want
to scare the industry, because nobody is doing model uncertainty in the
level 1, so if they see a requirement like that, they might think this
is crazy. Would it be useful to identify one or two issues in Level 1
where such an analysis might be warranted? And I think -- I have seen
PRAs in the past where they did something like this, a quick
calculation, a model uncertainty in the context of reactor coolant pump
seal LOCA where the success criteria were not obvious, so they said,
well, you know, we don't know what to do, so here is one approach,
another approach, another approach, then they did a quick expert
elicitation, nothing like 1150, and they assigned weighting factors.
I think if you say that and give an example or two, you
might help people, because I can see people working on level 1 saying,
my god, model uncertainty? I mean, you must be out of your mind.
Nobody does that. For a level 2 type guy, that's routine because that's
what they've been doing for years, but level 1 is not common.
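The quick treatment described here -- alternative models with elicited
weighting factors -- amounts, in effect, to a discrete probability mixture
over the candidate models. A minimal sketch, with model names, predictions,
and weights all invented for illustration:

```python
# Hypothetical treatment of model uncertainty as a discrete mixture: three
# alternative models for a success criterion, each assigned an elicited
# weight. All numbers below are invented.
model_predictions = {"model_A": 0.2, "model_B": 0.5, "model_C": 0.8}
elicited_weights  = {"model_A": 0.5, "model_B": 0.3, "model_C": 0.2}

assert abs(sum(elicited_weights.values()) - 1.0) < 1e-12  # weights sum to 1

# The mean over model uncertainty is the weighted average of the alternatives.
mean_probability = sum(elicited_weights[m] * model_predictions[m]
                       for m in model_predictions)
print(mean_probability)   # approximately 0.41
```

The spread of the three predictions, not just the weighted mean, is what a
statement of model uncertainty would report.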
So maybe -- and you do give examples -- unless you have it
already. Are you ready to kill me here? No, you don't. You do give
examples in other instances, so maybe -- in fact, Dennis can give you
the reference, I remember, and there may be other cases, too. But I
think there are one or two or three at the most instances where you have
to worry about model uncertainty in level 1. So -- because I don't
think that -- if you pick up any IPE, you will not find anything like
that, right? Any discussion on model uncertainty, I don't think --
MS. DROUIN: I don't remember any of them that talked about
model. You certainly saw quite a few on parameter uncertainty.
DR. APOSTOLAKIS: I think -- 1150 I think did have one or
two issues from level 1.
MS. DROUIN: Yes. We did some --
DR. APOSTOLAKIS: Yes.
MS. DROUIN: -- modeling uncertainties in 1150 on level 1.
DR. APOSTOLAKIS: Okay. So, you know, just list a couple of
examples, two or three examples, so people don't get scared that, you
know, this is revisionist --
DR. POWERS: There were two elicited issues in 1150 on level
1, I think.
DR. APOSTOLAKIS: Was it the RCP LOCA one?
MS. DROUIN: We had some -- we had a couple that we went
through the formal elicitation, --
DR. POWERS: Yes.
MS. DROUIN: -- then we had some --
DR. POWERS: I think two.
MS. DROUIN: -- that were an informal elicitation of the project team.
DR. POWERS: That's right. I think there were several that
were done project team or one expert or something like that, but two
that went through the formal --
DR. APOSTOLAKIS: In the interest again of not scaring
people, if I had a problem, say, with OXY PC MOC, would it be okay to go
to 1150, if they have done it, and lift the uncertainty analysis from
there instead of doing nothing? And should that be mentioned here?
Because model uncertainty is more than uncertainty. I mean, if it has
to do with success criteria and my pumps are the same as the ones they
analyzed, why should I go and consult with experts? I would look at the
uncertainty distribution from 1150, maybe have my experts like Duncan
look at it and make sure that it's applicable to my facility, and then
use it as is, or would some minor modifications have to be justified?
Because the last thing you want to do is scare people, and model
uncertainty is not something that level 1 analysts are familiar with.
DR. WALLIS: I don't understand this desire not to scare
people. If something is important and needs to be done, then even if
you're scared, you do it.
DR. APOSTOLAKIS: If it was a major need, I would agree with
you, but there are very, very few instances in level 1 where you really
have to worry --
DR. WALLIS: If it's not important, that's a good criterion,
DR. APOSTOLAKIS: Right. It is an important issue --
DR. WALLIS: -- but not scaring people --
DR. APOSTOLAKIS: -- but it's important only in very few
instances. So I'm trying to make sure that the standard says that,
because in level 2, it's a much bigger issue, right?
DR. WALLIS: Being scared of doing something because it's
difficult to do doesn't mean you don't do it.
DR. APOSTOLAKIS: I wouldn't disagree with you.
Shall we move on to the level 1 and level 2 interface?
DR. KRESS: Yes. I did have a couple of comments on that.
On page 55, I just sort of had a policy question first. You
list some examples of characteristics that you would carry over. What's
your general policy in standards about examples? Should they just be a
few examples to give people an idea or should they be a complete list,
as complete as you can make it? Because I recognize these examples left
out some possible things that I might put in, and so I just wondered
what the policy is on this.
MR. EISENBERG: Typically in standards where we give
examples, not in this, but in others, they are characterized as typical,
not as all-inclusive.
DR. KRESS: Okay. You might want to make that a little
clearer because when I read it the first time, I thought, well, they
left this, this, this and this out, but then I went back -- well, maybe
they just want to give some idea, and it wasn't clear to me.
For example, your second bullet under those examples, I was
wondering why that wasn't the RCS pressure at the time of vessel
penetration rather than core damage. Are those two close enough
together to not worry about the difference?
DR. POWERS: Typically they're tremendously apart.
DR. KRESS: Yes. And I would have thought the more
significant one, if I were going to give an example, it would be a
vessel penetration, but that was just a thought, and there's other
things I might have had on there.
Another question I had about these examples is the first
bullet is type accident initiator. To me, that's just a name, it's not
a characteristic, and I was wondering what -- if it was really the same
sort of animal as the rest of the things and what it is about a type of
accident initiator that you actually carry over. You know, it's just a
name to me; it's not a characteristic. So that was just a general
question you might want to think about.
I guess on page 56, under 126.96.36.199, core damage frequency
carryover, it makes an interesting comment. It says the reason for
conserving the CDF is to allow capture of risk contribution from low
frequency, high consequence accident sequence. And you're talking about
the entire core damage frequency. That seems to be a little bit of a
disconnect. If you're calculating LERF, you only want the frequency
associated with the large early release, not the full core damage
frequency, and I had a little problem connecting that last sentence to
the first sentence. Your process of calculating LERF only deals with
frequency associated with those large early release sequences, and not
the full core damage, and I thought there was a disconnect there. This
may be something you can look at.
That's all I had on that section.
DR. APOSTOLAKIS: Okay -- 11.2, technical elements, any comments?
DR. FONTANA: Yes. The questions I had when I was talking
about severe accidents back in Section 3.3.3 are adequately answered and
then some in this section. You have covered everything I had, I think.
DR. WALLIS: I was assigned to look at this whole piece
DR. APOSTOLAKIS: But important piece.
DR. WALLIS: I've just got two points. I read on page 57 a
discussion of reliability of systems whose primary function is to
maintain containment integrity, and they mentioned fan coolers and they
are supposed to assess the reliability of these systems.
Well, when you assess the reliability of a fan cooler you
cannot just ask the question is the fan cooler reliable? You have to
say what is it connected to? It is connected to piping. There is a key
fill tank or something. There is a pump. There's all kinds of ups and
downs which are different depending on plants and often difficult to
figure out in the piping, and then when there is a transient when the
pump comes on you have to ask was there a void in the pipe.
There is a whole lot of technical analysis necessary before
you can tell whether the fan cooler is challenged or not. Is your PRA
guy going to be aware of this? Who is going to do it for him or her?
How does it fit in? These are technical details in the model which you
have to do before you can answer the question is it reliable.
You can't just ask the question in a vacuum -- is it
reliable?
This is probably a generic question though about a lot of
different issues, not just this one and I wonder if the PRA guy who is
focused on -- who studied under George and knows all about Bayes and
frequency and probability and if there is any difference between them
and all that, does the person know enough to ask the right technical
questions at the same time.
DR. APOSTOLAKIS: Anything else?
DR. WALLIS: Yes.
DR. APOSTOLAKIS: Go ahead.
DR. WALLIS: A similar, sort of related question in a way.
When you get on to page 60, the analyst is asked to analyze things like
redispersal and heat transfer following high pressure melt ejection.
Well, that sounds like a very interesting theatrical event. I am not
sure that the analyst is in the position to make much of an analysis of
this event and just guessing probabilities doesn't get you any wiser.
CHAIRMAN POWERS: I think there is a fairly well --
certainly the NRC Staff has developed a fairly comprehensive methodology
for looking at the salient aspects of debris dispersal.
DR. WALLIS: So there is enough of a basis --
CHAIRMAN POWERS: If it doesn't exist now, they are pounding
down the path to probably give them the technical basis to do that.
DR. WALLIS: So I guess these questions are driving at the
same thing is the PRA cannot be better than your understanding of these,
you know, the modeling of some of these physical phenomena.
There have been cases where I think PRA people have been tempted
and have actually succumbed to using judgment in assessing numbers when
they perhaps needed better technical analysis to base those numbers on.
CHAIRMAN POWERS: Surely not -- they wouldn't do something like that.
DR. SEALE: Surely you jest.
DR. APOSTOLAKIS: Finished, Graham?
CHAIRMAN POWERS: Well, let me continue on --
DR. WALLIS: I don't know the answers. I have said --
CHAIRMAN POWERS: Let me continue on Graham's question about
the high pressure melt ejection and dispersal. I think it comes up in
this containment loading area but then it is neglected in the source
term area and I wondered why.
If I am going to throw core debris around, I am going to
release fission products, I guarantee you.
DR. APOSTOLAKIS: Comment noted.
MS. DROUIN: Comment noted.
DR. APOSTOLAKIS: Radionuclide release?
DR. KRESS: Ah -- which section is that in?
CHAIRMAN POWERS: It ends on 65.
DR. KRESS: I had some questions about that.
On page 62, in the set of bullets on the right-hand side,
one of the bullets you asked for is the energy with which the release is
discharged to the environment.
Of course we all know that you have to use that energy in
calculating the plume rise to get the further dispersal as a consequence
model but I think there's some question about what you mean by that
energy. Is it really the temperature of the released gases or is it
energy and does that energy include radioactive decay?
Are you really talking about energy, or are you talking about something
which would be momentum and internal energy or enthalpy? There is some
question about what you mean by that, and it is the input to the
atmospheric dispersal.
DR. SEALE: It is the thermal plume.
DR. KRESS: It is a thermal plume type thing but I am not
sure it is very clear what you mean when you say energy.
DR. WALLIS: Well, you really need all the relevant properties --
DR. KRESS: You need the properties --
DR. WALLIS: -- which you need to make a calculation of what
it does. Energy is just one.
DR. KRESS: Yes, so I think that might be a good thing to
In the last bullet there, it also asks for the size
distribution of the aerosols and generally if you are releasing things
your source term is a time variable. It is also true that your size
distribution is a time variable.
It is not clear to me that this is asking for a time
variable function, that you want the size distribution as a function of
time -- or do you want that, or do you want some average size over time?
So that one wasn't clear to me what you are asking for there either.
On the next page, I wondered why your Table 3.4.4 didn't
include some actinides. I didn't see any. They look like they are all
fission products to me.
DR. WALLIS: Well, they have got neptunium and plutonium.
DR. KRESS: Oh, yes. I see them now. Right. Those are the
ones I was looking for.
CHAIRMAN POWERS: And they have Americium.
DR. KRESS: Okay.
DR. UHRIG: There are some items missing here, just for
instance in Group 2, Br is not listed over here as being within the
group, what particular isotope you are talking about, same with Se in
Group 4. Group 6 has the Pd missing. Group 7 there apparently is --
Americium is Am instead -- it should be Am instead of Cm.
CHAIRMAN POWERS: Well, that is curium. Curium is very --
DR. UHRIG: Curium is not listed over here in the --
CHAIRMAN POWERS: It turns out to be relatively important,
one of the actinides.
DR. UHRIG: Was it Cm instead of Mc?
DR. FONTANA: It's Mc.
CHAIRMAN POWERS: Mc is a misspelling of mendelevium, which
is not very important.
DR. SEALE: Curium is down in 7.
DR. UHRIG: Apparently in Group 7 there's some mix-ups.
DR. KRESS: This is a little bit of my problem I had
earlier, Barry, in that when you are doing a Level 2 LERF you don't need
all of these.
These include things you are going to get from some
core-concrete interactions and they don't come out of the core. They
are in the LERF part of the accident, so I am confused whether this is
something you are going to use as an extension later on but I didn't
think you really need all these for a Level 2 LERF. You need them all
for the Level 2 PRA.
MS. DROUIN: Correct. I mean what you are seeing here is
that where we started, like we did in many places of the standard we
didn't try to recreate stuff and so we had already a document that went
through in detail to Level 2, so we tried to chop out the places and we
might have, as you see --
DR. KRESS: Overdone it.
MS. DROUIN: I mean not chopped enough.
DR. KRESS: The question I have, I think -- I hate to show
my ignorance but at the top of that same page, on 63, on the right-hand
side, you talk about a clearance class. I am not sure I know what that
is. Maybe that ought to be defined somewhere.
Do you know what that is?
DR. SEALE: Sure. That has to do with when you breathe it in.
DR. KRESS: Okay. It wasn't all that clear to me. Now that
you mention it, I recognize what you meant.
CHAIRMAN POWERS: Kind of a strange thing to bring up.
DR. KRESS: Yes. Yes, it was the context where it was. If
I had been reading it in terms of transport I might have --
There was a statement made somewhere -- and I'm looking for
it now. I can find out where it is. Hang on just a minute. It was
back on page 56. I let it get away from me. I'll find it in just a
minute.
DR. SEALE: First you forget names, and later you forget why
you wanted to know the names.
DR. KRESS: It's page 65. I hadn't come to it yet.
There's a strange comment in your examples, on the left-hand
side there, the very last part of the italicized part says that higher
retention efficiencies were attributed to sequences involving low
coolant system pressure than those involving high pressure.
I think it's just the inverse of that. I believe somebody
got those mixed up.
CHAIRMAN POWERS: I puzzled over that for a good 10 minutes,
because typically, you're absolutely right, we think the high-pressure
sequence is the ones that retain the most, but they also release the
least, and so, I thought maybe somebody had done a very novel
calculation in which they had actually integrated those two.
MR. SIMARD: As I understand, this is the kind of comment
where you found something confusing.
These are all things we're going to get in writing, and I
know what would be very helpful to us today is to have perhaps questions
to help you understand an approach we took or something like that,
especially if we find ourselves running short of time and there's still
quite a bit more to go through.
DR. APOSTOLAKIS: We will finish by five no matter what.
DR. KRESS: I only had one more on this section.
Maybe I missed it in my reading, but I really didn't see any
comment on including non-radioactive aerosols that come from the core,
whether or not they might be important to LERF.
I don't know that they are. They certainly might be
important to some parts of LERF. Depends on how you define it. But
they're not listed as important for things you need for transport
Was some thought given to what to do about non-radioactive
aerosols that come out of structural components?
MR. SIMARD: I'm not sure. That's a question we'll have to
bring up with the people who are most -- you know, as we pointed out
earlier, there were a couple of dozen people perhaps involved in
different sections of this, and we have only a couple of representatives
So, I don't know that, you know, those of us here can answer
that now, but we'll bring it up before the right folks.
DR. KRESS: My understanding was that the non-radioactive
aerosols actually had a mass that overwhelmed the --
CHAIRMAN POWERS: And then they're going to dictate your
particle size and everything else.
DR. KRESS: They're going to dictate a lot -- particle size,
CHAIRMAN POWERS: The existing text seems to think they only
come from core concrete interactions. That's just not true.
DR. KRESS: That seems to be something we ought to look at.
DR. APOSTOLAKIS: Finished, Tom?
DR. KRESS: Yes.
DR. APOSTOLAKIS: Due to the lateness of the hour, I propose we
skip the expert judgement.
CHAIRMAN POWERS: I have some qualms about the approach.
I've already spoken to you about the problems of definitions
and the relative role that the appendix and the actual text takes here,
but I'd like to speak to another issue.
This section on expert judgement seems to me that it
advocates a specific approach that someone, the author, found very
useful for doing expert judgement, and he says do it this way, and he
lays down a prescription for doing that at the expense of telling me why
this way and not some other way, and then, laying down the prescription,
he just tells me do this.
He doesn't tell me why I'm trying to do this. There's no
requirements here. There's just a prescription on -- you do it this way
because we find it useful in a seismic hazard analysis or something like
that.
I'm willing to bet that there are lots of other expert
judgement approaches that people would like other than this particular
one, and it's elaborated in an appendix that's non-mandatory, and I'd
have no troubles with the appendix as an articulation of an approach,
but it seems to me that, in a standard, what you want to set down is
what are you trying to achieve when you go through expert judgement, and
what are the elements that should appear when you use that expert
judgement, and unless there is some very good basis, not come back and
say use this particular method when I know for a fact the Sizewell
people have their own method, they're very attracted to it, the
Europeans are very attracted to their methods, there are some
bald-headed guys off in New Mexico that like their methods, there's a
fellow at the University of California at Santa Barbara that is very
proud of his method, which are different than this, and so, I guess I
ask why this method and not anything else?
MR. BREWER: I think that's a very useful comment, and I
appreciate you making it.
CHAIRMAN POWERS: I mean there's a lot of nits about the
rest of it, but I think that's really the fundamental problem with 3.5:
it goes to great lengths to castigate somebody that's a proponent of
a particular view and expert opinion and it, in itself, is a proponent
of a particular approach.
DR. APOSTOLAKIS: We need an evaluator.
CHAIRMAN POWERS: Yes, we need an evaluator and a technical --
DR. APOSTOLAKIS: And also, again -- I fully agree with what
Dana said, but I think, instead of referring to levels A, B, C in the
attachment which were developed in a different context, the seismic
hazard analysis, it would be useful again to talk about when one needs
expert judgement in level one PRA, give an example or two, level two,
and what are the essential elements, which I think Dana covered
already, and there are a number of ways of handling certain things, you
know, go out and look at them.
By the way, not all of them are very good, but I think that
would be very, very useful. I think the author of this assumed too
much, assumed too much, and jumped right into it and kept running, and I
know who he is, and he knows that I know this.
CHAIRMAN POWERS: There is another aspect of this expert judgement issue.
In fact, expert judgement shows up in three different
chapters in here, and there's nothing wrong with that, but it seems to
me that, in the section that actually deals with it, you ought to
reference that I'm going to talk about later on -- for instance, you
talk about it in connection with documentation, and it's quite a
thorough thing, but if I were reading this, I might not be reminded to
also go look at the documentation section on expert judgement, and also,
subsequently, in chapter five, there's a section on expert judgement.
I think, if you're not going to collect them all in one
place, at least provide a reference that says, all right, now, go look
at this other section, because you've got more to do.
DR. APOSTOLAKIS: I really want to emphasize that the
questions that need to be addressed have to be put in here.
For example, I couldn't find anywhere the reason for the
evaluators, and I know what the reason is. The reason is that this
particular group that developed this did not like the idea of assigning
equal weight to expert opinions.
So, instead of jumping in and saying here is an evaluator,
you say, when you do an expert judgement exercise, be careful when you
aggregate the opinions. In particular, the issue of equal weight is
something you have to think about, because there are pitfalls there.
Now, there is a way of handling it in that reference, if you want to
say that. That will depend, again, on how you handle references, but
now you have set the stage for people to understand what the issue is
and why certain things have been proposed.
Otherwise, it makes no sense to people and they get
offended, because they say you're forcing me to do this, and I don't --
The issue really should be stated here, not the solution, and
the issue is that of equal weights, which in the seismic area led to
paralysis for more than 10 years, because of one expert. One expert was
way out there, and because the NRC dictated equal weights, the curves
from Livermore were all shifted to conservative values, and now the
industry and everybody had to live with it. Okay.
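The equal-weights pitfall Dr. Apostolakis describes can be sketched in a few lines of Python. The numbers below are hypothetical illustrations, not the actual Livermore hazard estimates: a single outlier expert drags an equal-weight mean far more than an aggregation that down-weights him.

```python
# Illustrative sketch of the equal-weights aggregation pitfall
# (hypothetical numbers, not actual seismic hazard data).

def aggregate(estimates, weights):
    """Weighted arithmetic mean of expert estimates."""
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, estimates)) / total

# Five experts estimate an exceedance frequency (per year);
# expert 4 is an extreme outlier.
estimates = [1e-5, 2e-5, 1.5e-5, 1e-3, 2e-5]

equal = aggregate(estimates, [1, 1, 1, 1, 1])
# Hypothetical performance-based weights down-weight the outlier.
weighted = aggregate(estimates, [1, 1, 1, 0.1, 1])

print(f"equal weights:       {equal:.2e}")    # dominated by the outlier
print(f"performance weights: {weighted:.2e}")
```

With equal weights the outlier alone pushes the aggregate up by roughly a factor of five, which is the kind of conservative shift the transcript attributes to the Livermore curves.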
MS. DROUIN: But we fixed that problem.
DR. APOSTOLAKIS: Anything else on expert judgment?
MR. SIMARD: May I ask your opinion on something? The
material that we have put in the non-mandatory appendix, the sort of
explanatory material on this subject, do you find it useful, would you
recommend that we keep it in here?
DR. APOSTOLAKIS: I think it creates more headaches.
CHAIRMAN POWERS: It is like a fingernail on a blackboard to me.
DR. APOSTOLAKIS: Because people -- I think the reaction,
and I have heard other people have a problem with that, too, it does not
say why you may need a technical integrator or a technical facilitator,
and it really does an injustice to the method itself, because I know how
many hours of debate went into this and why people came up with the TFI
and TI, because there were some important, real issues that had to be
addressed, and that is not here. So not only -- I mean it makes people
upset because they think that here we come on our white horses to tell
them what to do, without telling them why. Why was there a need for an
integrator? Why was there a need for a TFI?
So, in its present form, I think it does a disservice both
to the standard and to the method. If you had an appendix that covered
a number of methods, then maybe things would be easier.
MR. EISENBERG: Do you feel something like that, which is
explanatory in nature, is useful -- or is it not?
MR. SIMARD: Yeah, I think we have got the comment.
DR. APOSTOLAKIS: I think it is a matter of communication.
MR. SIMARD: It is explanatory -- I will talk to you later.
DR. APOSTOLAKIS: We have Chapters 4, 5, 6, 7. I hope we
are going to go faster on these.
DR. SEALE: We will go real fast on 4.
DR. APOSTOLAKIS: Dr. Seale, Chapter 4.
DR. SEALE: Chapter 4 reflects, almost in a reduced image --
well, certainly, in a reduced image -- the things that are in Chapter
3. It is a good list, though, because there is a tendency to lose the
requirements when you go through Chapter 3, among all the words that are
in there. And what the documentation does is essentially specify them.
Since it reflects things that are in Chapter 3, it certainly
deserves another good tight scrub once you have done the revisions to 3.
There is one particular suggestion I might make. If you
look on page 72 under 4.4.2, mapping of Level I sequences: for accident
sequence mapping, the following information shall be documented -- a
thorough description of the procedure used to group individual accident
sequence cut sets into plant damage states.
Now, if you added just a little bit to that, that is what
Graham was talking about when he said, who looks at this thing to see
whether or not the actual actuation of the fan cooler is going to give
you a hammer problem -- you know, what are the technical consequences,
not the PRA consequences, of the actuation of this piece of equipment?
That might be a very good place to make some comment to the effect that
you need that technical assessment.
DR. APOSTOLAKIS: Would you also include the comment that my
students could not do it? That's what --
DR. SEALE: Well, I don't know, you could probably get some
students that know something about mechanical engineering, George.
CHAIRMAN POWERS: It's MIT.
DR. SEALE: And that kind of thing goes through there. But
it does need a good scrub after you have done 3 over again. But it is
-- it is daunting when you look at it because there is an incredible
list of material that people are going to have to put together in order
-- we all know that, but still it is impressive to see it in one place.
That's all I have to say about 4.
DR. APOSTOLAKIS: Configuration control. I'm sorry.
DR. KRESS: On page 72, once again, under 4.3.9, they say
that the entire CDF needs to be carried over. If you are talking about
LERF, I think that is a mistake.
DR. SEALE: What page?
DR. KRESS: Seventy-two. The LERF itself only includes the
contributions from the core damage that lead to LERF, not the entire
core damage. We need to think about that.
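Dr. Kress's distinction can be sketched as follows; the sequence names and frequencies are hypothetical, for illustration only. LERF sums only the core-damage sequences whose plant damage state leads to a large early release, not the entire CDF.

```python
# Sketch of the CDF / LERF distinction raised by Dr. Kress
# (hypothetical sequence data, for illustration only).

# (sequence name, frequency per reactor-year, leads to large early release?)
sequences = [
    ("SBO-1",  2e-6, True),
    ("LOCA-3", 5e-6, False),
    ("TRAN-2", 1e-6, False),
    ("ISLOCA", 4e-7, True),
]

# CDF carries over every core-damage sequence...
cdf = sum(freq for _, freq, _ in sequences)
# ...but LERF includes only the large-early-release subset.
lerf = sum(freq for _, freq, early in sequences if early)

print(f"CDF  = {cdf:.1e}/yr")
print(f"LERF = {lerf:.1e}/yr")
```

Carrying the entire CDF into the LERF tally, as the draft wording on page 72 could be read to require, would overstate LERF by the non-early-release contribution.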
DR. APOSTOLAKIS: Okay. Can we move on to PRA configuration control?
DR. UHRIG: Basically, this is pretty straightforward. The
only comment I had here was reference to the two year update period.
There are places where this is probably appropriate. There are places
where it is probably not adequate, and there are places where it is
probably more than adequate. And the question is, is there anything
holy about the two years? Is that just a general timeframe in which
you would expect to update the PRAs? Or every time that you have
a major change, would it be appropriate to --
DR. BARTON: Is it because that is when you update the FSAR?
MS. DROUIN: There was a lot of discussion on the project
team of how often that should be, and the numbers were all over the
place. And then I don't know if we came to a final consensus where -- I
am using the word consensus meaning unanimous within the project team,
but my recollection is the two years came from, because about every 18
months is when you go through your refueling outage, and it is during
that timeframe that you are going to see major changes in the plant.
DR. UHRIG: So then you make the change after the refueling outage?
MR. BERNSEN: You didn't use an expert judgment process on that?
DR. APOSTOLAKIS: Anything else? Dr. Miller, PRA review
DR. MILLER: Let's see, I have three comments. It hit me
right between the eyes, the first sentence. It said there is a
requirement for indoctrination in the PRA process.
DR. MILLER: So I said -- I said, well, I will look up the
list of definitions, what does indoctrination mean. I didn't find it.
So I think it should be explained a little bit, what does that mean.
DR. APOSTOLAKIS: Well, there are high priests.
DR. MILLER: Does that mean --
CHAIRMAN POWERS: Where? I haven't seen one of those.
DR. MILLER: Does indoctrination mean going to MIT and
taking a class? I don't know.
CHAIRMAN POWERS: Shades of Alvin Weinberg.
MS. DROUIN: I want to just talk to that one a little bit
because this was a sentence here that had a lot of discussion within the
project team, this particular sentence.
DR. MILLER: For an educator, by the way, indoctrination is --
MS. DROUIN: And a lot of debate over what that word should
be. And, finally, I think, you know, it was probably 8:00 at night and
we just finally just said, okay, we will use the word indoctrination.
But we would appreciate any kind of comment in terms of a better word.
What the intent of that sentence was is that we felt it was
important that the peer review team have a knowledge of the whole
process of a PRA, of the different pieces and elements, not that they
had to be expert in all of it, but that if, for example, you brought
someone who was a thermal-hydraulics expert --
DR. MILLER: Oh, that would be work that you have, yes.
MS. DROUIN: That he would have --
DR. MILLER: It is late, I can say anything I want to now.
MS. DROUIN: That he or she would have an appreciation or
general familiarity of where their part fits in.
DR. MILLER: Right. I understand now what you mean.
MS. DROUIN: And so that is what the intent of that sentence
was trying to get to, and we had a lot of trouble trying to phrase it,
so any --
DR. APOSTOLAKIS: But the review team will include a generalist?
MS. DROUIN: Yes.
DR. APOSTOLAKIS: A generalist is a guy who is an expert at
the whole PRA, in the sense that he or she understands the whole process.
MS. DROUIN: Right. That is --
DR. SEALE: But I think that there is another aspect to
this. You are referring to the person you bring in as a subject matter
specialist from one of the technical areas. But what about the person
from the public that you bring in to help you review this thing? It
seems to me that the word indoctrination could be one of the most
offensive terms you could --
DR. APOSTOLAKIS: They will not bring members of the public
to the peer review. But I think the word indoctrination should be
replaced. Let's go on.
Mr. Miller, anything else?
DR. MILLER: I have got a couple of more.
DR. APOSTOLAKIS: Yes.
DR. MILLER: The second, you have very strict requirements
on numbers of years of experience. That could be an area where you want
to think through your choice of words, shall and may and should. What
if I have 15 or 14 years and 364 days, you know?
DR. APOSTOLAKIS: And I have been doing everything wrong --
CHAIRMAN POWERS: Is that the voice of experience?
DR. MILLER: There's various places in there like that.
The third is in the level of detail. You have the same
paragraph repeated at least eight times in that document, and the
paragraph reads, "If PRA methodology documentation does exist, then ..."
or if it doesn't exist, then you should do something.
Is there a way to make that so it's generic? Section 6.5,
particularly, kind of mirrors the entire section 3.3, I think, and it
says the same thing over and over. There may be a way to simplify that.
MS. DROUIN: Yes. I mean, usually, when we saw those kinds
of things, we tried to pull that into chapter one in the general
requirements, and this is probably a place where we could have done a
better job.
DR. MILLER: I think chapter three could be shortened by a
factor of three if you did a little thinking about it, and it would make
it easier to understand what you mean.
DR. APOSTOLAKIS: Okay.
Now, the last one is chapter seven, which I understand is
soon to become chapter one or two? You said you're going to reverse the
order.
MR. SIMARD: Yes.
DR. APOSTOLAKIS: Dr. Bonaca, any comments?
DR. BONACA: First of all, the fourth one, which has to do
with assessing the adequacy of the PRA -- that's exactly what's being
done anyway. I think your examples are very good. I like that. I
think it's on target.
The only question I have is regarding the section where you
are questioning the adequacy of the standard and the process that you
describe to assess that adequacy.
We discussed that before, necessary and sufficient, and I
think that the amount of space dedicated to that is totally inadequate.
I mean we discussed it already.
That is the centerpiece of how you handle the standard, and
most of all, how do you bring back to the standard possible issues of
inadequacy of the standard identified by some user or even changes that
are significant to the standard?
I mean there has to be some kind of a clearing house of some
of the issues back to the standard itself, okay, and this is not
described at all.
MS. DROUIN: In regards to your comment, for someone who did
a lot of the writing on this chapter, I really appreciate the comment,
because our reaction has always been this is brief, but as the writer,
it's hard to go back and say, okay, where can I, you know, expand it and
make it better?
So, any specific comments you can give of where, you know,
it would help, and specifically what it would help to elaborate on, I'd
appreciate it.
DR. BONACA: One thing I would like to say is that, you
know, you use examples all through the section to -- which is very good.
It's a very specific example you can walk through and understand piece
by piece, okay? You can provide an example for this, and I will provide
two types of examples.
One is where you have an inadequate definition or detail, and
that's not a shortcoming of the standard -- it simply doesn't have the
details -- so you can provide an example.
The other one is where you have a potential limitation of
the standard, okay, as an example, and therefore, that could be brought
in as one that would have to then come back to the standard for an
improvement.
MS. DROUIN: Right.
DR. BONACA: Okay. I think just two examples would be
sufficient, and then, later on, you'll have to think about the process
that the ASME wants to use to bring back those improvements that need to
be read into the standards.
But again, just an example would be sufficient to address
this section. Right now, this is quite unclear, you know, what kind of
issues you're talking about.
And the other thing I want to say is that clearly this is an
area that will give users a concern. I have spoken with two or three
people in the industry, and they are telling me they're concerned about
what does it mean to us insofar as evaluation, update, resources?
That's where the question-mark is, because this process has
never been used before, while all the steps before of assessing the
validity of the PRA have been done. It's a routine activity.
MS. DROUIN: The other question I would ask, just curious
as to your reaction -- I mean, you know, we've explained that we thought
it was a good idea -- because of the comments we've been getting -- to
move this chapter seven, you know, in front of chapter three.
Do you all think personally, from your perception, that
would also be a good idea?
DR. BONACA: When you read that, you understand that, in
almost every assessment, you're looking at a delta between what you have
and what you're evaluating, and that puts into perspective the whole
issue of uncertainties and so on, because, when you're looking at these
deltas, the uncertainties become less important.
So, for the approach, I think it drives the whole standard,
and I would recommend that.
DR. APOSTOLAKIS: Any other comments from the members?
DR. SEALE: I'm impressed with how far the process has gone
in the time that they've been working on it.
DR. APOSTOLAKIS: Let's not lose sight of the fact that this
is a good standard. Well, it has the potential of being a very good
standard that will be very useful to the staff of the Nuclear Regulatory
Commission and the nuclear industry.
Naturally, when we discuss it, we tend to focus on negative
comments and so on, but believe me, if you didn't hear any comments on
the overall approach, that means it's okay -- otherwise, you would have
heard.
As you probably noticed, the committee is very much
interested in this subject. We've had detailed comments on just about
every chapter, and perhaps some of them you found very boring, assuming
you were not prepared for this, but you know, for the record, we had to
discuss them even among ourselves.
So, I would like to thank you very much for coming here and
participating in this discussion, and you will certainly receive a copy
of the letter we will write to senior NRC management regarding this.
I don't think we will write a letter to the ASME, because we
don't do things like that, but we will have you in mind when we put part
of the letter together.
Mr. Bernsen, you want to say something?
MR. BERNSEN: I wanted to thank you for the level of
interest and participation. I'm glad we waited until we had a product
to review with you.
We were thinking about coming in earlier and giving you a
prediction of when we'd have something, but I didn't think that was a
good idea, because we needed to have something to work on.
I am impressed with the level of effort and attention you've
given it in your general overall evaluations, and I think all of us
need to continue to recognize some fundamental issues. I get the
feeling that you, as practitioners, think it's a good standard, and that
makes me very comfortable, because I haven't been practicing in the
field for a number of years.
We need to continue to consider the way it's going to be
used, the way it's going to be interpreted, and we need to make it as
helpful and useful as possible, both in the regulatory arena and also on
the part of the utilities and other users.
We also hope that this series of standards that we're
developing will, in fact, become internationally recognized standards,
and we're working hard to bring the whole world community into this
thing so that it will be recognized from the outset as an international
standard, and so, we are going to be continuing to encourage a lot of
input from the outside world, if you will, outside of the U.S. practice,
and that's another area where we need some guidance on what we can do
to globalize this instead of making it appear, as it does now, to be an
NRC-U.S. utility process -- the practice, the references, and things of
that sort are so national in scope.
So, we need to consider what the rest of the world thinks, too.
Anyway, thanks again for your input. It's really been very helpful.
MS. DROUIN: I just had a question, but now with an NRC hat on.
Because you made the statement that your comments would go
to, you know, the NRC, when would those be coming out?
DR. APOSTOLAKIS: We will write a letter this week, and
typically it takes another week or so for it to be forwarded to its
ultimate destination. So, you'll probably have something in 10 to 15
days from today, but as you know, you are very welcome to come and sit
in there while we're deliberating the letter.
So, Saturday, you are very welcome, and on that happy note,
I will turn it back to you, Mr. Chairman.
CHAIRMAN POWERS: Thank you.
Again, I thank you for a very fine effort and appreciate
your patience for sitting here and getting castigated and chewed on,
but we obviously look upon this as a tremendously important activity,
and we know that you're operating on a volunteer basis, and we
appreciate your efforts on this.
With that, I'm going to declare a 15-minute break. What I
want to accomplish when we come back is I want to go quickly through
some feedback to authors on the letter for the Westinghouse best
estimate and the 10 CFR 50.59, and then I would like to get two past due
letters out, if we can.
Will we have an AP-600 letter available to us? We have a
draft of the core research capabilities.
Very good. Let's come back at 25 after.
[Whereupon, at 5:10 p.m., the meeting was recessed, to
reconvene at 8:30 a.m., Thursday, March 11, 1999.]