Materials and Metallurgy & Thermal Hydraulic Phenomena & Reliability and Probabilistic Risk Assessment - May 31, 2002

Official Transcript of Proceedings


Title: Advisory Committee on Reactor Safeguards
Materials and Metallurgy & Thermal Hydraulic
Phenomena & Reliability and Probabilistic Risk
Assessment Subcommittees

Docket Number: (not applicable)

Location: Rockville, Maryland

Date: Friday, May 31, 2002

Work Order No.: NRC-400 Pages 1-337

Court Reporters and Transcribers
1323 Rhode Island Avenue, N.W.
Washington, D.C. 20005
+ + + + +
MAY 31, 2002
+ + + + +

The Subcommittees met at the Nuclear
Regulatory Commission, Two White Flint North, Room
T2B3, 11545 Rockville Pike, at 8:30 a.m., William J.
Shack, Chairman, presiding.

Introduction
NRC Staff Presentations
Introductory Remarks
by Mark Cunningham, RES
ECCS Reliability
by Mary Drouin and Alan Kuritzky, RES
ECCS LOCA Size Definition
by Rob Tregoning, RES
ECCS Acceptance Criteria and
Evaluation Model Requirements
by Steve Bajorek, RES
Rulemaking Activities
by Sam Lee, NRR
Meeting Wrap-up

(8:30 a.m.)
CHAIRMAN SHACK: On the record. The
meeting will now come to order. This is a meeting of
the ACRS Subcommittees on Materials and Metallurgy,
Thermal-Hydraulics and Reliability and PRA. I am Dr.
William Shack, Chairman of the Subcommittee on
Materials and Metallurgy. Dr. Graham Wallis and Dr.
George Apostolakis are my co-chairmen for today's
meeting. The rest of the ACRS members join us today
except for Dr. Powers.
The purpose of this meeting is to discuss
the status of the staff efforts and industry
initiatives on risk-informing 10 CFR 50.46 concerning
emergency core cooling systems for reactors. Gus
Cronenberg is the cognizant ACRS staff engineer for
this meeting. Mr. Paul Boehnert is the designated
federal official.
The rules for participation in today's
meeting have been announced as part of the notice of
this meeting previously published in the Federal
Register on May 13, 2002. A transcript of this
meeting is being kept in the open portions. This
transcript will be made available as stated in the
Federal Register notice.
It is requested that speakers first
identify themselves and speak with sufficient clarity
and volume so that they can be readily heard. We have
received no written comments or request for time to
make oral statements from members of the public. We
will now proceed with the meeting. I call upon Mr.
Mark Cunningham of research to begin the presentation.
MR. CUNNINGHAM: Thank you, sir. Good
morning. We are here today with a large cast of
characters to talk to you about a variety of
technical subjects related to possible changes to 10
CFR 50.46 ECCS requirements.
I'm going to talk a bit about some of our
goals for the meeting and where we are in terms of the
status of the work. We're going to have then a series
of presentations on possible changes to 50.46 and the
technical work that we've been doing to support, to
underpin such possible changes. More specifically
Alan and Mary will talk about possible changes to the
reliability requirement aspects of 50.46.
Rob and Lee will talk about issues related
to the frequency of losses of coolant which is an
important contributing issue to all of the possible
reassessments of 50.46. Then Steve and Norm will talk
about possible changes to the acceptance criteria and
the evaluation model. At the end of the day, we'll
also have some NRR staff talking about rule-making
activities that are related to these possible changes
in 50.46.
In terms of purposes of the meeting, we're
here to provide you with a status report on the
technical work that we've been performing related to
50.46 changes. We're interested in getting feedback
from the Committee and comments on the particular
technical work that we've been doing. At this point,
we're not requesting a letter from the Committee.
In terms of status, I'll have you recall
that under option three, when we investigate possible
rule changes, we really have three phases to our
work. Those phases aren't necessarily sequential.
The first phase is looking at the
feasibility of changes. Over the last few years,
we've been looking across all of Part 50 to identify
what seemed to be potentially important changes to
Part 50. The first of these we identified a few years
ago. They were changes to 50.44 on hydrogen control
requirements. That's moved on to the point that we're
having a proposed rule. It's near to being issued, I
believe with respect to making some changes to 50.44.
Our next subject if you will within Part
50 was 50.46. In July of last year, we wrote in SECY-
01-0133 that we concluded that it was feasible to make
some specific changes to 50.46 and related regulatory
documents. In particular, we thought we could change
the ECCS reliability requirements. We thought we
could change the acceptance criteria, and we could
change the evaluation model.
We also suggested and recommended in that
commission paper that another longer term change might
be recharacterization or redefinition of the design
basis large-break LOCA. We're still considering the
feasibility of that change, but you will hear about
some of the work on that today as well.
Since July of last year, we've been
spending most of our time performing technical work
that would provide more substance to justify rule
changes. Again, we've been looking at reliability
requirements, acceptance criteria, and evaluation
models. In April of this year, we provided to the
rulemaking folks an interim product in terms of
technical work that we've done with respect to the
potential of changing the reliability requirements
of GDC 35 to reflect a more plant-specific reliability
approach. You'll hear more about that later.
While we've been doing the technical work,
other folks in the staff have been working on trying
to decide how you would make the rule changes that
would implement this technical work. We're at the
phase now where we're focusing. In a sense, we're
making the transition that over the next few months
we'll be ramping down in terms of technical work and
we'll be ramping up in terms of looking at specific
rule changes.
MEMBER KRESS: So you're using George
Apostolakis's concept of the darker the color, the
more intense the activity is on that.
MR. CUNNINGHAM: Well, I thought about
starting this by discussion of bright lines and fuzzy
lines and colors and things like that. So, yes. The
darkness of the colors suggests the concentration of
activity if you will. Unfortunately some of the lines
are brighter than I would have liked them to be.
In particular, there's a bright line at
the end of July '02 for technical work. The technical
work does not end in July '02. We have a particular
deliverable then. We will still continue to provide
support to the rule making people, but the concept is
over the next few months we're going to be
transitioning out of being principally oriented
towards technical work to our principal focus being
specific rule changes.
MEMBER WALLIS: You're going to convince us
that you've done enough technical work so that you
understand enough to be able to make the rule.
MR. CUNNINGHAM: Yes. That's part of what
we have to do. We have to convince a variety of
stakeholders that we have a sufficient technical basis
to make the rule change. Yes, sir.
What you'll hear today is we're trying to
make the case that we have a technical basis to change
some rules. In July, we will be delivering, in a
sense making a key transition point from technical
work being done on the reliability requirements,
acceptance criteria, and evaluation model
requirements. We're going to be delivering a
technical product to the people who do rule making for
them to start more seriously thinking about how we
would make the rule changes.
You're not going to hear today about
specific plans for when we'll have rule making on this
or when we'll have rule making on that. That's a
little ways down the road yet. The focus today was
intended to be the technical basis for possible
changes. So as Dr. Wallis said, we need to have a
convincing argument that we have a basis to make the
changes. Today is a piece of trying to convince you
and get the feedback from you that we have a basis to
make these changes.
In a nutshell, that's where we are at the
moment. If there are no general questions, I'm going
to turn it over to a discussion. These two folks are
going to talk about an overview of 50.46 and then talk
more specifically about reliability requirements.
MS. DROUIN: My name is Mary Drouin,
Office of Research. This particular figure you've
seen I believe several times before. What it shows is
how we have in essence unbundled 50.46. We use the
term 50.46 to always include Appendix K and GDC 35.
These are the related regulations to the ECCS.
MEMBER KRESS: Could I ask you an aside
question about your first box there? Appendix K and
GDC, does it specifically say that it's for an LWR
with ECCS?
MS. DROUIN: Yes. It does. The words on
this slide particularly when you look at this box and
when you look at the four here are lifted right out of
Part 50. (Indicating.)
MEMBER KRESS: Could I interpret that to
mean for other design concepts that may not
necessarily have to have an ECCS? You don't want to
go there.
MS. DROUIN: I don't want to go there.
MEMBER WALLIS: Of course, that box on the
right on the bottom talks about breaks in pipes.
MS. DROUIN: Yes. It does.
MEMBER WALLIS: Do you have some evidence
that other types of breaks are possible?
MS. DROUIN: That is accurate. That will
be covered later on in today's presentation. When you
look at this, there are four what we call topical
technical areas of ECCS 50.46. The first one looks
at what we call the ECCS reliability. When you read
50.46 and the associated GDC 35, they talk about the
simultaneous loss of off-site power with the LOCA and
a single failure criterion. What those do, in
essence, is tell you in an indirect fashion what the
reliability of the ECCS needs to be.
The next break that we have is on the ECCS
acceptance criteria. When you look at 50.46, there
are five very specific prescriptive requirements that
are provided for the performance. The next part is on
the evaluation model. That encompasses both 50.46
and also Appendix K. In the evaluation model, when
you look at these requirements, they allow two
models to be used. You can use the realistic model or
you can use what is applied in Appendix K.
MEMBER APOSTOLAKIS: Mary, how does the
third box differ from the first, the evaluation model
and reliability?
MS. DROUIN: The first box is telling you
that in your evaluation, you need to assume a
simultaneous loss.
using the evaluation model with the third.
MR. CUNNINGHAM: The first box feeds into
the third.
MEMBER APOSTOLAKIS: The first one defines
the conditions as Mary said by losing power and all
that. You are using the acceptance criteria from the
second box.
MS. DROUIN: Right.
MEMBER APOSTOLAKIS: You are using the
model from the third to evaluate reliability.
MS. DROUIN: Correct.
MEMBER APOSTOLAKIS: You are doing all
that assuming some LOCA size from the fourth box.
MR. CUNNINGHAM: Yes. The third box isn't
really getting at reliability in a quantitative sense.
It's functional success or not. Will you meet the
2200 degrees?
"realistic including assessment of uncertainties."
MEMBER KRESS: If it's --
MEMBER APOSTOLAKIS: You are still --
MEMBER KRESS: If it's the best estimate, you
have to have the uncertainties.
MEMBER APOSTOLAKIS: Yes, but you are
evaluating --
MR. CUNNINGHAM: It's those uncertainties
they're talking about in that box; the thermal, the
MEMBER KRESS: The rule actually says you
have to be 95 percent confident in your calculation of
the peak clad temperature.
MEMBER APOSTOLAKIS: If I calculate with
an evaluation model the temperature and I have the
uncertainties --
MEMBER KRESS: You have to use the 95 --
MEMBER APOSTOLAKIS: Is the probability
that the temperature will be less than 2200 degrees
the reliability of the system?
MEMBER KRESS: No, by no means.
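As a hedged illustration of the 95-percent criterion under discussion, a minimal Monte Carlo sketch of estimating a 95th-percentile peak clad temperature (PCT) against the 2200-degree limit. The base value and uncertainty distributions below are purely hypothetical, not from any licensing calculation:

```python
import random

random.seed(0)  # reproducible illustration

ACCEPTANCE_LIMIT_F = 2200.0  # 10 CFR 50.46 peak clad temperature limit

def sample_pct():
    """One Monte Carlo trial of a hypothetical evaluation model.
    The nominal value and uncertainty multipliers are illustrative only."""
    decay_heat = random.gauss(1.00, 0.05)      # decay-heat multiplier
    heat_transfer = random.gauss(1.00, 0.10)   # heat-transfer multiplier
    return 1800.0 * decay_heat / heat_transfer  # degrees F

samples = sorted(sample_pct() for _ in range(10_000))
pct_95 = samples[int(0.95 * len(samples))]  # 95th-percentile PCT

print(f"95th-percentile PCT: {pct_95:.0f} F")
print("meets the 2200 F limit:", pct_95 < ACCEPTANCE_LIMIT_F)
```

The point of the sketch is the one Dr. Kress makes: compliance is judged at the 95th percentile of the calculated temperature distribution, not at its nominal value.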
MR. CUNNINGHAM: There's an additional
piece, which is that the equipment you have in order
to accomplish that function has to have a certain reliability.
MEMBER APOSTOLAKIS: And that's not part
of the evaluation. The evaluation is only a piece of it.
MR. CUNNINGHAM: Correct. It assumes it's
there working.
MR. CUNNINGHAM: It's your ability to
calculate. It's not just reliability.
MEMBER APOSTOLAKIS: But if I want to do
the first box, I will need the reliability of various
components and then given this configuration, I will
need the third box to do the calculation.
MR. CUNNINGHAM: Yes. The way the first
box is today --
MEMBER APOSTOLAKIS: It's a piece of it.
MR. CUNNINGHAM: Yes. Today in the
current GDC 35, as we'll get into, the reliability is
prescribed by a certain set of characteristics that 25
or 30 years ago was a way to attempt to accomplish a
highly available system in terms of reliability.
We're saying today we can accomplish that function
without having to be so prescriptive about it.
calculation as you were saying, most of the boundary
conditions were predetermined.
MEMBER APOSTOLAKIS: When you say reliability in the
first box, is that the rational
man's definition or the nuclear industry's definition?
MR. CUNNINGHAM: The reliability there is
the assumption that you'll have a high probability
that the equipment will work such that the acceptance
criteria are met.
MEMBER APOSTOLAKIS: For what period of
time, or just at an instant when the LOCA occurs?
MR. CUNNINGHAM: It is in this case I
guess it's --
PARTICIPANT: Mission time of the PRA.
MEMBER APOSTOLAKIS: -- the mission time?
Or is it just I have a LOCA?
MR. CUNNINGHAM: Let's wait a little bit
because we're going to go back and look at the
specific words in GDC 35 and that may help you. But
it's given a loss of coolant is the way I think of it.
You have to have a set of equipment that would be
highly likely to function successfully.
MEMBER APOSTOLAKIS: For a period of time?
MR. CUNNINGHAM: For a period of time.
standard reliability definition.
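The "for a period of time" notion being discussed is the standard mission-time definition of reliability. A minimal sketch, assuming a constant failure rate; the rate and mission time below are illustrative assumptions, not values from the record:

```python
import math

# Standard mission-time reliability with a constant failure rate:
#   R(t) = exp(-lambda * t)
# Both numbers below are illustrative assumptions.
failure_rate_per_hr = 1.0e-4   # assumed ECCS train failure rate (per hour)
mission_time_hr = 24.0         # assumed PRA mission time

reliability = math.exp(-failure_rate_per_hr * mission_time_hr)
print(f"P(train runs the full {mission_time_hr:.0f} h mission) = {reliability:.4f}")
```

This is the distinction being drawn: reliability over a mission time, rather than mere availability at the instant the LOCA occurs.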
CHAIRMAN SHACK: Except that the rule
doesn't say that today. That's the way you're
interpreting the requirements.
MS. DROUIN: That's correct.
MR. CUNNINGHAM: Yes. That's correct.
MS. DROUIN: When you actually look at the
words of GDC 35 --
MR. CUNNINGHAM: That's a very stylized
way of accomplishing that.
CHAIRMAN SHACK: You'll never see that.
MS. DROUIN: You'll never see for a period
of time.
MEMBER BONACA: Although the notices on
the dockets have to put them all straight; that you
can continue to cool and to support and to get the
circulation. That is accepted.
MR. CUNNINGHAM: The concept is there.
MR. CUNNINGHAM: Again, it's a very
prescriptive way of accomplishing that goal. We're
trying to become less prescriptive.
MEMBER APOSTOLAKIS: Now, the reliability
is a probability. Right? Is that what the existing
rule requires or is it just functional requirements?
MS. DROUIN: It's just the functional requirement.
MEMBER APOSTOLAKIS: That implies a certain reliability.
MS. DROUIN: It implies it.
MEMBER APOSTOLAKIS: Without actually calculating it.
MS. DROUIN: Correct. What's in this box,
this is the actual requirement right now.
MS. DROUIN: That you meet this function
by assuming, by meeting the very prescriptive things.
MS. DROUIN: By meeting those, then you
have indirectly set what the reliability is.
MR. CUNNINGHAM: And our goal is to change
the way that reliability analysis is done.
MEMBER APOSTOLAKIS: To actually quantify
it and then go back and see whether these make sense.
MEMBER KRESS: In order to do that, you
have to have some risk acceptance level for the set of
LOCAs --
MR. CUNNINGHAM: We're going to get into
that. It's for the set of challenges to the ECCS.
MEMBER KRESS: Set of challenges.
MR. CUNNINGHAM: It's well beyond LOCAs.
MEMBER KRESS: Then you're going to tell
us what that criteria is.
MS. DROUIN: We're going to get into a lot
of detail on this.
MEMBER APOSTOLAKIS: We don't want you to
think we're not going to let you.
MEMBER WALLIS: George, I think we need
you on the Thermal-Hydraulic Subcommittee. You can
ask about reliability of codes and the probability
that they're giving the right answer.
MEMBER APOSTOLAKIS: Why? You can't ask
them yourself.
MS. DROUIN: Before we get into the --
MR. KELLY: This is Glen Kelly from the
staff. Could I just address something about the
previous discussion? In discussing GDC 35 and the way
it's written, it doesn't really deal directly with
reliability. It was, when we put together the
regulations, a way of prescriptively describing a
capability that we wanted a plant to have. The
reliability of the equipment itself is not directly
discussed. It talks about just design features that
you want the plant to have. What we're thinking about
in the proposed possibility for changes to 50.46 is
that we can look at what that design represents in PRA
space and see whether today those requirements make as
much sense as we thought they did back when we
initially did it. We're looking to see whether there
are other reliability requirements that we could look
at the design and say if your design meets these
following reliability requirements, then that's good
enough for handling of various size LOCAs and other
But GDC 35 itself is not directly a
reliability -- It doesn't say anything about
reliability. It does assure you about the design
MEMBER WALLIS: It's even more appropriate
that the codes must have momentum equations. But how
reliable are they?
MEMBER WALLIS: Yes. They have to
function just like a piece of equipment.
conservation of momentum is to some degree --
MEMBER WALLIS: Free to relax.
MEMBER APOSTOLAKIS: Maybe we can go on
MS. DROUIN: I just want to quickly recap
at a high level what we had recommended in SECY 133
and the subsequent SECY 57. In terms of the ECCS
reliability and looking at GDC 35, what we are talking
about doing here is to come up with as we said a
different way of looking at the reliability such that
we can ensure the ECCS safety function reliability
such that it's commensurate with the frequency of
challenge to the ECCS safety function. In other
words, we would demonstrate a reliable ECCS safety
function without assuming the LOCA loop and without
assuming the single additional failure criteria. So
that's what we're talking about here. We're going to
get into details of what we mean by that in a few minutes.
MEMBER APOSTOLAKIS: I asked a similar question
earlier. Wouldn't all this assume that you have some
sort of idea as to what is an acceptable ECCS reliability?
MS. DROUIN: Say this again. One more time.
MEMBER APOSTOLAKIS: You would need some
target for ECCS reliability which right now is not in
the books.
MS. DROUIN: That is correct. We're going
to get into that.
MEMBER KRESS: A different way to say it
is you need an acceptable risk for those challenges
as a function of the frequency, I think you just said.
MS. DROUIN: Correct.
MEMBER KRESS: Okay. That would be --
MEMBER APOSTOLAKIS: Essentially we're
allocating the goal.
MEMBER APOSTOLAKIS: So far we've been
talking about the --
MEMBER KRESS: We're allocating the goal
two ways. One is by those challenges to ECCS. The
other is by the frequency of those challenges.
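The allocation Dr. Kress describes can be put in numerical form as a back-of-the-envelope sketch. All numbers below, including the CDF guideline and the challenge frequencies, are illustrative assumptions, not the staff's proposed values:

```python
# Back-of-the-envelope allocation: how reliable must the ECCS be so that
# each challenge class contributes no more than an assumed core damage
# frequency (CDF) guideline?  All numbers here are illustrative.

CDF_GUIDELINE = 1.0e-5  # per reactor-year, assumed allocation target

challenge_frequencies = {  # per reactor-year, assumed values
    "small LOCA": 5.0e-3,
    "medium LOCA": 4.0e-4,
    "large LOCA": 5.0e-5,
}

max_unavailability = {}
for challenge, freq in challenge_frequencies.items():
    # CDF contribution = frequency x P(ECCS fails | challenge), so the
    # allowable failure probability is the ratio, capped at 1.0:
    max_unavailability[challenge] = min(1.0, CDF_GUIDELINE / freq)
    print(f"{challenge}: ECCS failure probability must stay below "
          f"{max_unavailability[challenge]:.1e}")
```

Note how the rarer large LOCA tolerates a less reliable ECCS response than the frequent small LOCA: that is "reliability commensurate with the frequency of challenge" in numerical form.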
MS. DROUIN: We're going to get into our
guidelines, our criteria, what we propose. The next
part is on the acceptance criteria. Again, if you look
at 50.46, they're very prescriptive. There are five
very prescriptive criteria. What we are proposing is
to add a performance-based option such that basically
you'd be ensuring that the core remains amenable to
cooling. This would allow the use of other cladding
material without going through, for example -- allow
the use of there being a --
On the evaluation model which right now
you can either use your best estimate or Appendix K,
what we had recommended here was revising some of
these requirements to be more realistic. Specifically,
we're talking about allowing the use of the 1994 ANS
standard in place of, I believe, the 1971 standard
that's in there.
The last one as Mark indicated is on a
much longer track. That's redefining the maximum pipe
break size. We're continuing with the work. There
has been some work accomplished. It's not like it's
not being done. But there is some work. We're going
to be also speaking to that in more detail.
MR. CUNNINGHAM: Mary, you didn't address
the second sub-bullet under the first group there.
MS. DROUIN: Oh, sorry. When you look at
Appendix K particularly there's a lot of uncertainty
and conservatism. Those are going to be dealt with on
a separate track. That will also be discussed in
today's presentation.
MEMBER APOSTOLAKIS: And it will include
model uncertainty.
MR. SCHROCK: Excuse me. Could you
clarify the first bullet there? Is your point that
none of the reg guides or regulations mentioned the
1994 ANS standard currently even though the reg guide
accompanying rule change I think 88 says that the '78
standard is permitted? It doesn't say it's required.
It says it's permitted.
So what is it that you're proposing here
to change? Do you want to have the '94 standard
blessed as being suitable in place of the '78 via the
reg guide statement that it is acceptable or are you
looking for some other way of using the '94 standard
to replace the '71, '73 standard? I think it's the
latter. Isn't it?
MS. DROUIN: I'm going to put your
question on hold. We are going to get into a very
detailed presentation on this. Instead of trying to
answer it right now, I would prefer to wait until we
get to that part of the presentations.
MR. SCHROCK: Yes. But I think in the
context of an overview, it ought to be a little more
clear as to what it is you're attempting to
MR. KURITZKY: The key here is that this
change is for Appendix K. In other words, the '88 rule
change was for the best estimate; Regulatory Guide
1.157 gave us guidance for doing the best estimate
analysis, which refers to the later ANS standard.
Appendix K specifically states the '71 standard. What
this is doing is a change to Appendix K. So if you're
doing the Appendix K option, you can use the '94
standard. This is specifically for Appendix K.
MEMBER BONACA: We had a presentation some
time ago where some of the concerns were presented.
Some of the concerns that were indicated were a --
setting for subcool boiling. So you'll talk about
that at some point.
MS. DROUIN: Those were mentioned as
possibilities. We were not definitive that they would
be done. That will be covered also.
MEMBER WALLIS: I don't see how you permit
use of three different standards when they predict
three different peak clad temperatures. What do you
do then? Pick the one you want?
PARTICIPANT: It's an average.
MR. KURITZKY: We need all the details on
MEMBER APOSTOLAKIS: On slide seven you
said that you would "provide two voluntary
performance-based options." Then there is a bullet on
the ECCS evaluation model. The whole idea of a
performance-based regulation is not to prescribe how
you demonstrate compliance. Is it because of the
importance of this issue here that we want to actually
approve the evaluation model? Why wouldn't they be
free to demonstrate compliance any way they want?
Because it's too complicated? Too many assumptions
and you want to know about them?
MR. CUNNINGHAM: At this point, I think
it's the combination of the two things you said. It's
a very complicated set of analyses and it's at the
heart of the thermal-hydraulic calculations of things.
We're not ready in operating reactor space to take
that additional step.
MEMBER APOSTOLAKIS: That makes sense.
MS. DROUIN: There's not really much I'm
going to talk about here, just to reiterate that we're
going to go through in some detail the technical work
that we're doing in each of these areas. We did send
a report up in April on some initial work that we had
done on the ECCS reliability. We have a milestone due
in July, as Mark noted, that will, I hate to use the
word, "complete" the ECCS reliability and the
acceptance criteria and evaluation model work. Those
are due in July. Then the spectrum of breaks is on a
longer track.
We're now going to get into the details of
each one of these of what we're doing with the
technical work. We're going to start with the ECCS
reliability. At this point, I'll turn it over to Alan
who will walk you through it and hopefully convince
you that the approach is technically feasible.
MR. KURITZKY: Okay. I'm Alan Kuritzky.
I work with Mary in the Office of Research. What we
want to discuss with you right now is in regard to our
approach in SECY-02-0057, where we proposed some
changes to the ECCS reliability requirements.
Specifically, we
identified coming up with a risk informed alternative
to GDC 35. As Mary was mentioning before and as we've
had some discussions but not consensus, the GDC 35
indirectly gets to the heart of ECCS reliability by
stipulating that this system, ECCS must be designed to
operate and satisfy its mission function given a
single failure and given a loss of off-site power.
What we're trying to do with the risk
informed alternative is to allow the ECCS to be
designed, operate, or possibly evaluated based in part
on quantifiable reliability numbers instead. I make
the point of saying in part because of course the work
using the reliability numbers is just one piece of a
risk-informed defense-in-depth process. So we're not
going to make pure decisions just based on bottom-line
numbers.
I just quickly want to identify a couple
of limitations in the work we're doing and talk about
the scope. As mentioned in the SECY, we're looking at
the changes to the ECCS reliability requirements,
specifically in GDC 35. We are not at this time
proposing changes to the single failure criterion as
it applies to other systems in the other GDCs such as
17, 34, the ones dealing with electric power or RHR or
cooling water, et cetera.
We're focusing right now just on ECCS. By
the same token, we're also not recommending changes
now to the containment's design, the performance
requirements, or EQ. This is specifically just for
ECCS itself.
Also because what we're proposing is more
of a performance-based alternative, we have to have
some performance monitoring also. Therefore, the
implementation of this alternative needs to be done in
a way that's consistent with other existing programs
like the reactor oversight program and the maintenance
rule or any risk informed technical specification
issues that are coming along at the same time.
MEMBER ROSEN: That's really impenetrable
for me. Are you suggesting some corrective action or
strategy different than the corrective action programs
now in place in the utilities?
MR. KURITZKY: No. We're not specifically
stating that right now. What we're saying is, as we
proceed now into the next phase we have a work group
that's been put together to address more of the
implementation issues associated with this. We're
going to have to go in lock-step with these other
programs so that we can be consistent with them.
In other words, if there are already
programs in place for getting a feedback on equipment,
reliability and performance, we have to get those
seamlessly tied in to what our program is going to do.
We do not want to have to sit there and bring up a
bunch of new programs. We want to make use of what's
already out there and try to seamlessly interact with
MEMBER ROSEN: Okay. It seems to me that
if someone uses any new strategies and they find out
after they do an analysis and actually use it, that in
fact, they had made a mistake in the strategy, in the
calculation. Then that would simply be a problem
identification report at that utility. It would go
through the normal corrective action processes. It
would be a root cause evaluation, extent of condition,
corrective action. It's already in the requirements
in Appendix B of 10 CFR Part 50.
MR. KURITZKY: Again, we haven't gotten to
the point of implementation discussion internally yet
as to how this is going to work. Your point is valid.
What we're doing is trying to make use of, we're going
to be looking to making use of existing programs to
the extent possible.
MEMBER ROSEN: Nothing new is needed
there. It's only implementation of an existing program.
MR. KURITZKY: Right. It's making sure
the implementation, this change works seamlessly with
what's already out there.
MR. CUNNINGHAM: You can think of GDC 35
as a design requirement. We're trying to bring the
design requirement into line with the operational
requirements that exist and are being implemented
today. They ought to be giving you the same guidance
if you will.
MEMBER LEITCH: It seems to me we're
mixing design requirements with reliability
requirements. In other words, if I understand where
you're going with this, it would be possible as far as
this rule change is concerned if one had a very
reliable off-site electric power system to have a
plant with no diesels for example as far as this is
concerned and meet the reliability goals. My
perception of ECCS reliability is it's more impacted
by mechanical things; pump availability, balance and
so forth and by the availability of electric supply
system. The off-site electric supply system
associated with most plants is highly reliable. Are
we saying then forget those design requirements?
The plants that are already built have
those back up electric supply systems already there.
I guess what I'm concerned about and I'm starting to
develop a feel for this is, if new plants were built
using these revised criteria, would having a highly
reliable off-site power supply system substitute for
on-site diesels?
MR. KURITZKY: I would like to make two
points in regard to that. The first is again as I
mentioned in the previous slide this is one part of a
risk informed defense in depth approach. The bottom
line is that decisions will not rest purely on the
numbers. Just because you have an extremely reliable
off-site power system doesn't mean you can justify
not having on-site diesels.
MEMBER LEITCH: But if I could demonstrate
the reliability of the ECCS systems that met this new
criteria without an onsite diesel --
MR. KURITZKY: But again you have to look
at the defense in depth issues also. It may be that
you don't want to have all your eggs in the off-site
power basket. That's one of the things we have to
consider. Again, we're not going to make the decision
based purely on the numbers.
The second point I can make is just that
at this point we haven't gotten to the point where
we've established exactly what extent of changes we're
going to allow. I was going to bring that up later.
Will we allow this program to be relaxed --
specifications? Will we allow actual equipment to be
removed from the plant? Those types of decisions
haven't been ironed out yet.
MEMBER BONACA: Although, you want to have
some articulation on how issues like the ones that
have been raised here are going to be dealt with. You
have option three, the framework that you developed.
You have reg guide 1.174 as principles for defense in
depth. You already have there some elements of the
MEMBER BONACA: Do you? Within the
MS. DROUIN: Right. I think you'll see
later on that Alan is going to get into it. You're
not looking at the ECCS reliability just against the
challenge of loss of off-site power. You're going to
have to look at all the initiators and all the
challenges. He's going to get into more of that I
think and that will answer your question in more
MEMBER LEITCH: Okay. I'll hold off for
a little while and see where he goes. Thanks, Alan.
MR. KURITZKY: This slide got talked about
quite a bit already today. With the risk informed
alternatives, GDC 35, what we're looking at doing and
what we're envisioning is offering two approaches for
demonstrating ECCS reliability commensurate with the
frequency of the challenges through both a plant-
specific approach and a generic approach. These
approaches would be specified as we're envisioning in
a regulatory guide not a rule itself. They would
serve the purpose of demonstrating the ECCS
reliability without using the prescriptive assumptions
of the current GDC 35.
In the plant-specific approach, the
licensee with appropriate consideration of
uncertainties would demonstrate that they meet NRC
established acceptance guidelines. For the generic
approach, the NRC would -- and establish a minimum set
of ECCS equipment needed to meet those guidelines
based on a plant grouping, some form of generic plant
grouping. Both of these approaches were derived
using the guidance and the direction of the option
three framework.
As Mark mentioned in April, we
provided the rule making people with an interim report
on some of the work being done on the plant-specific
approach to a risk informed GDC 35. We're continuing
to do work on the generic approach. We have a
deliverable due in July which will hopefully
demonstrate the feasibility and practicality of that
The technical work that we have done so
far, and in some cases are still working on, is for
the plant-specific approach. These things
would ultimately apply to the generic approach too.
There are three principal technical areas that we've
been working on.
One is the acceptance guidelines for
demonstrating the appropriate ECCS reliability. The
second one is coming up with LOCA frequencies. As Mark
mentioned earlier, that's a key input to these
activities. The third thing is a conditional
probability of loss of off-site power given a LOCA.
That's particularly of concern or interest when you're
looking at the simultaneous LOOP assumption: whether
or not that is risk significant or of high enough
frequency or probability that it needs to be
considered in the design basis.
MEMBER KRESS: Does LERF enter into this
because of the bypass accidents?
MR. KURITZKY: The bypass is a part of it.
But also just in general we don't want to just focus
on preventing core damage. We also want to prevent
early release. We want a multi-pronged approach
from the framework which is going to address
preventing core damage, preventing large release. We
have consequence mitigation, et cetera.
MEMBER KRESS: I always thought the LERF
would always show up in bypass accidents.
MR. KURITZKY: That's probably a driver.
MEMBER WALLIS: How do you determine
acceptable LOCA frequencies?
MR. KURITZKY: When we say "acceptable"
that means acceptable for something we would be
willing to let the licensee use in their calculation.
MEMBER WALLIS: Eventually it's not just
a calculation. You're predicting something which is
likely to happen. So you envision a world where
perhaps we have one LOCA every ten years and one CDF
every 100 and one unacceptable LOCA every 1,000 or
something. LOCA becomes a new criterion?
MR. KURITZKY: Again, I think the word
"acceptable" maybe is confusing. It's not acceptable
as they have to be able to meet this certain LOCA
frequency. It's rather just LOCA frequency. I'm
going to discuss a little bit later on frequencies.
In fact, Rob Tregoning is going to go into more detail
on it. Right now the existing LOCA frequency is used
in PRAs. There are questions and concerns about them.
We need to determine what are some usable LOCA
frequencies.
MEMBER KRESS: You're asking questions about
the frequencies that are used in PRAs.
MEMBER KRESS: Let me ask another question
about that. I was beginning to think the LOCA
frequency was a cut-off value that you would use for
defining your design basis accidents. Is that somehow
related to that also?
MR. KURITZKY: That is more probably the
long term, the redefinition of LOCA spectrum for
locations. It may get into some of that. We are
looking at, from a risk point of view, some essential
cut-offs in terms of risk contributors, so the LOCA
frequency is one factor in an equation for that. It
does relate to that. As far as a direct cut-off
with LOCA frequency, that's probably more in the
domain of the long term project.
MEMBER KRESS: The reason I ask that of
course is the question of what is the large-break
LOCA. It feeds back into this question of are you
going to get rid of the double-ended guillotine break.
MR. KURITZKY: Yes. That's going to get
discussed probably a little more this afternoon. It's
also part of the long term project.
Okay. Let me go over those three
technical layers. The first is the acceptance
guidelines. Before I go into exactly how we are
proposing these CDF and LERF acceptance guidelines, I
want to introduce the concept of two different types of
changes that we envision licensees may propose
relating to the ECCS. My description of the guidelines
will be slightly different for them.
The first is a change in ECCS design or
operation, which is actually requesting the extension
of an allowed outage time for a piece of equipment, or
possibly removing some equipment from the plant or no
longer maintaining it to certain standards. The second
type of change is a change in the design basis, which
is actually removing some accident from your design basis.
MEMBER KRESS: Now why would we want to do
that unless they wanted to do the first bullet?
MR. KURITZKY: Yes. You're right. The
change obviously will be back to one of those. To
come in with this second one, we have to have some
means of evaluating it. That's the whole reason I made this
split right here. I would need to talk about how you
would evaluate the second one.
So now I just want to go over the
guidelines as they pertain to the design/operational
changes. The licensee if they had proposed an
operational change would need to demonstrate that the
ECCS functional reliability is commensurate with the
frequency of accidents for which ECCS needs to operate
to mitigate that challenge and prevent core damage or
large early release. That is accident, not just
As we all know, ECCS responds to a whole
spectrum of accidents. It responds to a lot of
transients. It can respond to external event
initiated scenarios and even during shut down, ECCS
can be -- to respond. So there's a wide breadth of
things ECCS must respond to.
A licensee can accomplish the first bullet
by demonstrating that the following acceptance
guidelines are met. We have two acceptance
guidelines. The first is a baseline total plant CDF
and LERF which needs to meet the quantitative
guidelines from the option three framework.
MEMBER APOSTOLAKIS: Are these different
from the quantitative goals that we're using?
MR. KURITZKY: Derived from them. The
quantitative guidelines in the option three framework
for CDF and LERF are the subsidiary goals derived from
the quantitative health objectives.
MR. KURITZKY: Actually now I think the
new framework is going to have an appendix that
actually documents and traces back that derivation.
MEMBER KRESS: Ten to the minus four and
ten to the minus five.
MR. KURITZKY: Correct. Have you seen my
next slide?
MEMBER KRESS: I haven't read any of them.
MR. KURITZKY: The second acceptance
guideline is that the resulting delta risk or the
change in risk from a proposed change must not
represent a significant risk increase. Quickly
jumping to the next slide to explain the values. As
Dr. Kress mentioned, ten to the minus four and ten to
the minus five respectively are the option three
framework guidelines for CDF and LERF. They are
derived from the QHOs.
Again, since these values apply to a full
scope PRA, we need to look at the total plant CDF and
LERF not necessarily just the part that comes from the
response to LOCA and not just the part that comes from
ECCS values. This is for total plant all modes of
MEMBER KRESS: Would you also look at the
total site LERF if you have multiple plants on the
MR. KURITZKY: That's a good question.
Right now we haven't specifically called that out.
MEMBER KRESS: Of course, whatever your
uncertainties, it doubles if you have two plants on
the site. It might be in the hash (PH).
MEMBER ROSEN: Well, there are some sites
that have three plants.
MEMBER KRESS: It still may be in the hash
MEMBER ROSEN: Here we're considering
internal events, external events, shut down.
MEMBER ROSEN: All of it.
MEMBER ROSEN: It's going to be added
MR. KURITZKY: Yes. Which will lead to
some of the issues that I'm going to bring up later.
Also I just wanted to point out that the ten to the
minus four and ten to the minus five are not set in
stone. They're consistent with reg 1.174. There's
some flexibility that we would probably allow in that
depending on the extent of the delta risk. They are
a flag to give more regulatory attention or require
more rigorous analysis to show that you're fairly
accurate with your results.
MEMBER ROSEN: One more comment, Tom. In
a two plant site you need to multiply it by two, at a
three plant site by three, and at a ten plant site by ten.
MEMBER KRESS: By ten for LERF, but you
don't do it to the CDF.
MEMBER ROSEN: No, just to the LERF.
MR. KURITZKY: Okay. The other point I
want to make on acceptance guidelines is that the
option three framework only has absolute risk values
in there. It does not have incremental or delta risk
values. For that second acceptance guideline, we
would be using the reg 1.174 acceptance criteria for
delta risk, for changes in risk.
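The two acceptance guidelines just described lend themselves to a simple screening sketch. The Python fragment below is purely illustrative: the absolute guidelines (1E-4 per year CDF, 1E-5 per year LERF) come from the option three framework as discussed here, the delta-risk thresholds shown are the "very small change" values associated with Reg Guide 1.174, and the example plant numbers are invented assumptions.

```python
# Illustrative screen of the two acceptance guidelines discussed here:
# (1) baseline total-plant CDF and LERF against the option three framework
#     guidelines, and (2) the change in risk against Reg Guide 1.174-style
#     "very small change" thresholds. All plant values are hypothetical.

CDF_GUIDELINE = 1.0e-4     # per year, option three framework CDF guideline
LERF_GUIDELINE = 1.0e-5    # per year, option three framework LERF guideline
DELTA_CDF_SMALL = 1.0e-6   # per year, RG 1.174 "very small" CDF increase
DELTA_LERF_SMALL = 1.0e-7  # per year, RG 1.174 "very small" LERF increase

def meets_acceptance_guidelines(cdf, lerf, delta_cdf, delta_lerf):
    """True if both the absolute and the delta-risk guidelines are met."""
    absolute_ok = cdf <= CDF_GUIDELINE and lerf <= LERF_GUIDELINE
    delta_ok = delta_cdf <= DELTA_CDF_SMALL and delta_lerf <= DELTA_LERF_SMALL
    return absolute_ok and delta_ok

# Hypothetical plant: baseline CDF 3e-5/yr and LERF 4e-6/yr; the proposed
# change adds 5e-7/yr to CDF and 5e-8/yr to LERF.
print(meets_acceptance_guidelines(3e-5, 4e-6, 5e-7, 5e-8))  # True
```

As the discussion emphasizes, such a numerical screen would be only one input to the decision making process, not the decision itself.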
I want to re-emphasize, I mentioned before
that consistent with the option three framework these
quantitative guidelines are only one part of a risk
informed defense in depth approach. Decisions are not
to be made entirely based on these values. Rather
that's one input to the decision making process. The
defense in depth principles cannot be violated.
MEMBER WALLIS: That's a very strong
statement. I think you may always find someone who's
going to say you can't possibly do this because of
defense in depth.
MR. KURITZKY: Yes. It is a good point,
Dr. Wallis. The principles that we refer to here, I
think they're detailed in the framework. They're also
detailed in the reg 1.174.
MEMBER WALLIS: They have to be more than
principles. They have to be quantified or something
so you can apply them. Otherwise, you're always going
to find someone who interprets defense in depth as
being you can't do this because of defense in depth.
MR. KURITZKY: Right. There's half a
dozen of these principles. Depending on who views and
interprets those principles, they can say absolutely
nothing can be changed or there is leeway for some
things to be changed. I think that --
MEMBER WALLIS: You have to change their
way of doing things though. You have to change the
way in which you talk about defense in depth. They're
not just principles that can't be violated. You have
to be more specific about what they really mean.
MR. KURITZKY: Yes. That's true.
MEMBER APOSTOLAKIS: Maybe you can use a
mild diversion. Say defense in depth philosophy
should be satisfied or met. "Cannot be violated" is
too strong. You can say there is something out there
that you should try to comply with.
MEMBER WALLIS: It's like obscenity. You
can always find something that violates somebody's
sensitivity. Therefore, it's not allowed with them.
MEMBER APOSTOLAKIS: So defense in depth is obscene?
MEMBER WALLIS: I'm saying if they try
to apply defense in depth principles to obscenity, I
think they'll get into great trouble.
MEMBER APOSTOLAKIS: Yes. So maybe comply
with the philosophy.
MEMBER APOSTOLAKIS: Which essentially
says deal with the uncertainties, but not how to deal
with them. It doesn't say how, but it says deal with
MEMBER ROSEN: I think what you said is
that depending on the issue and depending on which
part of the staff is involved, defense in depth may be
interpreted quite differently. That situation is
unacceptable to you which I think is the right answer.
MEMBER KRESS: But when you get ready to
try to figure out what this defense in depth means, I
would recommend the ACRS rationalist approach which
says that no set of sequences will contribute
inordinately to the uncertainty in the final risk
result. That's how we tied uncertainty in. It's -- to
what that set of sequences is you're dealing with. If
they contribute all the uncertainty and there's not
much left over for other sequences, then that's
inordinate. You have to somehow factor that into it.
I don't know what inordinate means either, but you
have to figure
that out.
MEMBER APOSTOLAKIS: Since you raised
that, I think a pragmatic approach, suppose there is
something that you may want to explore here. It may
need to apply defense in depth in a more -- fashion
at the high level, but then apply the rationalist
approach at lower levels. The reason for that is
because in the structuralist approach, they're also
claiming that defense in depth protects you in case
you are wrong in your calculations or you are wrong in
some assumptions. Now if you apply that to every
little detail then you'll never get out of it.
At some high level you might say I've done
this beautiful analysis but what if I'm wrong. Why
don't I put this extra protective system or some other
measure to protect me? It's not a very satisfactory
state of affairs, but I think it's a pragmatic state
of affairs. At this point, given the uncertainties we
have, a lot of them remain unquantified.
MEMBER KRESS: The problem with that is
you don't have any guidance on how good that extra
level of protection has to be.
MEMBER KRESS: So you get back into the
same problem you had.
MEMBER APOSTOLAKIS: We don't, but at
least you are beginning to limit the applicability.
MEMBER KRESS: You limit where you're --
MEMBER APOSTOLAKIS: Otherwise, we might
as well forget about all this. If we keep applying
defense in depth at every level, why do all this? Do
you want to say something?
MS. DROUIN: All I was going to say was in
our next version of the framework paper in option
three we have taken your discussion on the rationalist
and the structuralist, the high level and the low level
and expanded the discussion quite a bit to address
these things and the uncertainties. That's all.
MR. KURITZKY: Okay. The proposed
acceptance guidelines for the design basis changes.
Essentially they have to meet the same acceptance
guidelines as the design operational changes that we
just discussed. The only difference and the point we
want to make here is that this is an analytical
change, not a physical change at least initially. As
it was pointed out, obviously you'd only make this
change because you have some physical change in mind
down the road.
Because it's now an analytical change, we
need a method for accounting for what the delta risk is
associated with it. Therefore, what we are proposing
is that the design basis event or set of events that is
a candidate to be removed from the design basis would
be assumed to go directly to core damage, because the
plant would no longer be designed to be able to
respond to it.
If you assume that the accident went directly to
core damage and you were still able to meet the
acceptance guidelines, both the absolute and the
relative, then that would be essentially meeting the
acceptance guidelines for the change.
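The screening logic described here -- assume the candidate scenario goes directly to core damage and then check whether the guidelines still hold -- can be sketched numerically. The following Python fragment is purely illustrative; every input (the LOCA frequency, the conditional LOOP probability, the baseline CDF, and the thresholds applied) is an invented placeholder, not a staff value.

```python
# Hypothetical screen for removing a scenario (large break LOCA with
# coincident LOOP) from the design basis. Per the proposal, the scenario
# is conservatively assumed to lead directly to core damage, i.e. its
# conditional core damage probability is set to 1.0.

F_LBLOCA = 5.0e-6            # per year, assumed large break LOCA frequency
P_LOOP_GIVEN_LOCA = 1.0e-2   # assumed conditional probability of LOOP
CCDP = 1.0                   # scenario assumed to go straight to core damage

delta_cdf = F_LBLOCA * P_LOOP_GIVEN_LOCA * CCDP  # added to the PRA result

BASELINE_CDF = 3.0e-5        # per year, assumed baseline total-plant CDF
new_cdf = BASELINE_CDF + delta_cdf

# Check against the absolute guideline (1e-4/yr) and an RG 1.174-style
# "very small" delta-risk threshold (1e-6/yr).
print(f"delta CDF = {delta_cdf:.1e} per year")
print(new_cdf <= 1.0e-4 and delta_cdf <= 1.0e-6)  # True with these inputs
```

With these assumed inputs the screen passes; a real submittal would of course also have to address LERF and the uncertainties in each input.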
MEMBER KRESS: You intend to have an
absolute value on the deltas that's acceptable.
MR. KURITZKY: On the delta? It's from
the reg guideline 1.174. I wouldn't say it's absolute
because it's a cut off. In the reg guideline 1.174,
there's a fuzzy chart in there which is based on
baseline CDF and acceptable changes. There's
different regions in there and intentionally fuzzy
transitions between the regions. But it gives you a
ball park of what's acceptable. We'd be going on the
same --
MEMBER APOSTOLAKIS: I don't understand
the second bullet. What does the second bullet mean?
MR. KURITZKY: As an example, say you
wanted to remove from your design basis a large break
LOCA coincident with the loss of off-site power. What
you would do is assume that a large break LOCA and
LOOP lead directly to core damage, and adjust
your PRA model accordingly. If you still met the two
acceptance guidelines, then that would be acceptable.
In other words if you assume large break LOCA and
coincident LOOP led directly to core damage and you
still had a CDF below ten to the minus four if the
delta risk or the delta CDF --
MEMBER APOSTOLAKIS: You assume that a
large LOCA coincident with loss of power leads to core damage.
MEMBER APOSTOLAKIS: Now what probability
are you calculating then?
MR. KURITZKY: It's no longer in the
design basis.
CHAIRMAN STACK: No. The conditional
probability is one, yes.
MEMBER APOSTOLAKIS: So you're calculating
the probability of the coincident occurrence.
MR. KURITZKY: The frequency --
MEMBER APOSTOLAKIS: If that is less than
MR. KURITZKY: If you take the frequency of
the large break LOCA times the conditional probability
of LOOP, that quantity, with the conditional
probability of core damage set to one. If that meets, let's say, the
reg guideline 1.174 delta risk acceptance guidelines
for CDF and LERF --
MEMBER WALLIS: It seems to me -- has a
large break LOCA or other symptoms of large break LOCA
being likely to occur, then you suddenly change that
probability. You find that half the plants are no
longer in compliance. What do you do? Do you go back
and put in some different kind of LOCA?
MR. KURITZKY: Well, there's two points
you made there. One is that any time you're
making decisions in a risk informed environment you
always run the risk that your understanding of your
data or whatever can change on you. One of the topics
that the working group on this project is trying to
work on is how to work the rule making package such
that if something should change later on we don't have
to go through a back-fit process to make a change.
If you make a change based on the current
risk picture and that picture changes, then you should
be required to go back to the way it was before. There
shouldn't be a big burden of back-fit argument to
have to be addressed. That's one area that we're
looking at right now.
The second thing as far as whether or not
something would change and totally destroy the LOCA
picture, that's an issue. Maybe it'll be brought up
more when Rob or someone discusses LOCA frequencies
in detail.
We're looking at a range for LOCA frequencies and
uncertainty. Obviously it's a very uncertain
parameter. You have to hope that an event here or an
event there isn't going to radically change your
perception of what that range of LOCA frequencies is.
Obviously we've had a couple of events in recent times
that made us all sit down and re-think what we're
MEMBER WALLIS: TMI did have a change, did
make changes occur. The waves were pretty large after
TMI. The waves would be pretty large after anything
comparable like a large break LOCA.
MR. KURITZKY: If something not expected
to happen actually happens, of course, and it really
hasn't been accounted for in the analysis or the
distributions, then yes, that risk exists. That
would be true with any risk informed application.
MEMBER ROSEN: I see this much more
simply than you're discussing it. If something
happens that puts a plant outside its analysis, then
it's outside its licensing basis. You know how to
deal with that. We take steps to put it back within
the licensing basis. This can happen, and when it
does we know what to do.
MEMBER BONACA: If I remember, you already
showed the results before that showed the LOCA and
LOOP combined is an extremely low probability. What
you're saying is if that is confirmed by plant
specific calculation for a specific plant, you would
treat it through those 1.174 criteria by
saying that the contribution to risk of the particular
combination is so small that you don't have to have
the loss of off-site power assumption in your LOCA
analysis.
MR. KURITZKY: Yes. That's correct.
MEMBER APOSTOLAKIS: Why do you need the
second bullet? Why isn't the first one sufficient?
MR. KURITZKY: The second one explains.
For instance, if you differentiate between what I
mentioned before the design and operation changes. If
someone came in and proposed to take out an LPSI pump,
they have four LPSI pumps and they propose taking one
out of their plant and no longer maintaining it, they
would do a calculation to show now we only have three
LPSI pumps. What is the change in risk from our
baseline? They would then see whether or not they
meet the acceptance guidelines.
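As a toy version of the pump-removal example, one might bound the change in risk with a deliberately oversimplified model: independent pump failures, a one-of-N success criterion, and a single lumped challenge frequency. A real PRA would model common cause failure, support systems, and the full set of initiators; all the numbers below are invented placeholders.

```python
# Toy delta-risk estimate for removing one of four redundant LPSI pumps.
# Assumes independent failures and a one-of-N success criterion; all
# input values are hypothetical.

P_PUMP_FAILS = 1.0e-2    # assumed pump failure probability per demand
F_CHALLENGE = 1.0e-3     # per year, assumed frequency of demands

def system_failure_prob(n_pumps):
    """Probability that all n pumps fail on demand (one-of-N success)."""
    return P_PUMP_FAILS ** n_pumps

# Going from four pumps to three raises the per-demand system failure
# probability from about 1e-8 to about 1e-6 in this toy model.
delta_cdf = F_CHALLENGE * (system_failure_prob(3) - system_failure_prob(4))
print(f"delta CDF ~ {delta_cdf:.1e} per year")
```

The resulting delta would then be compared against the same acceptance guidelines discussed above, alongside the baseline totals.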
MEMBER APOSTOLAKIS: Let's go back to what
Mr. Leitch said earlier. Unless I'm wrong, the
current requirement is LOCA plus loss of off-site
power. Then you say if you want to change that assume
that the conditional probability given these
circumstances of having core damage is one. But I
have my diesels. Don't I? Why is that one? Don't I
get extra power?
MR. KURITZKY: You have your diesels
because that's in your design basis right now. If you
take it out of your design basis, you may no longer
start your diesels rapidly enough to be able to
respond to a large break LOCA.
MEMBER APOSTOLAKIS: But you're not even
giving me the chance of investigating that. You're
saying I have to assume I have core damage. I'm
missing something here. Why don't you say just the
first one? Do the first one. You want to change the
requirement of simultaneous concurrent loss of power
and the LOCA, fine, analyze the sequence, show us how
CDF and LERF change and go the normal way. Why do you
have to assume that there is --
MEMBER BONACA: You're right. I think
you're right. It's intriguing because this is a heavy
burden on the licensee.
MEMBER BONACA: The assumption of having
to use the diesels on your LOCA. That really places
the most restrictive requirements on the diesels. You
have to start them within a certain time and load
MEMBER APOSTOLAKIS: Fine. That's their
problem. Why should I put a second bullet?
MEMBER BONACA: I understand. I'm saying
that the example given however in that kind of context
for the licensee is an important issue. That's one of
the areas where licensees are going to look for an
MEMBER APOSTOLAKIS: The issue here is the
MEMBER BONACA: I understand.
MEMBER APOSTOLAKIS: So far I haven't seen
an application of 1.174 where we told the licensees
what to assume.
MEMBER APOSTOLAKIS: We just said do your
calculations and come to us. If they are acceptable,
fine. Now we're going one step beyond that. We're
saying and we want you to assume this when you do your
calculations. I'm afraid that this is going to lead
to --
CHAIRMAN STACK: This is an example. This
is their second step. I mean, you don't have to go
this route. This one gets rid of the design basis
accident. Once you do that, you can live in design
basis space again.
MEMBER APOSTOLAKIS: But why can't I do
that with the first bullet alone?
MR. CUNNINGHAM: You could. We're getting
at the issue that we're applying the reg guide 1.174
structure to the removal of a design basis event as
opposed to a tech spec change or something like that.
So there's a perception that it is a more significant
change in the requirements. Maybe there needs to be
an additional test. This is one way of making that
additional test. Maybe it's not the right way.
MEMBER WALLIS: In George's way then, you
don't have to meet this large break LOCA criterion,
but you still have to analyze it because you have to
evaluate the CDF. Therefore if you get a peak clad
temperature of 3,000 degrees or something, you analyze
the consequences.
MEMBER WALLIS: You still have to analyze.
MEMBER APOSTOLAKIS: The whole thing, yes.
MEMBER WALLIS: If you did the second one,
you wouldn't even have to analyze it. You wouldn't.
MEMBER APOSTOLAKIS: You'd just assume
that you're damage in the core.
MR. CUNNINGHAM: And we're assuming that
you can remove this scenario from the set of design
basis events purely on the frequency of the event.
MEMBER WALLIS: Which is one way to do it.
MEMBER APOSTOLAKIS: If you demonstrate,
that's fine. But should it be a requirement?
MEMBER KRESS: But you don't remove it
purely on the frequency. You use that as your first
judgement. Then you go through the PRA calculation
and show that you meet your risk criteria. It's the
combination of that and --
MR. CUNNINGHAM: But you're right. There
are others that can accomplish the same thing.
MEMBER BONACA: Bullet number one seems to
be the approach.
MEMBER BONACA: The second bullet is just
an example of how you can get there.
MEMBER APOSTOLAKIS: Exactly. If you said
an example --
conservative calculation, assume that you are in core
damage and you still satisfy the reg, that's fine.
But it shouldn't be the same kind of bullet.
MR. CUNNINGHAM: Okay. Good point.
MR. KELLY: What happens if the
frequencies change with time? Say deregulation or
whatever. How do you take care of that?
MR. CUNNINGHAM: That's an issue that
comes up any time you're trying to make requirements
more performance based. You have to make the
judgement of whether or not the decision you're making
is likely to be sensitive to changes in frequency over
MR. KELLY: So would they have to put the
diesels back in?
MR. CUNNINGHAM: If we want to put it back
in, the burden becomes the staff's burden to justify
it. So the staff has to be comfortable when it's
removing this, for example, something from the design
basis that it's not likely to be an issue down the
MEMBER ROSEN: Again, I see this more
simply than that. I see the licensee that proposes a
change that's based on some off-site power reliability
numbers is now bound by them, some range obviously he's
going to have, and has got that in his licensing basis. If
the deregulation or some other factor leads to a
degradation of that reliability, he's operating
outside his licensing basis. He and his staff both
should be interested in that.
MEMBER ROSEN: The corrective action is to
get back within the licensing basis.
MR. CUNNINGHAM: And what we're doing is
getting into the implementation of this concept.
You're right. There are lots of ways to implement it
so you don't go way out of balance and things. We're
getting ahead of ourselves in terms of where we are in
the development of rule changes.
MEMBER KRESS: One more comment, not to
throw a monkey wrench into the system. In my view the
reason reg guide 1.174 ended up with this four
dimensional set of acceptance criteria with the
absolute values and the deltas is because we wouldn't
face up to the need to have an absolute CDF and an
absolute LERF as your acceptance criteria. If we had
those and maybe some expression of the confidence
level in which you have to meet them, you wouldn't
have to do the deltas.
MEMBER KRESS: That would quit penalizing
those plants that are already good. So I just wanted
to make that comment. There are things in 1.174 we
ought to be thinking about.
CHAIRMAN STACK: But there are those of us
who philosophically would also object to allowing the
plant to increase its risk simply because we picked a
limit. If they were ten to the minus six and the
limit was ten to the minus four, do I really let them
go to ten to the minus four?
MEMBER KRESS: That's a nice philosophical
MEMBER ROSEN: Why is that abnormal?
CHAIRMAN STACK: We're not here to debate
MEMBER KRESS: We know 1.174 will allow
that to happen in set of increments anyway. You can
do that with 1.174.
MEMBER ROSEN: There are several steps.
I'm not sure we want to go into that.
MEMBER KRESS: We already said we're going
to allow that is what I'm saying.
CHAIRMAN STACK: Let's get back on track.
MEMBER ROSEN: I would point out, Mr.
Chairman, that you took us off track.
CHAIRMAN STACK: No, no. I was responding
to a diversion.
MEMBER KRESS: I took us off track.
MR. KURITZKY: I'd like to talk just for
a few minutes about the issues of PRA scope and
uncertainty analysis. As we mentioned previously, the
acceptance guidelines are intended for comparison with
a full-scope PRA; external events, internal events,
shut down, all different modes of operation.
Recognizing of course that the majority of PRAs out
there are not full-scope. You'd be hard pressed to
find even one that's truly full-scope.
The significance of the out of scope items
needs to be addressed. The importance of those items
is going to be somewhat of a function of where your
as-calculated values line up compared to the acceptance
MEMBER APOSTOLAKIS: Are you going to
demand the full-scope PRA?
MR. KURITZKY: No. We're not going to
demand the full-scope PRA.
MEMBER APOSTOLAKIS: Even for such a great
MR. KURITZKY: Well, let me say this. We
don't currently envision demanding a full-scope PRA.
Whether or not use of a limited PRA for these
applications is appropriate is a decision that maybe
has not been rendered yet.
MEMBER APOSTOLAKIS: How about the level
two full-scope PRA? I mean, shouldn't you be
demanding that? You're giving them something --
MEMBER ROSEN: Let me take you around the
trap that Dr. Apostolakis is trying to put you into.
I think you have on slide 17 already said that the
numbers are 1E-4 or 1E-5 and they apply to full-scope PRAs.
MEMBER ROSEN: So someone who comes in and
asks of that has to have the tools to show you that he
meets 1E-4 or 1E-5. That's full-scope PRA.
MEMBER KRESS: Or he has to satisfy the
second bullet on this slide.
MEMBER APOSTOLAKIS: That's where the
problem is.
MEMBER APOSTOLAKIS: The second bullet
again is a way out of this. We'll start waving our
arms and --
MEMBER KRESS: Like I did on the LERF.
You'll say what is the significance of --
MEMBER APOSTOLAKIS: How much does it
cost? What are we talking about? Is it a major
undertaking to do a level two PRA?
PARTICIPANT: It's a million or two
MR. CUNNINGHAM: To do what?
MEMBER ROSEN: Once you have a level one
PRA, it's an incremental cost. You have to do some
containment stuff.
MR. KURITZKY: Including shut down in all
modes and full external events, et cetera.
MEMBER ROSEN: Now that's different.
Level two is incremental. The shut down is another
story and external events is another story.
MS. DROUIN: My experience in the past of
doing these in terms of what we would bid for these
jobs --
MEMBER ROSEN: When NRC was doing the bidding.
MS. DROUIN: No. In my previous life.
You're looking at a million dollars for an external
events PRA. Although the level two is incremental,
it's not a small incremental. You're probably looking
at about $800,000 for a full level two.
MEMBER ROSEN: You're making way too much
MS. DROUIN: The point is it's not a small
amount of money.
MEMBER APOSTOLAKIS: But it is not an
amount of money you are spending for one particular
reason only. This model is being used now for all
sorts of changes and requests and benefits and so on.
You have to look at it from that point of view too,
that it's an investment of long term.
MR. CUNNINGHAM: That's right. You know
for the last several years since we talked about 1.174
and things, these are voluntary approaches. We have
consciously left the door open for people to come in
and ask for changes to their licensing basis even
absent a full-scope PRA.
MEMBER ROSEN: The leadership in the
utility industry in the PRA field has level two
PRAs and they have external events involved. Then
they have shut down. Shut down analyses may not be
full quantification but they're moving in that
direction. This is all consistent with the direction
that the industry leadership and the PRA utilization
is going. It's clear that you can find places of
where that's not true, but it's also clear that you
can find lots of places where it is. The direction is
more and more places where it will be true.
MEMBER APOSTOLAKIS: Well, the problem in
my view is that regulatory guide 1.174 has all the
right words, the right discussion and so on, but the
implementation is very different. We are not really
using the proper CDF when we enter the figures. We're
using level one, internal events only.
Then we say how much do you think it would
be if we include the shut down and other stuff and
then add a factor of two or three. All right. It doesn't
matter. Did you do uncertainty analysis? No. It
doesn't matter. I don't know that anything matters
anymore. You are giving us this beautiful discussion
here. I am really concerned that it will not be
implemented that way judging from what has happened to
1.174. We're going backwards. People look at you and
they seem to be puzzled when you say did you do an
uncertainty analysis.
MR. CUNNINGHAM: It's a fair comment to
say. In this context, we're talking about rule
change. Should we continue to give the flexibility
that 1.174 does for something like rule changes?
MEMBER APOSTOLAKIS: I don't know about
MR. CUNNINGHAM: That's a fair question.
MEMBER APOSTOLAKIS: I don't know about
your second bullet, significance of out of scope
items, because that's what people are going to do.
They're going to do internal events, level one and
then they will start arguing. What do you think if I
put -- in there, what's going to happen? Nothing --
MEMBER ROSEN: There's an inconsistency in
your presentation between slide 17 and the second bullet on
this slide, whatever it is.
MS. DROUIN: I think another way to look
at it is that the more you do the first bullet and the
less you do the second bullet is the more benefit
you're going to get. The more you have to justify
things that are out of scope, the less benefit you're
going to get.
example in the last bullet you're saying "where
possible." What do you mean by that?
MR. KURITZKY: Let me just back up to the
second bullet. I was making a point on the second
bullet. I want to finish my thought which addresses
some of these issues.
Right now, obviously, as Dr. Rosen mentioned,
on slide 17 we talked about full-scope PRA. That's
what we reiterate on the first bullet here. However,
we recognize that very few if any plants have full-
scope, all modes of operation, internal/external event
PRAs. Do we want to say that it is a prerequisite for
having any type of a risk informed change? It's not
my call. Right now, we're going along with the mindset
that it's not necessarily required.
As such we have to be able to deal with
out of scope items. Right now reg guide 1.174 has
some discussion on how you deal with out of scope
items. I think for this application or this effort
something similar is what we were envisioning
initially. Out of scope items would have to be
addressed depending on how close you are to the
acceptance guidelines. We're trying to lead you to
what type, how much you need to address, and whether
or not you need a very rigorous analysis, whether you
need rigorous PRA analyses for some items, or whether
if you're far away from the acceptance guidelines you
can get by with a simpler analysis or some type of
qualitative argument.
MEMBER APOSTOLAKIS: Why would someone get
the benefits of risk informed regulation when that
person or that entity does not have good risk --
MR. KELLY: Dr. Apostolakis, perhaps I can
help out in some of the questions that are coming.
This is Glen Kelly from the staff. The presentation
that you're receiving today is the technical basis
that's going to be presented to the working group from
which we'll try to put together a rule to be able to
do this.
Now as a member of the working group, one
of the things that we'll be looking at is to what
extent we want to allow out of scope items. That
hasn't been determined yet because the rule hasn't
been written about whether it will have to be a full-
scope PRA or whether there will be some aspects that a
utility can come in with, a less than full-scope PRA.
At this point, what we're getting is the
technical justification that would be provided for a
rule. So we'll be going forward from there. A lot of
your questions are very pertinent. Right now, what
you've gotten so far is the technical work. What
happens with the technical work and how the final rule
gets written is still to be determined.
MEMBER APOSTOLAKIS: But shouldn't the
technical work then get away from things like the
second bullet and the fourth bullet? The technical
work should say I have a full-scope PRA. Now how do
I use it? The fact that some utilities don't have a
full-scope PRA is a separate story. It's irrelevant
to the technical work.
MR. KELLY: Right.
MEMBER APOSTOLAKIS: Now you're trying to
embed in the technical work ways of getting out. I
don't remember now. Does 1.174 address out of scope items?
MEMBER APOSTOLAKIS: It does? I remember
it says something in the level two part.
MR. CUNNINGHAM: Yes. Even in level one.
MEMBER APOSTOLAKIS: Well, then it has
been abused.
MR. KELLY: The other aspect of reg guide
1.174 that I think is important to remember is that
reg guide 1.174 was written specifically for licensing
basis changes.
MR. KELLY: The commission has accepted
reg guide 1.174 as a process that can be used for
making risk informed decisions. The numbers that
should be used for the criteria for making regulatory
decisions, in this case for changes to rules, is still
a policy decision that has to be made as to what
exactly the appropriate numbers are to be used. It
may well be that the numbers that are in reg guide
1.174 currently will be the ones that end up being
used. That's still a policy decision to be made as to
exactly how those numbers should be used.
In one case, as we're talking about
option two for 50.46, we're talking about
still maintaining the functionality of the equipment.
Here we're talking about the capability of actually
physically removing the equipment or taking away its
capability to operate. It's a whole additional level
of change to the plant that you get under option
MEMBER APOSTOLAKIS: Yes. These are the
policy issues. I'm talking about the technical basis.
MR. KELLY: Right.
MEMBER BONACA: It seems to me this goes
beyond that. For example, now that you're making a change
that is based on risk information, you have a need on
the part of the -- to have a commitment to
configuration control in the PRA itself. As you make
changes that you made on the basis of a PRA product of
a PRA model, you need to verify that as you make
changes in the plant and you go forth you are not
violating those commitments of information that you
submitted there.
To me, that would say also that you had a
commitment to the PRA -- a PRA that you have to have. It's
something that you maintain and use and you have a
verification process. You have clear flags that say
you make a change. You're bumping into something you
committed to in order to meet these requirements. So there
are specific needs I think from a risk informed stand
point that need to be defined.
MEMBER ROSEN: Mario, your point is a good
one. In the experience I have, and the staff has, in
option two with South Texas, the question of PRA
configuration management was dealt with explicitly
in the license. We had to keep the PRA up
to date because there had been a license exemption
granted. It's exactly right. Now what you do with
the PRA is going to have a much broader application in
the plant than it did before. That becomes part of
the licensing basis. That's part of the bargain that
a utility who gets some relief will have to undertake.
MEMBER BONACA: That was the intent of
50.59 with the deterministic analysis. Now there
isn't anything to make people stay in there for the PRA. The
fact is that you're right. We have to do that. I
think at some point the standards we expect -- I mean,
I'm looking here at the standard for PRA, the ASME
standard that just came out.
There are still the definitions, the
capability category one, two and three. When are we
going to stick out our neck and say that to do such an
application of this nature you need to have a
capability three, for example? I think there is a
need for some clarification there rather than simply
leaving it as an option.
I also see reg guide 1.174 from that
perspective as a historical document. It attempted to
promote the use of risk information in an environment where
not everybody had the PRAs. Does it mean that we're
now going to support a system where ten years from now
everybody uses risk information that doesn't have
strict --
MEMBER APOSTOLAKIS: Well, the accurate
description is that this is a level one PRA informed
regulation, internal events only, partial.
MEMBER APOSTOLAKIS: I don't see why it
should be that way.
MEMBER BONACA: No. That's right.
MEMBER APOSTOLAKIS: I mean, a million
dollars considering the benefits here is really not
that much.
CHAIRMAN SHACK: We make approximations
all the time. You can do your Appendix K or you can
do a best estimate. Now that we have best estimate
capabilities, should we forbid people to use Appendix
K? The purpose here is not to advance the technology
but to assure public health and safety. Is it good
enough? I think that goes back to Mary's question.
Perhaps you add conservatism. You're allowed to do
more depending on the information that you have. Even
if you make it a full-scope PRA, then we'll argue
about how good the uncertainty analysis is, how good the
models are. It's never-ending. You're always going
to have to make judgements about how to handle that.
MEMBER APOSTOLAKIS: What you are saying
now is that because there is no limit to perfection
let's do a mediocre job. If I
want to argue about it, you're saying I would talk
about the models. I would talk about uncertainty.
There's no end. This is a standard argument.
If you ask for something more, they say do you think,
Dr. Apostolakis, there is an end to perfection. I've
been asked that question. I had to say no. So, leave
us alone.
MEMBER BONACA: Furthermore, I think the
issue of configuration management of the PRA once you
have commitment based on the PRA, it's an essential
step. Once a utility goes to that step, to that level
of commitment typically it has already decided all
this stays behind. They already have a solid PRA with
a level two. I'm saying some elements, for example
configuration management, are a requirement in my judgement
once you make the commitments based on what you have
in that model.
MEMBER KRESS: I presume the guidelines
somewhere along the line will give the plant specific
analyses the option of actually seeing
if they meet the -- safety goal as opposed to this
LERF value.
MR. CUNNINGHAM: Going back to historical
documents, that option is in 1.174.
MEMBER KRESS: That option is usually in
there. I don't know if it's in 1.174 or not. I
presume it will be retained.
MR. CUNNINGHAM: We're getting ahead of
where we are in the process. We had some boundary
conditions to define for the technical work we're
doing. As we talk, you've sensitized us to one of the
boundary conditions of whether or not we should be
staying with what's in 1.174, should we be given the
precedents of the last few years, or should we be
thinking differently about that.
MEMBER APOSTOLAKIS: Another way of doing
it though if you want to think in terms of
approximations is has anyone taken a full-scope level
three PRA and worked backwards. As Steve said, we have
those. There are some plants that do have those. Say
if this plant can submit only a level one PRA, what
would have been missed? You see because I have now
the complete PRA and I start comparing. That's how
you determine approximations, by having a more
complete tool and working backwards.
If I take the South Texas PRA for example
and I say I'm going to use now for that plant only
level one, am I missing something? Those guys have
looked at the more complete picture. What is it that
I'm missing? Then come back here and say here is a
list that you might be missing or it's perfectly all
right. Then I think we'll be well on our way to
saying something. Now it's an article of faith.
MEMBER KRESS: You can't do that to a
reactor because every one of them is plant specific
and site specific.
MEMBER APOSTOLAKIS: So by not doing it at
all, that's better.
MEMBER KRESS: You have to do it for every
plant if you're going to get the full --
will have an idea of what's important. Because it's
plant specific then I shouldn't even look into it?
CHAIRMAN SHACK: You have to have that,
George, to have any understanding of how to handle
those. That's true.
MEMBER APOSTOLAKIS: Exactly. That's what
I want to see.
CHAIRMAN SHACK: The question is do you
have enough of that experience now to be able to make
especially when it comes to --
CHAIRMAN SHACK: Well, it needs to be --
MEMBER APOSTOLAKIS: If we didn't have the
experience of 1.174, I would go along with this. But
I don't think 1.174 is implemented -- so that's why I'm
raising this issue. I don't think it is. I mean,
there's a beautiful discussion on model uncertainty in
the Appendix which I think only I and -- So, at some
point you say enough. Anyway, I have problems with this.
Why can't someone take a complete PRA and
see what insights we can learn -- given that this is site
specific, I agree. What are we learning from that if
I used only the level one part? I think South Texas
has one. I think Seabrook has one.
MEMBER ROSEN: South Texas does not have
a level three. It has a level two.
MEMBER APOSTOLAKIS: No. But level two is
good enough for our purposes.
MEMBER ROSEN: For the exercise you want,
we could take the South Texas PRA and
tell you the differences in results from level two.
MEMBER APOSTOLAKIS: I know there are
three or four of those.
MEMBER ROSEN: There are several at level
three. Millstone. They're typically at sites with
populations higher than some level.
MEMBER APOSTOLAKIS: Because then we would
also be addressing a little bit your concern, Tom.
Maybe there will be from South Texas we will learn
this, but look Diablo says something entirely
different. Then I'd like to know that too. I think
it shouldn't be a big deal to do that.
The question would be under what
conditions is a level one internal events only PRA
good enough for these kinds of regulatory
applications. That would be great.
MEMBER ROSEN: But that would be research.
MEMBER APOSTOLAKIS: And all three of them --
MEMBER ROSEN: But why would the licensees
do that?
Then from those insights we replace the second bullet
with something more specific.
MEMBER APOSTOLAKIS: Instead of saying
address them, we say this is for example how you
should address them.
MR. CUNNINGHAM: We'll look into that at
this point to see what we can do.
MEMBER APOSTOLAKIS: That's the greatest --
MR. CUNNINGHAM: Thank you.
MR. GRIMES: Dr. Apostolakis, this is
Chris Grimes newly installed as the Program Director
for policy and rule making.
MR. GRIMES: Thank you. Save the
condolences for when I need them. I would like to
point out, I think much of what you're exploring was
some of the thinking that went into developing reg
guide 1.174. Our expectation is that we're looking,
as Mark and our colleagues have described, at work that's
being developed in order to define a voluntary rule
that is going to be an alternative to the deterministic
traditional engineering practice for the current licensing basis.
As Dr. Bonaca has pointed out, there's a
certain expectation that in order to be able to be
risk informed and performance-based and maintain the
licensing basis, we are supposed to be offering up a
rule change that will seek public comment on how well
we've been able to articulate not a standard of
excellence for maintaining the licensing basis but the
necessary and sufficient requirements in order to be
able to adopt this voluntary alternative.
MR. GRIMES: Necessary and sufficient.
MR. GRIMES: Yes. That's the regulatory
standard that we build our rules upon. It has to
provide reasonable assurance of public health and
safety, but at the same time be only that which is
necessary to protect public health and safety. We've
long argued about the philosophy of whether or not the
regulatory standards should creep into excellence over
time as knowledge is gained.
I think what's important to recognize here
is that the technical information needs to be able to
satisfy the largest population, trying to let the
tools be market driven. We ought to be able to say
that someone who has a level one probabilistic risk
analysis or probabilistic safety analysis can achieve
some benefit in its application provided that they can
address the uncertainties, the impacts of not having
level two or level three. They might not get enough
to justify the cost, but they should be able to
understand what the threshold is.
Someone who has a level three PRA and
implements it and maintains it and makes it part of
the licensing basis should get a demonstrably larger
benefit or reduced burden. I think that the challenge
that we face in rule making space is being able to
show how the threshold is going to be applied in a way
that is clearly articulable and understandable to
the public and also demonstrable to the industry in
terms of if they spend more how much more do they get.
That's the way that our performance as
regulators will be measured, our ability to articulate
rules that have demonstrable benefits. At this
point, I'm not in a position to say that I agree or I
disagree that the promulgation, the perpetuation of
the reg guide 1.174 approach is the right way to do
it. Certainly it was a starting point. I agree with
Mark. We should be prepared to come back when we
present a rule and say how we would address different
ways to approach your question.
MEMBER APOSTOLAKIS: Well, what you said
is it's certainly a consideration. In an integrated
decision making process, that's certainly a
consideration. You don't want to have a rule that
imposes such demands on the licensees that it's
impractical. I agree with that. The question is
where do you draw the line. All I'm suggesting here
is since this is the situation out there and most
people have a level one internal event PRA, we as the
regulator should understand how that information can
be used so that we don't make mistakes. That's all
I'm saying.
But I still think the regulations are
sufficient, not necessary. You are using
conservatisms. You say if you do this, it is good
enough. That's sufficient. If you do something else
-- So it's not necessary. I'm really disappointed by
the way 1.174 has been implemented. You mentioned
also the public. Let's not forget that one of the
goals of the Commission is to maintain and enhance
public confidence. The rigor of our methods is an
important consideration here.
MR. GRIMES: I agree.
MEMBER APOSTOLAKIS: I don't think we have
a disagreement. It's just I'm asking for this extra
step. You're going too slowly for me. I can't
believe how slow you are.
MR. KURITZKY: I can talk fast.
MEMBER APOSTOLAKIS: Mr. Chairman, are we
going to have a break at all?
MEMBER APOSTOLAKIS: No. It doesn't say --
MEMBER ROSEN: No break is shown on the --
MEMBER APOSTOLAKIS: No break is shown.
MEMBER ROSEN: 3:00, George.
MEMBER APOSTOLAKIS: Unless we go to 3:00.
CHAIRMAN SHACK: Let's get into the LOCA.
CHAIRMAN SHACK: Let's go on with that.
MR. KURITZKY: Okay. That's the
acceptance guidelines. The second technical area --
These last two will be a lot quicker. If there are
any questions on this, I'm going to push off to Rob's
presentation. You're going to get a more detailed
discussion on what's going on with the LOCA
redefinition and also some of the interim efforts for
LOCA frequency estimation from Rob Tregoning.
Right now, I just want to mention a couple
of the overview highlight items and some of the
background. For a risk informed alternative to GDC 35,
obviously we need some kind of LOCA frequency to plug
in. In doing so, we need to consider not just LOCA
initiating events but also transient induced or
consequential LOCAs, RCP seal LOCAs or stuck open valves.
We're looking at the -- picture. Therefore, we need
all forms of LOCAs, anything that may require an ECCS
response.
MR. KURITZKY: Exactly. Actually then
going on to the next bullet when we talk about LOCA
initiating events, it's not just pipe breaks but it's
also any other type of LOCA that can occur. They
would be CRDM leakage, a pump casing rupture, valve
failure, or steam failure, anything that can result in
a breach of the RCS boundary.
MEMBER APOSTOLAKIS: Let me understand
this. In the design basis, is that how LOCA is defined?
MR. KURITZKY: Well, for 50.46 right now
it's just pipe break LOCAs.
MR. KURITZKY: Just pipe break LOCAs. If
we're going to go to a risk informed approach, we need
to be risk informed which means all types of LOCAs.
deterministic rules?
MR. KURITZKY: You'll have to ask whoever
came up with the deterministic rules.
CHAIRMAN SHACK: It really is. It just
says that it limits the size of the LOCA to the size
of the largest pipe. That was intended to bound all
other LOCAs.
MR. KURITZKY: Yes. That's true.
MEMBER ROSEN: It wasn't intended to bound
all other LOCAs. Was it? What about the reactor --
CHAIRMAN SHACK: It was presuming that --
MEMBER ROSEN: So there is a risk limit
even in the existing --
CHAIRMAN SHACK: As I said, the pipe was
intended to bound all the LOCAs that were thought to
be credible.
MEMBER ROSEN: In other words, have a
frequency large enough to be considered.
MEMBER ROSEN: In other words, risk --
MEMBER BONACA: Larger breaks would imply
the failure of the vessel. That was considered low
enough probability that it would not --
MEMBER ROSEN: So I'm saying this much --
deterministic basis that we have is in fact risk --
MEMBER BONACA: That was the risk --
MEMBER ROSEN: It's just how far we've
gone. Now we're going further.
MEMBER ROSEN: We're still not going to
consider those a failure, except the heads, the CRDM
hazards. Be careful with this because there are some
logical inconsistencies that you need to avoid.
CHAIRMAN SHACK: The CRDM failure is less
than the size of a pipe.
MEMBER WALLIS: As long as it's just that --
MEMBER ROSEN: It doesn't spread and
involve more than one CRDM.
MR. CUNNINGHAM: I'll remind the Committee
that separately we've been talking to you about
pressurized thermal shock as a mechanism for big
failures of the reactor vessel. Part of that is
what's the frequency of these types of challenges.
MEMBER ROSEN: I think that the more
rational approach doesn't limit it. It just says
anything can happen. It's just a matter of the frequency.
CHAIRMAN SHACK: Then you imagine what the
frequencies are.
MEMBER WALLIS: You imagine what the
frequencies are?
CHAIRMAN SHACK: When you start dealing
with frequencies that are so low, it's --
PARTICIPANT: Imagination.
CHAIRMAN SHACK: Very difficult.
imagination. You do have an idea as to how high they
can be on a technical basis. I mean, it's not ten to
the minus 2 per year. Right? Now, what the shape of
the distribution is below ten to the minus four,
that's speculative. It's not that we know nothing.
In fact, this "understood" there I don't like. I
would say are not well known or something like that,
not "understood." We do know a lot. You don't have
a guillotine break every hundred years. Right? Not
even every thousand years.
MR. CUNNINGHAM: Not so far.
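The point being made here, that operating experience alone bounds how high these frequencies can be, is the standard "rule of three" bound on a rare-event rate. A minimal illustrative sketch follows; the 10,000 reactor-year exposure is an assumed round number for illustration, not a figure from the discussion.

```python
import math

def poisson_upper_bound(events: int, exposure_years: float,
                        confidence: float = 0.95) -> float:
    """Upper confidence bound (per year) on a rare-event frequency given
    `events` observed over `exposure_years`.  With zero events this is
    the familiar 'rule of three': -ln(1 - confidence) / exposure."""
    if events == 0:
        return -math.log(1.0 - confidence) / exposure_years
    raise NotImplementedError("chi-square quantiles needed for events > 0")

# Illustrative only: zero guillotine breaks in an assumed 10,000
# reactor-years of experience bounds the frequency near 3e-4 per year.
bound = poisson_upper_bound(0, 10_000.0)
print(f"95% upper bound: {bound:.1e} per reactor-year")
```

The shape of the distribution below that bound remains speculative, which is exactly the point made above: the data constrain how high the frequency can be, not how low.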
probabilistic fracture mechanics, then --
MR. CUNNINGHAM: I'm going to see that
this afternoon.
MR. KURITZKY: All right. The cause and
frequencies of transient induced LOCAs and very small
LOCA initiating events are relatively well
understood or known depending on your perspective.
However, the bigger concern is with some of the larger
breaks. Even the large breaks, maybe even what's typically
called small, we don't have quite as good a grasp on those.
What PRAs have typically used as the sources of
those LOCA frequencies has been WASH-1400 or
NUREG-1150 type numbers, which were principally based
on older oil and gas pipeline data and are not
necessarily directly applicable to nuclear power
plants, as well as being older.
NUREG/CR-5750, which was a report updating
initiating event frequencies of all types for PRA, came
out in the mid to late '90s I think. It was based on
more recent actual nuclear power plant operating
experience. However, some technical issues have been
raised regarding the estimation of larger LOCA
frequencies in that report.
Details I think have been presented to the
ACRS at a previous meeting maybe a year or so ago.
Rob when he gives his talk may talk a little more
about some of that. The bottom line is that we have
no clear consensus LOCA frequencies to use in the PRAs
for this application right now. We are working on a
three pronged effort to try and come up with LOCA frequencies.
Again, Rob is going to go into more detail
on each of these. I just throw them up so you can see
what the three different prongs are. Short term, in
house elicitation to come up with some place holder
LOCA frequencies has already taken place. Those
frequencies are just for our own internal use in doing
some calculations under this piece of work for the
generic approach so that we can crunch some numbers.
Simultaneously or in parallel, there is an
effort to put together a formal expert elicitation
which will include --
opinion elicitation.
MR. KURITZKY: Expert opinion elicitation.
The time frame for that should dove-tail nicely with
the rule making for this effort so that we will have
the benefit of those values if we go to rule making on
a risk informed GDC 35. The third prong is the longer
term effort to redefine the LOCA spectrum of size and
breaks to be used for 50.46. That's I think a couple
of years away.
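A formal expert elicitation ultimately has to pool the individual estimates into a single value. As a rough illustration of one common pooling rule, here is an equal-weight geometric mean of hypothetical expert medians; the values and the equal weighting are assumptions for illustration, not the staff's elicitation procedure.

```python
import math

def geometric_mean(estimates):
    """Equal-weight geometric mean, a common way to pool multiplicative
    quantities such as rare-event frequencies across experts."""
    logs = [math.log(x) for x in estimates]
    return math.exp(sum(logs) / len(logs))

# Hypothetical expert medians for a break frequency (per reactor-year);
# these numbers are made up for illustration only.
experts = [1e-5, 3e-6, 1e-6, 5e-6]
print(f"pooled median: {geometric_mean(experts):.2e} per reactor-year")
```

A real elicitation would also carry each expert's uncertainty range through the pooling, not just the medians.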
MEMBER APOSTOLAKIS: The reactor safety
study also did some expert opinion stuff. Right? It
was not formal, but basically that's what it was.
Right? The ten to the minus four that they had there.
MR. KURITZKY: I guess a lot of it was
also based on actual data --
MEMBER APOSTOLAKIS: But as you say they
were -- they are not applicable.
MR. CUNNINGHAM: The range of expert
opinions that ended up being used was probably much
more limited.
MEMBER APOSTOLAKIS: It's more limited.
That's correct. I mean, this panel that came up with
the assessed range and all that was a combination of
in house and external expert opinion. But it was much
more limited. That's true.
MR. KURITZKY: Okay. So in any case as I
mentioned, Rob will talk following my talk and give
much more details on the efforts we've done on LOCA
frequencies. The third technical area that we've been
addressing as part of this effort is the conditional
probability loss of off-site power following a LOCA.
Again, in PRAs what's typically done or probably
across the board is that the probability of a
conditional LOOP after a reactor trip or a LOCA is
assumed to be an independent event. For the probability
of a LOOP after a reactor trip or a LOCA, it
essentially takes the frequency of the loss of off-site
power initiating event and divides it by 365 for a 24
hour mission time type of thing.
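The independence assumption being described can be written out explicitly. A minimal sketch, using a placeholder LOOP frequency rather than a value from the meeting:

```python
def conditional_loop_independent(loop_freq_per_year: float,
                                 mission_time_hours: float = 24.0) -> float:
    """Probability of a LOOP within the mission time when LOOP occurrence
    is treated as independent of the LOCA or reactor trip, i.e. the
    annual LOOP frequency divided by 365 for a 24-hour window."""
    return loop_freq_per_year * (mission_time_hours / 24.0) / 365.0

# Placeholder frequency (an assumption, not a value from the meeting):
# 0.04 LOOP initiating events per reactor-year.
p = conditional_loop_independent(0.04)
print(f"P(LOOP within 24 h | LOCA), independence assumed: {p:.1e}")
```

The GI-171 finding discussed next is precisely that this independence assumption understates the conditional probability, because the trip and the ECCS loads themselves stress the off-site power supply.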
However, more recent analysis that was
done in support of generic issue 171 on delayed LOOP
identified that there is a dependency between the
probability of having a loss of off-site power after
there is a reactor trip or a LOCA. In fact, there was
identified a dependency between a conditional LOOP
after a reactor trip and then even more of a
dependency given a LOCA because of the additional
loads that are going to be thrown onto the safety
buses. The ECCS loads can result in dropping the
voltage at the buses down below the undervoltage, the
degraded voltage relay set points. Unfortunately,
there's extremely limited data on these conditional
losses of off-site power.
MEMBER APOSTOLAKIS: Let me understand
this. There is a dependency.
MEMBER APOSTOLAKIS: How strong is that?
MR. KURITZKY: Well, how strong it is is
undetermined right now.
MR. KURITZKY: That's what we're trying to
work on.
question is --
MEMBER ROSEN: Wait a second. You asked
a good question. Is it one?
MEMBER ROSEN: Oh. But that's what the
regulations say.
MR. KURITZKY: Yes. That's correct.
MEMBER ROSEN: So now we know it's not
nothing or not so low that you can't see it. It has
some value.
MR. KURITZKY: We certainly know it's
between zero and one.
MEMBER ROSEN: Yes. We've got it in our --
MEMBER APOSTOLAKIS: But is that the only
thing that's of interest here? Isn't the recovery of
off-site power also relevant here?
MR. KURITZKY: Recovery of off-site power,
it depends. When we're talking about large break
LOCA, we're talking about things happening pretty quickly.
MEMBER APOSTOLAKIS: So you're saying the
time scale of interest is not recovery.
MEMBER ROSEN: What does 6538 say about
site specificity of that finding?
MR. KURITZKY: I don't know exactly what
the details are in 6538 in regard to that. I do know
that it identified the engineering
reasons why you have this dependency. As part of our
work now to try to come up with the conditional
probability of loss of off-site power after a LOCA, we
have looked into some of these site specific or plant
specific features that could drive that probability.
Obviously the best way to come up with the
probability is with data. That would be the method of
choice. We have very limited data on that. There's
very few instances of a LOCA or a major ECCS actuation
out there.
MEMBER ROSEN: I made my point for a
reason. When you talk about data for pipe breaks,
you're talking about data that could be accumulated
someplace. As long as it's the same material and the
same welding, you can apply the data fairly broadly.
When you're talking about this, you need to understand
that this is much more site specific.
MEMBER ROSEN: Data that you get at one
site which has a given configuration and redundancy
and reliability of the off-site grid may not apply at
all some place else.
MR. KURITZKY: That's correct. We agree
to that. One of our initial efforts was to try and
come up with some generic probabilities of conditional
LOOP but trying to subdivide it based on some of these
plant specific factors that could drive that
conditional probability. Unfortunately, we weren't
that successful in that effort.
In any case just going back from the
beginning, we saw we had very little data. Regardless
of the plant specificity of that data, there just
wasn't enough data in any case to do much with. In
fact, I think the total number of major ECCS
actuations in our database is something like 14. We
actually have one conditional loss of off-site power.
Again, we're dealing with a very small data sample.
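With data as sparse as one conditional LOOP in fourteen major ECCS actuations, a Bayesian update with a Jeffreys prior is one generic way PRA practitioners extract a usable point estimate. This is an illustrative sketch of that generic technique, not necessarily the method in the staff's deliverable.

```python
def jeffreys_estimate(failures: int, demands: int) -> float:
    """Posterior mean failure probability under a Jeffreys Beta(0.5, 0.5)
    prior: (x + 0.5) / (n + 1).  Gives a usable point estimate even from
    data as sparse as one event in fourteen demands."""
    return (failures + 0.5) / (demands + 1.0)

# Figures from the discussion: 1 conditional LOOP in 14 major ECCS actuations.
print(f"Conditional LOOP probability estimate: {jeffreys_estimate(1, 14):.3f}")
# prints 0.100
```

With so few demands the uncertainty band around such an estimate is very wide, which is why the discussion turns next to a plant-specific, voltage-data-driven method instead.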
One of the things that we did, we
undertook as part of the technical work for the plant
specific option was to come up with a plant specific
method for estimating the conditional probability of
loss of off-site power given a LOCA. That is included
in the deliverable that research passed on to the rule
making people at the end of April. That method is
something that could be included in a
regulatory guide. However, we want to note that it's
also a data-driven method too.
It doesn't need actual data on the number
of conditional losses of off-site power after reactor
trips. It does need data on the voltage levels and
the switchyards in the various plants. That's
information that may not be all that available. I
think given the current trend in the industry that
data is going to become more and more available as
time goes on. As far as having it archived or having
a sufficient database to do some calculations, it
still may be a limiting factor.
MEMBER LEITCH: There are several kinds of
losses of off-site power too. I mean, you could
postulate the collapse of the grid in which case
you're probably hours away from getting the electric
power back. Maybe some of the dependency between LOOP
and LOCA is more driven by voltage transients or
operator error or a breaker just mistakenly opened.
All you have to do to re-establish off-
site power is just reclose the breaker. You still
have the grid out there. Depending on what kind of a
situation you're dealing with, there's a wide range
and time to restore off-site power. I think a lot of
the dependency to me just thinking about it a little
bit seems to be -- I don't see the relationship
between LOCA and loss of the grid. I do see a
relationship between LOCA and perhaps false or
misoperation leading to opening of the breaker where
you can just reclose the breaker and get it back.
MEMBER BONACA: Although, the burden is on
the generators then to support immediate ECCS --
MEMBER BONACA: So even I agree that you
may recover power quickly.
MEMBER LEITCH: Within ten seconds, no.
MEMBER BONACA: Not in ten seconds.
MEMBER ROSEN: Typically what happens is
the diesels pick up and then the grid comes back, but
the licensees don't switch back to the grid right
away. The grid has been unstable and the diesels are not. So
the operators say let's leave well enough alone.
We're fine. The emergency buses are powered. We'll
wait until whatever happens out there goes away and
it's been gone for some time before we try to re-
energize the buses from off-site power.
MEMBER LEITCH: That's true in the case of
loss of grid. What I'm saying is if it's just been a
misoperation of a breaker out in your own switchyard
and the grid is still there, you can close it and go
back to normal.
MR. CUNNINGHAM: As we've been alluding to
though, in the context here we're talking about the
actuation of safety equipment in the seconds and few
minutes after a large break LOCA.
MR. CUNNINGHAM: So again the timing is
maybe quite different for some of the things you've
talked about.
MEMBER LEITCH: Absolutely.
MEMBER WALLIS: Why is it always a LOOP
following a LOCA? Conceivably, you could lose off-
site power and then the plant could somehow mishandle
the transient.
MR. KURITZKY: Actually that was
addressed. NUREG/CR-6538 looked at both LOCA/LOOP and --
MEMBER WALLIS: It always seems to be
looked at one way.
MEMBER WALLIS: I think from a frequency
point of view this is the more --
MEMBER ROSEN: I think we have quite a lot
of data that says LOOPs don't cause loss of coolant
accidents in general.
MEMBER WALLIS: They would probably be of
a stuck open valve type.
MEMBER WALLIS: There would be somehow a
transient that leads to a loss of integrity of the
circuit and probably a stuck open valve.
MEMBER ROSEN: Fortunately, we don't have
a lot of data on that also.
MR. KURITZKY: Dr. Wallis, NUREG/CR-6538
did postulate a few ways that could occur. I don't
know exactly what the resolutions on that were. Just to
get back to what Dr. Leitch said, the issue that we're
looking at right here, it's not the only issue but the
primary thing that we're looking at as far as the
dependency of the LOOP with the LOCA is a scenario
where you have -- Actually the grid is still available
out in the yard. All the homes in the neighborhood
still have their lights on.
What happens is the grid is in somewhat
of a degraded condition but not to an alarm
condition. Then you have the reactor trip which
further degrades the grid. You may lose voltage
support. Then if you have a LOCA right there and
you're transferring all your loads from the unit
auxiliary transformer onto the start-up transformer,
that reserve station transformer further degrades the
grid locally. Then you start your ECCS loads. What
happens there is you can bring down the voltage to the
point where you hit those trip set points.
There's not a lot of margin. Those trip
set points have been raised fairly high because they
want to protect the equipment. That's the whole
purpose. Those are in a tough position because you
need to worry about both sides. You can't just always
set them high or always set them low because you have
competing things. So it's a tight fit in there.
The real possibility exists that you'll
drop down to that level and separate the plant. Then
the diesels will come on as desired. But that's a
situation we're trying to avoid. What we're trying to
calculate is the probability of not needing to have
those diesels come on.
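The degraded-voltage scenario Mr. Kuritzky describes can be sketched as a small Monte Carlo calculation. All voltages, setpoints, and distributions below are hypothetical illustrations, not plant data:

```python
import random

def diesel_demand_probability(n_trials=100_000, seed=0):
    """Estimate the probability that block-loading ECCS pumps drags the
    safety-bus voltage below the degraded-voltage trip setpoint, which
    would separate the plant from the grid and demand the diesels.
    Every number here is a hypothetical illustration, not plant data."""
    rng = random.Random(seed)
    setpoint = 0.90  # per-unit degraded-voltage trip setpoint (hypothetical)
    demanded = 0
    for _ in range(n_trials):
        # Grid voltage before the event: nominally healthy, sometimes degraded.
        v_grid = rng.gauss(0.98, 0.02)
        # Combined sag from the reactor trip and transferring ECCS loads
        # onto the reserve station transformer (hypothetical distribution).
        sag = rng.gauss(0.05, 0.02)
        if v_grid - sag < setpoint:
            demanded += 1
    return demanded / n_trials

p = diesel_demand_probability()
```

The point of the sketch is only that the margin between the degraded grid voltage and the trip setpoint, not the setpoint alone, drives the answer, which is why the setpoints being "a tight fit" matters.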
In any case like I said, we have a plant
specific method that we've come up with. I don't know
exactly how practical it will be. We're still looking
at it. Simultaneously, we've been working with
industry. We've been having a series of meetings on
this topic, particularly focused on the LOCA/LOOP area.
We've been meeting with them about every month or two
for quite a number of months or a year.
We have been focusing a little bit more in
detail on the conditional LOOP probability. Industry
has done expert elicitation for the probability of
LOOP after a LOCA. They have supplied that to us. We
had one public meeting to discuss it just a short
while ago. We have another one scheduled for later in
June. The staff right now is reviewing that report.
We have some questions, comments or concerns.
We're going to have a more detailed
meeting with stakeholders later in June. We
ultimately may accept it or adopt it into some method
that we have. We're not exactly sure how that's going
to play out yet. We are taking active measures to try
to come to some kind of resolution on that conditional
LOOP probability.
MEMBER ROSEN: When you say in the first
bullet that the plant specific method is in Appendix
D to the RES report, what report is that?
MR. KURITZKY: That's the deliverable --
MR. KURITZKY: No. This is a package that
you -- That's the report that went from research over
to --
MR. CUNNINGHAM: It's page D10.
MS. DROUIN: It's the April report.
The conditional LOOP probability will probably have
significant uncertainties because it's really expert
judgement. How are you going to handle that? You
said in an earlier slide you had the words "with
appropriate consideration of uncertainties." Now when
the uncertainties are fairly large and you're about to
make a decision of such an importance, do we know how
to handle those? Are you still going with the mean
value and you say I have -- attention?
MR. CUNNINGHAM: We're not there yet.
MR. KURITZKY: Okay. Now the last slide
I have is just to go over -- Even though like I said
up to now we've been working on --
MEMBER APOSTOLAKIS: The last thought you
MR. KURITZKY: The last slide.
MR. KURITZKY: We've been discussing our
plant specific approach. That was our first milestone
of trying to generate some kind of -- looking at a
plant specific approach. We're still fine tuning some
things obviously. However, we've also started
embarking on looking at the generic approach. That's
also going to be covered in our deliverable in July.
Many if not all the issues and areas that
were brought up by the various members of the
committee and other people are valid for the generic
option as well as the plant specific. There are two
additional items that particularly pertain to the
generic and make it a little bit more complicated.
The technical work, besides the other areas
that we've already discussed, includes formulating plant
groups based on ECCS configurations and support system
configurations and trying to bin them appropriately
and keeping the number of groups manageable,
performing reliability or risk calculations for a
representative plant of each of those groups.
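The grouping step Mr. Kuritzky outlines can be illustrated with a toy sketch; the plant names and configuration labels below are invented:

```python
from collections import defaultdict

# Hypothetical sketch of binning plants into generic groups by ECCS and
# support-system configuration. Names and configuration labels are invented.
plants = {
    "Plant A": ("2xHPSI", "2xLPSI", "4kV-2div"),
    "Plant B": ("2xHPSI", "2xLPSI", "4kV-2div"),
    "Plant C": ("3xHPSI", "2xLPSI", "4kV-3div"),
    "Plant D": ("2xHPSI", "2xLPSI", "4kV-2div"),
}

# Bin plants that share a configuration into the same generic group.
groups = defaultdict(list)
for plant, config in plants.items():
    groups[config].append(plant)

# One representative plant per group carries the reliability calculation.
representatives = {cfg: members[0] for cfg, members in groups.items()}
```

The tension the transcript returns to is visible even here: the representative's failure data stand in for every member of its bin, so the binning criteria decide how much plant-to-plant variation gets averaged away.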
MEMBER APOSTOLAKIS: So there you will
have the issue of what kind of distribution is used
for the failure rates given LOCA conditions. Right?
MR. KURITZKY: We're going to have a lot
of issues like that. When we're looking at it from a
generic point of view, we have that issue. First, as
I mentioned before, we do these calculations to
represent a plant to try to come up with a minimum set
of ECCS equipment to meet the guidelines and also to
look at whether or not the LOCA/LOOP assumption is
risk significant on a generic basis.
MEMBER APOSTOLAKIS: But if you're using
frequencies to guide you in this selection, what
numbers you use for the failure rates would be very
important. Right?
MR. KURITZKY: You're talking about the
basing of that input data for the PRA?
MEMBER APOSTOLAKIS: You say what kind of
equipment. Right? Then the minimum number. All that
has to be guided by numbers that are themselves
MS. DROUIN: That's correct.
MR. KURITZKY: Not only that, but they're
going to differ. Obviously when we look at a group of
plants, Group A has six different plants in there,
they may have a similar configuration, but plant to plant
they may have totally different failure rates for the
different --
MEMBER APOSTOLAKIS: Of course. The other
thing is can you really use the existing distributions
for failure rates that we're using in PRAs now? Here
you have a specific event that has happened, assumed
to have happened, a large LOCA. Right? Most of these
distributions, especially now that we're doing this
Bayesian updating routinely, are based really on normal
routine tests that do not have any large LOCAs
anywhere. So I'm not sure that those distributions
are applicable.
MR. KURITZKY: This is not just going to
be looking at that. When we do these calculations,
this is going to be full plant PRA model calculations.
This is not just looking at large LOCAs; we're
regenerating the full PRA results.
MEMBER APOSTOLAKIS: Yes. But don't you
assume, I mean --
MR. KURITZKY: But the testing regimens
from which the data is derived are intended to be
appropriate to the function of the system, so that a
valve that is tested is tested against the delta P and --
MEMBER APOSTOLAKIS: The applicability of the
distributions has to be scrutinized. You're saying that's fine.
MR. KURITZKY: I'm saying it's likely to
be okay because if it weren't then the tests wouldn't
have been right.
MEMBER APOSTOLAKIS: But still, those tests
are not under large LOCA conditions. Anyway.
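Dr. Apostolakis's point about the failure-rate distributions can be made concrete with the conjugate gamma-Poisson update commonly used in PRA data analysis; the prior and the test evidence below are invented numbers:

```python
# Conjugate gamma-Poisson update of a component failure rate (per hour).
# The sketch illustrates the point under discussion: the posterior is
# dominated by routine surveillance-test evidence gathered under normal,
# non-LOCA conditions. All numbers are invented for illustration.

prior_alpha, prior_beta = 0.5, 1.0e4   # gamma prior: mean 5e-5 per hour
failures, exposure_hours = 2, 4.0e5    # routine surveillance test evidence

# Standard conjugate update: alpha' = alpha + failures, beta' = beta + exposure.
post_alpha = prior_alpha + failures
post_beta = prior_beta + exposure_hours
posterior_mean = post_alpha / post_beta

# The update says nothing about performance under large-LOCA loads;
# the applicability of the test regimen must be argued separately.
```

The posterior mean is pulled almost entirely toward the test evidence (the exposure term swamps the prior's beta), which is exactly why the applicability of that evidence to LOCA conditions has to be scrutinized.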
MR. CUNNINGHAM: There are a number of
challenges for this generic approach.
MEMBER APOSTOLAKIS: Do the operators come
into this at all or is the time getting so short?
MR. CUNNINGHAM: This is ECCS reliability
across a spectrum of initiators, not just the large break.
MEMBER APOSTOLAKIS: Okay. So they will
MR. CUNNINGHAM: Yes. Which again brings
a challenge to being in the plants if you will.
MR. KURITZKY: And also the big issue is
the PRA scope and quality as we've been discussing at
length previously. Of course as identified, that's
something of concern for when we talk about the plant
specific application. You may have a known
application that you can get your hands around. You
can try to see what it impacts. If there's areas
outside the scope, you can try and get a handle on maybe
how important they are.
In the generic approach, we're not going
to have that benefit. We're not going to know the
applications a priori. We're going to have to just
do an across the board type of determination of what
equipment is necessary based on our calculations
without knowing how it's going to be applied. The out
of scope items are going to be extremely difficult to
address. You can't use some of these qualitative
or application specific arguments to cut
pieces of the pie away. So I think this is one of the
biggest potential limitations to the generic approach.
It's the whole PRA scope and quality
issue. For quality, right now we're looking at the
models in house, which are SPAR models. The most
current versions, which have the more complete set of
initiating events, have not been QA'd yet. So there's
a decision that needs to be made about whether or not
these SPAR models are usable for a regulatory purpose.
I think there is a plan to have those
things QA'd by the end of next year which would work
out I guess okay in terms of time for our rule making.
But still, just the issue of the completeness and the
data used and the scope of these models, because
they're of course all just internal events at power,
raises some daunting challenges to doing this on a
generic basis. Theoretically we can go ahead and do
this on a generic basis, and at least for the next
month or two we're going to get a little more
information on that as we try to do some sample cases.
MEMBER APOSTOLAKIS: We've been saying for
a long time as an Agency and I think Chris Grimes
repeated it earlier that if you have a detailed
analysis plant specific full scope and everything, you
should have some benefits. Right? Versus the guy who
just comes in there with a very modest analysis. This
is a generic approach. In principle, a guy who does
a plant specific analysis should have more benefits.
Right? So where is that hidden? Where can I have
more benefits?
MR. KURITZKY: Well, I hope it's not
hidden. The fact that you even asked the question
means maybe it is. The idea is that if we can come up
with a set of minimum ECCS equipment and/or design
basis analysis, really focused on LOCA/LOOP, for generic
groups, then a plant can go through and pick that
themselves and not have to do an analysis, and an NRC
review is not required. That would probably be --
in a regulatory fashion.
However, if they're the limiting plant in
that group, that may very well be the limit of what
they can give in a plant specific analysis. If in
fact they have a more detailed analysis and their
configuration or whatever else is better than the
limiting plant in that group, then a plant specific
analysis could get them a lot more.
MR. KURITZKY: Anyway, the last point I
want to make is again the equipment in excess of the
minimum would be candidates for design or operational
changes. The full extent of what would be allowed as
far as those changes is still a decision that has to
be made; whether that's allowing relaxation of tech
specs, or whether that's allowing equipment to be
taken out of the plant or to be no longer maintained,
et cetera. That's still an issue that has to be
resolved. Okay. That's it.
CHAIRMAN SHACK: I think we're through. If
there are no more questions, I think it's time for a --
MEMBER RANSOM: I have a question. What
is the benefit to the public safety and health of
making these changes?
MR. KURITZKY: Well, to some extent that's
on an application specific basis. Arguments have been made
that if you extend a diesel start time you'll end up with
a more reliable diesel. That's one outcome of this
change. Focusing attention of the plant operators in
training on more realistic accidents as opposed to
large break LOCAs with a set of unlikely --
MEMBER RANSOM: Well, if you go through
the full analysis, then you'd conclude that there is
a lower risk to the public as a result of these
changes. Is that right?
MR. KURITZKY: Well, again I would say
that with -- there's a lower risk in the sense that
it's just a regular -- Changes are allowable if the
risk increase is insignificant. So that's different than
saying there's a decrease in risk.
MEMBER APOSTOLAKIS: Well, the way we put
it when 1.174 was formulated was that the results are
a non-quantified part of the benefit and risk.
MS. DROUIN: That's right.
MEMBER APOSTOLAKIS: The overall result is
really a net reduction.
MS. DROUIN: You can't just take the part
that's for the LOCA. You have to look at the overall
effect, the overall plant operation and overall that
would be a decrease.
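The Regulatory Guide 1.174 logic being invoked here can be sketched as a simple region check. The numerical guidelines below are as commonly summarized from the guide; the guide itself, including its companion LERF test, is the authoritative statement:

```python
def rg1174_cdf_region(base_cdf, delta_cdf):
    """Rough sketch of the Regulatory Guide 1.174 CDF acceptance
    guidelines, per reactor-year, as commonly summarized. Consult the
    guide for the authoritative figure and the parallel LERF test."""
    if delta_cdf < 1e-6:
        # Very small increases are acceptable regardless of total CDF.
        return "very small increase"
    if delta_cdf < 1e-5 and base_cdf < 1e-4:
        # Small increases are acceptable if total CDF stays low.
        return "small increase; track total CDF"
    return "not normally accepted"
```

This is the sense in which "changes are allowable if the risk increase is insignificant" differs from claiming a net risk decrease: the test is on the size of the increase, not its sign.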
MEMBER RANSOM: Certainly from the
public's point of view, it would be much easier to
sell these changes or whatever if you're giving
something back. You take something away but you get
maybe more back.
MR. CUNNINGHAM: The expectation is that
this will happen. In some cases, it's not very
quantifiable in terms of how that's occurred or how
that would occur.
MEMBER WALLIS: I have a question. I
don't know how far along you are. I was impressed by
the range of your presentation and all the things you
covered. It seemed to me that it hadn't come together
to the point where you have the final product and
there's some way to go.
MR. KURITZKY: Yes. In fact, I think Glen
Kelly had mentioned that there's a working group that
has been put together. Now there's NRR and --
MEMBER WALLIS: But to go back to Mark
saying that he's going to convince us that you have
all the technical basis, it seems to me that you have
some way to go before you can say now we really
understand things enough that this is the basis for
our decisions.
MR. KURITZKY: Right. As you've heard
today, there's still issues.
MEMBER WALLIS: Is it another year or two?
How long is this?
MR. KURITZKY: We're hoping that by July
we have a much better handle on it.
MEMBER WALLIS: I'm wondering if that's
MR. KURITZKY: It depends on the time to
resolve some of the issues. I think we should have a
much better handle on whether or not it's a viable and
practical alternative by July.
MEMBER WALLIS: Is that realistic?
MR. CUNNINGHAM: The next step then is
what does this start to look like in real changes to
the words in the regulations. Then we'll come back to
the issue of have we given you a basis to do it.
MEMBER WALLIS: It's all we're looking into
this or we're doing work on this, but not much we've
concluded this.
MEMBER WALLIS: If you haven't done that
by now you are unlikely to do it in my view by July.
MR. CUNNINGHAM: July is a point along the
way. Beyond that, it's let's think about what the
rules look like. Then we can focus the technical work
a little better.
MS. DROUIN: We're starting to do the
calculations on the plant grouping to see if that's
going to work out. In terms of the technical work,
that's where we are right now.
MR. KURITZKY: For the plant specific, I
think we're a little further along. But as you heard
today, there are still open issues. To wrap this
up, just to let you know when we come back, Rob
Tregoning is going to talk about the technical work to
support the changes for ECCS spectrum of break sizes
and locations.
CHAIRMAN SHACK: Okay. Let's take a break
then until 11:00 a.m. Off the record.
(Whereupon, the foregoing matter went off
the record at 10:39 a.m. and went back on
the record at 11:00 a.m.)
CHAIRMAN SHACK: We're on the record.
MR. TREGONING: Now that you're
sufficiently warmed up and have had a break, I'm assuming
everyone's ready to jump into LOCA discussions. So
we're going to be talking about two things: both LOCA
frequency re-evaluation and also the
LB LOCA break size redefinition. I'm Rob Tregoning
from RES. Lee Abramson is not here, but he's
participated with me on this effort.
I just wanted to present the first slide.
It's a bit of an update of what we've been doing since
the prior ACRS briefs. You've heard about the LOCA
definition a number of times. Probably the last
substantive brief was given in March of last year.
During subsequent meetings of both the sub and the
main committee, there have been overview sorts of
information provided on LOCAs similar to what Alan
provided earlier today.
I'm really considering this as the first
one since March that you've really heard any gritty
details on. Since March, what has RES been doing in
this issue? First, we developed a technical position
paper documenting issues that need to be addressed for
LOCA re-evaluation. This paper was provided in the
packet that was sent to you prior to the meeting. All
of the issues involved in that paper have
been briefed to the ACRS before as well. I'm not going to
cover in my slides a lot of those issues. However,
they're certainly open for discussion here.
The other thing we did is we have some
very real and definitive goals that are outlined in
the SECY papers, first SECY-01-0133, which has been
superseded by SECY-02-0057, that we need to meet. Oh,
thanks. We have everything. We're really skirting
the outskirts of technology. You said we don't
push the boundaries of technology here. Well, we
have everything. We have a laser pointer. We have a
Power Point presentation and everything. The NRC does
push the boundaries of technology regardless of what
people might think.
MEMBER ROSEN: Our defense in depth is we
have an overhead projector and we still have fingers.
MR. TREGONING: In my lack of defense, I
didn't bring slides. I'm not practicing defense in
depth on this talk. You would not want to grant me a
license or anything for giving talks or anything like
that obviously.
So what have we done? We've formulated an
approach for realizing both the near term goals and
the long term goals which are outlined in these
papers. We've actually completed the near term
elicitation to develop what we're calling interim LOCA
frequencies but also to develop ideas and issues that
need to be probed more fully when we launch into this
formal or what I'm calling here this intermediate term
elicitation. I'm going to go into a bit of depth on
the near term elicitation; how it was structured, what
some of the outcome was, and even what the results were.
In terms of public interaction, we've had
essentially three formal meetings in August, October
and March 2002 which dealt specifically with LOCA
issues, not LOCA/LOOP or other issues related to the
50.46 revision. Specifically it was LOCA issues.
Next please.
I put the ending slide in the beginning
because of not knowing how far I'll be able to make it
through the presentation. I figured we'd hit the end-
points here. All you'll hear from this point on is
essentially filler or additional details. These first
two bullets, again I'm not going to touch on much
other than on this slide. Mainly the talk is going to
focus on the efforts that we have ongoing.
In terms of summary, we know that
historically LOCA estimates have been based primarily
or entirely on service history experience databases
and experience with pipe break failures,
not just in the nuclear industry but also in other
industries. In some cases, we've used other industry
information to provide bounding information. In some
cases as in the WASH-1400 studies, we actually used
other industry experience; specifically oil and
pipeline, gas transmission pipeline, military, even
some commercial power experience to provide estimates
that we've used in various studies.
However, the problem with service
history experience databases is that since the last study
was done there have been several potential LOCA
initiating events that have occurred that have been
very high profile. Certainly these include VC Summer,
Oconee and most recently Davis Besse. These were
events that people had never really considered as
being plausible LOCA initiators in the past. The case
of Davis Besse as you discussed earlier had not really
even been considered at all. Like you had said, the
initial rule was deterministically based, and Davis
Besse type events, things that could happen in the
reactor, were essentially a risk that people were
willing to live with at that time.
MR. BOEHNERT: A question. What about the
two recent events overseas with the hydrogen --
MR. TREGONING: In terms of Hamaoka and
Brunsbüttel?
MR. TREGONING: I didn't list those
specifically, but there's another case where we've had
pipe failures that have occurred very dramatically
without any precursor events. Those types of events
definitely need to be considered also.
MEMBER WALLIS: One was very close to the
vessel. There happened to be a valve in the way. It
could have occurred close enough to the valve to
prevent its closing.
MR. TREGONING: Yes. That's correct. So
those events definitely have triggered some further
digging and further investigation of potential generic
implications, not just for our plants but then also
specifically for these LOCA frequency estimates.
As Alan mentioned earlier, we have a
three-pronged approach for trying to re-evaluate the
LOCA frequencies to be utilized --
MEMBER WALLIS: Very interesting. If you
would have done this work six months ago, you wouldn't
have had to consider the second bullet.
MR. TREGONING: You're correct. We'll get
into this a little bit later. One of the things we
need to look at when we're evaluating LOCA frequencies
are the occurrence of surprise or potentially unknown
events and how those factor into the database. That's
a very good point and something we need to consider.
It's not an easy thing to consider by any means.
MEMBER WALLIS: It's probably the most
MR. TREGONING: Sure. You're right
because once you know what a mechanism is you can set
up mitigation factors to counter that mechanism
occurring or decrease the likelihood of it occurring
in the future.
MEMBER APOSTOLAKIS: In that first bullet,
you're referring to frequencies, historical LOCA
frequency estimates?
MR. TREGONING: Yes. Frequency estimates.
MEMBER APOSTOLAKIS: Were based on service
history experience?
MR. TREGONING: Experience databases, yes.
MEMBER APOSTOLAKIS: I thought it was
expert --
MR. TREGONING: Well, expert opinion was
used to justify usage of those databases. For WASH-
1400, they used an oil and gas database. They used
expert opinion to justify the use of that.
MR. TREGONING: Expert opinion was
certainly a very important part of the process.
CHAIRMAN SHACK: When you get the large
pipes, it's hard to have enough data. I mean it
becomes a combination of data and expert opinion.
MR. TREGONING: It's hard to have any --
MR. TREGONING: Mystical science and
everything else is rolled into large pipe break LOCAs,
as we're going to find out in a little bit.
MEMBER WALLIS: Which oracle do you use?
MR. BANERJEE: If it was WASH-1400, then
you didn't have the Flixborough accident in there,
which was a double ended guillotine break.
MR. TREGONING: Right. Although if you
talk to the database guys, they would say implicitly
all of these accidents were contained in the database.
It's just at the time that the database was sampled
they hadn't occurred yet so they didn't show up as a
frequency of occurrence. It's a matter of semantics
possibly, but it's realistic nonetheless.
Getting back. We have a three-pronged
approach for trying to deal with this issue. The
first one is complete. We developed interim LOCA
frequency numbers using a staff or an internal expert
opinion process. So this is the near term. We'll
talk about this in detail. The bottom line is the
results that we got. We evaluated both small break,
medium break, and large break LOCA frequency
estimates. They were on the order of two to four
times higher than some of the most recent estimates.
MEMBER WALLIS: Could I ask you about
these expert opinions? I mean, are these people given
enough time and money that they can go away and make
some real calculations or are they just asked what do
you think?
MR. TREGONING: In this effort, no, they
were essentially asked what do you think. In the next
effort as we'll see, the intent is to give them time
to actually do some calculations to support their
opinion. So you're right. If these are done in a
textbook way, you have not only your opinion but then
the uncertainty in your opinion. So your opinion may
not change but you may have more or less uncertainty
depending on the amount of work you've been able to do
in the interim to develop that opinion. Those issues
like you said affect the opinion and then the uncertainty.
MEMBER WALLIS: You would be more
convincing if there were expert analytical
conclusions rather than just opinions.
MR. TREGONING: Yes. We'll get into this
I'm sure. There have been a lot of analytical studies
over the years. You talk about model uncertainty.
When you talk about LOCAs and specifically large break
LOCAs, model uncertainty, you have to be careful
because that tends to drive the problem. So you get
a certain answer, but a lot of times your
uncertainty is a large percentage of that answer.
That's really the problem I would say that we've had
in the past and something that we really need to
evaluate very rigorously especially when we're on this
effort but then also this effort too.
MR. SCHROCK: The PERT process is supposed
to be some structured way of integrating expert
opinion to achieve a better judgement than you would
looking at individual expert opinion. Is there some
way you can apply that in this context?
MR. TREGONING: I would say yes. I will
say that I'm not that familiar with the formal
requirements of PERT. However, I'm going to explain
a little bit especially the philosophy and at least
the initial approach of the plan behind the
intermediate term elicitation. Maybe at that time,
you can tell me if it fits within the guidelines of
PERT. If it doesn't, what are the modifications we
need to make to ensure that it's going to be
compatible? Elicitations are somewhat dicey in that
you have to be careful that you structure them
correctly so that the results that you get are
intentional and not unintentional.
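One simple way to structure the aggregation step of such an elicitation, sketched here with invented per-reactor-year estimates, is a geometric mean, a common choice for quantities that span orders of magnitude:

```python
import math

# Sketch of aggregating individual expert frequency estimates with a
# geometric mean. The five estimates below are invented illustrations
# of large break LOCA frequencies per reactor-year, not elicited values.
estimates = [1e-5, 3e-5, 5e-6, 2e-5, 8e-6]

# Geometric mean: average in log space, then exponentiate.
log_mean = sum(math.log(x) for x in estimates) / len(estimates)
aggregate = math.exp(log_mean)

# Ratio of the highest to the lowest estimate: one crude indicator of
# the spread that the formal elicitation's uncertainty bounds must capture.
spread = max(estimates) / min(estimates)
```

The geometric mean keeps a single high or low outlier from dominating the way an arithmetic mean would, which is one reason structured elicitations favor it; the spread is the quantity the uncertainty discussion in this exchange is really about.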
MEMBER FORD: The second, the intermediate
term, that is likely to be very plant specific
especially if you're talking about degradation modes.
Will that be taken into account or will it still be a
generic change to LOCA frequencies?
MR. TREGONING: I think the intent is to
do a generic change. Although, you're absolutely
correct. There are certainly many aspects that are
very plant specific. I think we do want to look at
how plant specific differences would effect those
generic numbers. If there's just a few plants that
are driving the generic numbers, then it doesn't make
much sense. I think you look at potentially the
binning type of analysis that Alan does.
MR. TREGONING: Maybe you look at binning
at that point and develop potentially several
generic groups. We're not there yet, but that's certainly a
potential approach that could be utilized.
CHAIRMAN SHACK: Now when EPRI did their
risk informed inspection, they went through a database
analysis to come up with the frequencies. One of the
things I liked about that was they broke it out:
pipes that were subject to flow assisted corrosion had
one estimate; pipes that were subject to IGSCC had
another estimate. So then you can go back to a plant
and cobble up an answer appropriate to the plant by
deciding whether this plant was susceptible to FAC or
not.
MR. TREGONING: That's a good point. In
the past especially with these historical estimates,
we've never had databases that were that well defined
and distinguished, that knew not just piping failure
statistics but also root cause. We weren't able to
factor that in for our near term study.
The hope is certainly through use of some
of the more recent, current databases that we'll be
able to break things down on a mechanistic level.
Then you're right. That's a particularly very solid
approach for evaluating plant specific differences at
that point.
MEMBER WALLIS: If you hired me as an
expert, I would have to say that in the major incidents
in nuclear stations so far the biggest influence
has been inappropriate human behavior. I don't know
how to put that into my -- That's something that
changes all the time.
MR. TREGONING: Yes. Again, when you do
an elicitation process, it's really an elicitation at
that point in time. You talk about a Bayesian
update. When people get more knowledge, they do their
own Bayesian update of their opinion. You hope
that the uncertainty bounds that you develop because
you're not just developing best estimates, you're
developing bounds also, that they will account for
that uncertainty or likelihood that things may change
in the future.
MEMBER FORD: Now, will a time element
come into this?
MEMBER FORD: As well as a generic versus
specific aspect, will there be every year a change in
this LOCA frequency because of time dependent degradation?
MR. TREGONING: Yes. I think what our
intent is and again it's just an intent, so one of the
reasons I'm here is to outline a potential philosophy
and approach but also solicit ideas from the
collective experience of the group. The initial
intent is that we want to look at LOCA frequencies
over defined periods of time, so from now, going
forward, ten years out, 20 years out, 30 years out, up
to the end of license renewal. You'd like to be able
to use single numbers just for ease. So I think we'd
like to get to the point where if possible we take
these end of license extension numbers and utilize
them within the PRAs.
The assumption is, and again I'm making
an implicit assumption, that LOCA frequencies are going
to go up in the interim. Maybe we decide as an expert
group that they're going to stay level or go down. Then
you might have a different interpretation of what
numbers to use if that's the conclusion. If they were
going to go down, I would argue we want to be using
bounding numbers essentially in our analyses. I don't
like to use bounding with the PRA guys. It makes them
uneasy. I'm a deterministic guy, so I still think in
terms of bounding or conservative many times.
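Mr. Tregoning's choice between time-averaged and bounding end-of-license numbers comes down to simple arithmetic; the decade-by-decade frequencies below are invented for illustration:

```python
# Sketch of the choice between a time-averaged and a bounding
# (end-of-license-renewal) LOCA frequency when the frequency is assumed
# to rise with plant age. Decade values are invented illustrations,
# per reactor-year, for decades one through four of operation.
freq_by_decade = [1.0e-5, 1.5e-5, 2.5e-5, 4.0e-5]

time_averaged = sum(freq_by_decade) / len(freq_by_decade)
bounding = max(freq_by_decade)  # end-of-license-renewal value

# The bounding number is conservative over the whole plant life; the
# averaged number understates the frequency in the later decades.
assert bounding >= time_averaged
```

Using the single bounding number is the "conservative, deterministic-guy" position described in the transcript; if the expert group instead concluded frequencies stay level or fall, the averaged or current-day number would be the defensible choice.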
MEMBER ROSEN: I think the reason that
we're doing this would not be served if we used
estimates that were likely too low, non-conservative.
MEMBER ROSEN: Because we will take
actions based on those non-conservative estimates
which we will not be able to undo potentially when the
estimates are raised, for instance, removing some
equipment from a plant. We have to use conservative
numbers that are good enough for the life of the
plant. Otherwise, we're going to have an unworkable --
MR. TREGONING: And I totally agree with
that. However, I'll get back to the point that these
elicitations are points in time. There may be other
information that comes up subsequent to the
information that would cause that elicitation and
those estimates to be revised. We will try to account
for that as best as possible, but there still will be
some probability that there's a Three Mile Island type
of event that causes us to really re-evaluate things
from the ground up.
CHAIRMAN SHACK: But you did ask your
experts whether they thought the frequencies would
increase. That was one of your questions.
MR. TREGONING: Of course. And it will be
for this also. I think we've finished my talk now, so
maybe I can -- I don't think I have many more slides
past this. We'll go through them anyway.
MEMBER WALLIS: Don't you have some --
MR. TREGONING: This is it, executive
summary. Then in the longer term and this is really
a separate effort, we touched on this a little bit,
we'll be looking to redefine the spectrum of pipe
break sizes so that we can possibly consider
capability changes. Again, we want to do this
wherever possible within existing PRA and our risk
informed ISI type of framework. This is very fuzzy.
I'll talk a little bit about approach.
Again, this would be based on probabilistic fracture
mechanics which I know there's many in the group that
have no love toward PFM, so we'll talk a little bit
about some of the advances within PFM that will be
required to do this. Then it will also need to be
combined with PRA where necessary to augment the
answers provided by the PFM.
You've seen this slide before. These are
Alan's four components. This is just up here to say
the LOCA frequency distribution impacts this in terms
of the reliability requirements, specifically
LOCA/LOOP and then also the spectrum of break sizes.
So there's really two of the four components or sub-
categories that are affected by LOCA frequency
distributions in this 50.46 re-evaluation effort.
These are just overview slides. This is
the three-pronged approach that I talked about. All
I'm showing here is that we're going to delve into now
the near term elicitation. I wanted to provide you
some more details as well as results from that and
then obviously solicit feedback, opinions from this
group that I can take to utilize in the intermediate
term elicitation.
This is how it was structured. We had 11
staff on the panel. It was fairly well balanced
between the regulatory folks and the research folks.
We also had I thought a very good range of expertise
in relevant technical areas. We sampled, amongst these
11 people, I think six or seven different branches,
maybe even eight different branches within the NRC.
What we really tried to do was get people
that knew something about PRAs; something about the
ASME code of course; structural mechanics; thermo-
hydraulics in terms of loading; piping systems, their
design fabrication and use; certainly seismic,
thermal, vibrational loading; environmentally assisted
cracking which is obviously a very important player;
and thermal aging, so these are material effects; but
then also people that knew something about these
alternative LOCA mechanisms, things like CRDM failure,
Davis Besse, that again historically hadn't been
considered in the initiating event frequencies for --
MEMBER WALLIS: Do you have a human factors person?
MR. TREGONING: That's a good point. We
didn't specifically have a human factors person.
However, there were people within NRR that had some
human factors experience. That wasn't specifically
targeted in this. The other thing that we tried to do
is when you're dealing with any group, you're looking
for the optimal number too. We didn't want to make
this so unwieldy that we wouldn't be able to
provide estimates.
However, when we go into the intermediate
term, and again I would like to get some feedback from
the group, we may need to specifically consider one
person, two people, or three people whose primary or
sole expertise is in human factors. That's something
I'd like to clear up today so that we can move forward
at this point.
MEMBER APOSTOLAKIS: Are these people, the
11 experts going to work as a group?
MR. TREGONING: In terms of the --
MR. TREGONING: Yes. I'm going to
specifically outline how it was done. We individually
elicited each person. However, we went through idea
and issue development as a group. The idea was to
develop the baseline set of issues and the baseline
definition as a group, go away, and then answer your
questions regarding changes in those baseline
definitions individually.
MEMBER APOSTOLAKIS: But what would the
thermo-hydraulics or PRA expert know about the
frequency of LOCAs? Why are they experts? I can see
them contributing to the discussion regarding
circumstances, loads and so on.
The PRA expert will give you frequencies that he has
heard in the past.
MEMBER APOSTOLAKIS: Why should I believe
MR. TREGONING: When you define any expert
panel especially one that requires this broad range of
technical expertise, you're not necessarily going to
get -- Each person is going to have their own
specialty. The way we structured the elicitation was
for people that didn't have the baseline knowledge,
they simply didn't provide responses. Obviously, if
they weren't an expert in fracture mechanics or
environmentally assisted degradation, they weren't
required to provide answers with respect to --
MEMBER APOSTOLAKIS: Did anyone refuse to
give you estimates?
MR. TREGONING: People gave estimates in
areas that they were comfortable giving estimates in.
That's a backward answer to saying, yes, people picked
and chose what questions they wanted to answer.
MEMBER APOSTOLAKIS: There is another issue which maybe you want to implement in
your intermediate term and longer term processes. We
spent a lot of time and Nilesh, I think you were
involved, some years ago thinking about these issues
and expert opinion elicitation in the context of
seismic hazard analysis. Are you familiar with that?
MR. TREGONING: Yes, in a general sense.
MEMBER APOSTOLAKIS: You should become a little more familiar because this is
very relevant to this, maybe not to the near term.
The issues there are similar if not worse than here.
You're talking about very strong earthquakes in the
eastern part of the United States where the experts
disagreed, there are different models and so on.
As I recall, there were different
approaches. One was by EPRI which formed groups of
experts like your group. They recognized that one guy
doesn't have the requisite knowledge. In your case,
you would have say three different groups; each one
having a PRA guy, a thermo-hydraulics guy, a piping
guy and then the group would give you an estimate
instead of individual people.
MR. TREGONING: Okay. So you have three
different estimates from each group.
MEMBER APOSTOLAKIS: Yes, each group had its own experts in it. That's one
approach. Then we also proposed the technical
facilitator integrator approach and so on. I think
it's extremely relevant to this, and the NRC paid for
it. You should take advantage of it.
MR. CHOKSHI: George, I think in earlier
planning definitely some SSHAC principles and
distribution -- formed technical community and basic
guidelines there.
is not this Shaq.
MR. CHOKSHI: No. That's right.
seismic hazard analysis.
MR. CHOKSHI: That's comedy. Good night.
MR. CHOKSHI: We will intend this to
follow --
MEMBER ROSEN: I think he's seven foot,
one inch.
MR. CHOKSHI: That's another Shaq.
MEMBER APOSTOLAKIS: I think in these
complex issues the ideas of having groups of experts
-- You see because the issue is how do you make sure
that you will have reasonable estimates. That's why
you have separate groups. But these estimates must be
meaningful which means that one guy does not
necessarily know all this stuff, so you form two or --
MEMBER ROSEN: There are also several
papers written by learned people on the dynamics of an
expert elicitation panel that you ought to look at.
One of them is even a member of this committee. In
other words, making sure that it doesn't get dominated
by one person.
MEMBER WALLIS: Since the record shows
that George questioned the value of having a thermo-
hydraulics person on this group --
MEMBER WALLIS: I thought you did.
MEMBER APOSTOLAKIS: No. I questioned the
value of having that person say it's ten to the minus --
MEMBER APOSTOLAKIS: But having him part
of the group.
CHAIRMAN STACK: We didn't ask anybody
that question. They start with data-based estimates
and then adjust those.
MEMBER APOSTOLAKIS: I'm not criticizing.
I was just emphasizing the point that the group needs
to have all these things.
CHAIRMAN STACK: The notion that
somebody's going to come in and say the pipe break
frequency is ten to the minus four is when heck will
start to go to the ceiling.
MEMBER APOSTOLAKIS: You'd be surprised. Many elicitations are --
CHAIRMAN STACK: Well, I know. That's why
you have more faith in an elicitation that doesn't ask
that question.
MR. TREGONING: Right. When we talk about
the intermediate term at least and again, we're
getting a little bit ahead, the idea that we had was
essentially to structure the elicitation possibly two
ways. Both ways you would have a baseline frequency
estimate that you would be providing changes to, up or
down. One would be a more, I guess, typical
elicitation where the group and individual feedback
provides changes to small questions which then you
recombine those small questions through analysis to
determine what the frequencies would be. That was
essentially how we did this effort. The other way
we're looking at proceeding is actually using the
models to provide us numbers, but then using the
expert elicitation process to provide the input
parameters for the models themselves. So that's
really more along the lines that I think you've
outlined here.
It's not necessarily the small group panel
because you would have maybe one model or maybe two
models that you would exercise. But you would
exercise these models based on the input. Again, the
expert opinion of the group provides the input itself
to the models.
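The first structure described above, where individually elicited relative changes to small questions are recombined into an overall frequency, can be sketched roughly as follows. This is a minimal Python illustration only: the aggregation rule (a geometric mean), the question names, and every number are assumptions of the sketch, not values or methods from the staff's elicitation.

```python
# Minimal sketch of recombining elicited answers to "small questions" into an
# adjusted LOCA frequency. All names and numbers are illustrative placeholders.

def geometric_mean(values):
    """Aggregate a panel's multiplicative change factors (one possible rule)."""
    product = 1.0
    for v in values:
        product *= v
    return product ** (1.0 / len(values))

# Baseline frequency per reactor-year for one break-size category (placeholder).
baseline_frequency = 1.0e-4

# Each expert answers each small question with a multiplicative change factor
# (>1 raises the frequency, <1 lowers it). Experts who lack the relevant
# expertise simply do not answer, so the lists have different lengths.
answers = {
    "aging_of_piping":     [1.5, 2.0, 1.2],
    "improved_ISI":        [0.8, 0.7],
    "non_pipe_mechanisms": [1.3, 1.4, 1.6, 1.1],
}

adjusted = baseline_frequency
for question, factors in answers.items():
    adjusted *= geometric_mean(factors)

print(f"adjusted frequency: {adjusted:.2e} per reactor-year")
```

A real elicitation would also propagate each expert's uncertainty bounds rather than a single point factor; the geometric mean here is just one defensible aggregation choice, not the one the panel used.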
MEMBER APOSTOLAKIS: Yes. The exercise in
the models is part of what is called in those reports
the technical facilitator integrator approach.
MR. TREGONING: But then I get back to
your point earlier this morning about model
uncertainty. That's going to be a big driver
especially when you get into the various codes and
MEMBER APOSTOLAKIS: Well, but this is
what this exercise is supposed to do.
MR. CHOKSHI: I think the SSHAC
analogy with the -- models and things is already
parallel to what we are doing here.
MR. TREGONING: Okay. So back to the near term --
MEMBER LEITCH: Rob, I see apparently the
absence of people with an operating and maintenance
background there. I would think that might be a
valuable input as well because they have a good sense
of the kind of things that can transpire as plant
operations go.
MR. TREGONING: Right. Again, we were
relying on in house corollary expertise in many of
these areas. Certainly when we do the formal one,
we'll be looking to bring more people that have that
expertise. In fact, this was in house, so you're
limited by your in house knowledge.
MR. TREGONING: The next one is going to
be teamed with international folks in the community.
Certainly the industry is going to participate. It's
going to draw from a much broader pool of people and
MEMBER APOSTOLAKIS: Does that paper from
Sweden make sense here?
MR. TREGONING: Which one?
MEMBER APOSTOLAKIS: The one with the correlations and all that. Are you familiar
with that?
MEMBER APOSTOLAKIS: Not the old Thomas
(PH). There's an updated one done by a guy using data
that the Swedes --
MR. TREGONING: The SKI. In fact, you'll
see. The SKI stuff we're proposing will be the
foundation of the intermediate term elicitation for a
variety of reasons, not just because it's a newer look
at pipe fracture.
MEMBER APOSTOLAKIS: But that would be
another model, I guess.
CHAIRMAN STACK: When he says that, he
means PFM.
MR. TREGONING: I mean PFM. I mean a
predictive model of the future where --
MEMBER APOSTOLAKIS: That's predictive
too. It's kind of --
CHAIRMAN STACK: It's a different kind of --
MR. TREGONING: Right. It's a totally
different kind of model. That's right.
CHAIRMAN STACK: Well, a statistical model
is one thing. A PFM is where the experts give you all
the inputs for a PFM and the experts tell you which
PFM code to use, but the PFM code generates the ten to
the minus twelve.
MEMBER APOSTOLAKIS: In an exercise like this I would like to know what the
results of these other models are.
MEMBER APOSTOLAKIS: If there is a large
difference between the PFM and that, I would like to
understand why.
MR. CHOKSHI: In fact, there is a recent
publication just comparing the results from PFM
type analysis and the database. The whole process here,
as Rob will explain, is to look at all of these --
MR. TREGONING: We're jumping around a
bit, but that's okay. That's your prerogative. So
again, focusing back on the near term, the objective
of the near term elicitation was simply to adjust the
NUREG/CR-5750 Appendix J LOCA frequency distributions to
account for contributions not considered in this
original study; so things other than pipe break
failures, the effect of aging specifically.
MEMBER KRESS: So you automatically biased
the experts to increase the frequencies.
MEMBER KRESS: I don't see how you asked
to do anything but increase.
MR. TREGONING: There are others. We
specifically ask about effects of mitigation, effects
of improvements in ISI.
MEMBER KRESS: Okay. You did ask those questions.
MR. TREGONING: Oh, yes. In fact, we had
a few opinions amongst the experts that large break
LOCA frequencies would decrease.
MEMBER KRESS: When you say "LOCA
contributions" then you mean things that would affect
the LOCA.
MR. TREGONING: Yes. Other things like
CRDMs or Davis Besse as well as contributions to
things in ISI improvement, mitigation techniques,
improved weld repair procedures, anything that could
affect LOCAs on down the road. It's a whole host of things.
So we wanted to provide some quantitative
estimates. More importantly, we wanted to prioritize
issues and questions which the in house group felt
potentially provide the greatest contributions or
changes in the LOCA frequency estimates going forward.
These issues we want to use and make sure that we
consider these issues in the intermediate term elicitation.
This near term elicitation was not just
used to provide numbers. We actually tried to treat
it as a pilot elicitation study. From what I know
about elicitations and Lee Abramson who unfortunately
is not here is really the person that's guiding the
framework of the elicitation process. If you have any
really in-depth questions about the elicitation
process, I may have to defer and have Lee or get back
to you at a different time. According to Lee, a
pilot elicitation is a necessary step in any good
elicitation to make sure your answers are right
and believable.
So here's the approach for the near term
elicitation. We had essentially a kick off meeting
where we provided the background for historical LOCA
estimates; specifically 5750 estimates, how they were
developed, what was used, what was the philosophy
behind it. This was a report done by INEEL. We had
Bill Galyean actually call in and provide this
background talk.
We also had a talk with Joe Murphy from
the staff on WASH-1400. WASH-1400, people certainly
didn't feel it was applicable, but still,
understanding the philosophy that people had even 25
years ago when they were developing these estimates we
felt was very important. We provided background and
we looked at this as providing a baseline of
understanding for where we were so that all the people
on the panel knew where we were coming from.
Then we also presented some of the
technical concerns that we had in terms of new
cracking modes, recent potential failure occurrences
that had happened both nationally and internationally
and then we also talked about the motivation for
updating these frequencies. That motivation was the 50.46
revision. So there was the kick off meeting.
Then we had essentially this was followed
about a week later by an issue development or
brainstorming meeting. Both of these
meetings involved the entire elicitation group.
What we did in the development meeting was we
developed definitions of what LOCAs are and how we're
going to distinguish between a small break, medium
break, large break LOCAs. It seems very basic, but we
wanted to make sure the group was operating from the
same definitions.
Then we were also very careful to define
what we were using as our baseline case. As Bill
mentioned, we didn't ask people to provide numbers, we
asked people to provide relative changes in the
baseline database. So a complete understanding for
the group of what that baseline database is was
absolutely critical. We spent a fair bit of time in
the meeting defining this.
Then we spent probably the greatest amount
of time similar to the Westinghouse Risk Informed ISI
Study where we broke the plant down into pipe systems,
materials for those systems, loadings for those
systems, and potential initiating mechanisms that
could cause pipe rupture or a LOCA within those
systems. We spent a lot of time decomposing LOCAs
into the prerequisite systems and components.
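The decomposition just described, breaking the plant down into pipe systems with their materials, loadings, and candidate initiating mechanisms, might be organized along the following lines. The systems and entries below are illustrative examples for the sketch only, not the panel's actual tables.

```python
# Rough sketch of the decomposition described above: each candidate pipe
# system is tagged with its material, its loadings, and the potential
# initiating mechanisms that could cause a pipe rupture or LOCA within it.
# All entries are illustrative placeholders, not the elicitation's content.
from dataclasses import dataclass

@dataclass
class PipeSystem:
    name: str
    material: str
    loadings: list       # e.g. seismic, thermal, vibrational
    mechanisms: list     # candidate degradation / failure mechanisms

systems = [
    PipeSystem("feedwater", "carbon steel",
               ["thermal", "water hammer"], ["thermal fatigue", "FAC"]),
    PipeSystem("recirculation", "stainless steel",
               ["vibrational"], ["IGSCC"]),
    PipeSystem("instrument line", "stainless steel",
               ["vibrational"], []),
]

# Screening step: a system is carried forward only if at least one credible
# initiating mechanism was identified for it.
considered = [s for s in systems if s.mechanisms]
print([s.name for s in considered])
```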
All that the group did is they provided a
very generous threshold in that we decided as a group
whether we were going to consider a certain system or
not. For the most part, we considered certainly all
the major systems within a plant. Then the other
thing we talked about was potentially important
factors which would affect future LOCA frequencies,
again, things that we've touched on earlier in terms
of aging and improvements in ISI and some of these
other issues.
So from this meeting, we developed an
elicitation questionnaire where we decomposed the
issues into very small questions related to specific
mechanisms, systems and components. I have an example
of that later. You'll see exactly what we did.
As we mentioned earlier, we were looking
at the changes going forward. We asked people to
evaluate their expected changes up through the license
renewal process, so approximately 30 to 35 years from
now. Not only did we ask for quantitative responses
but also rationale. So you get into your question of
what makes a person an expert.
One of the ways you try to judge that is
not just look at the number they've given but also the
rationale. You have to show that the rationale is
sufficiently based and that you've utilized this
rationale to judge or develop your expert opinion. So
we'd ask people not just for numbers but also their rationale.
So then after the elicitation
questionnaire was developed, it was sent out to all
the participants individually. They filled them out
and sent them back in. Then we had a wrap up meeting.
The wrap up meeting is very important. When you ask
people these small technical questions, they don't know what
frequencies you're going to come up with at the end of
the day. So we presented the results to individual
questions, summarized important findings and then also
provided the group with a chance to look at the
frequencies that we were coming up with from their
responses. Again, I'm going to delve into all of
these in detail later.
The other thing that we did which was
equally important is we got some feedback on the
process itself. We wanted to see where some
weaknesses or strengths were from the elicitation so
that there were things we could improve going forward
in this next process. One of the discussions we got
in quite specifically during this wrap up meeting was
delving into the strategies and approaches to making
sure that the experts were being queried and only
providing the answers in areas that they had
demonstrated expertise. It sounds obvious, but it's
not always as easy to implement that. That was
something that we spent quite a bit of time with at
the feedback meeting.
I'm going to show an example for each of
the three sub-bullets I showed before. This is
something that came out of the issue development
meeting. The next slide will look at a specific table
within the questionnaire. The next slide will show
results. I'm trying to make it consistent, so these
are all for BWR LOCAs. Although, we separately
elicited for BWR and PWR, so we dealt with both of them.
MEMBER WALLIS: I'm going to finish the
sentence that was in reply to George. This list is
being made by a material scientist. It's feed-water lines
cracking, thermo-fatigue and mechanical fatigue. If
you look at history, core sprays have been broken by
hydrogen explosions which is a thermo-hydraulic
phenomenon. How did the gas get concentrated in that
particular place? How did it get ignited?
Feed-water lines have been broken by water
hammer which is a thermo-hydraulic phenomenon. Davis
Besse essentially was a thermo-hydraulic chemistry
phenomenon, and talking about cracking really missed
the point of what was going on there once the crack
got big enough. I'm a bit concerned that all these
mechanisms seem to be material based.
MR. TREGONING: Well, again all this is
these are the systems, these are the materials that
make up --
MEMBER WALLIS: They're all mechanisms
for the cracking expert.
MR. TREGONING: These are the prominent
mechanisms. Now we did discuss these other
mechanisms; things like water hammer, things like --
MEMBER WALLIS: They have actually
happened. I mean major pipes have broken in nuclear
plants as a result of water hammers.
MR. TREGONING: Right. We didn't get down
to the level for this elicitation of delving into the
likelihood of water hammer or hydrogen.
MEMBER WALLIS: I think you have to have
someone on this panel who insists that this be
comprehensive and include mechanisms other than cracking.
MR. TREGONING: We did consider those
mechanisms. It would be erroneous to say that we
didn't consider those mechanisms. What we didn't do
was we didn't break down and consider those mechanisms
for specific systems. We didn't get down to that
level of detail. However, we lumped them in terms of
global issues. We made an assumption here. I'm not
saying it's a particularly good assumption. It was an
assumption that was made that these global issues
roughly influence all systems equally. We discussed
a large number of them.
We only ended up eliciting on five: the
effect of risk informed ISI; hydrogen combustion,
which you've talked about; future degradation
mechanisms, things which could come up later;
mitigation strategies which would affect degradation;
and potential uncertainties in the leak detection
threshold that you have. So we did
consider those. We just didn't break them down into
this level of detail.
It's something with the next elicitation
process that we'll certainly discuss. It will be up
to that group itself to determine how they want to
decompose the issues so that they can best arrive at
the answers. But they certainly will be considered.
They were certainly considered here. They just
weren't broken down into this level of detail.
MR. SCHROCK: Did the recipients of the
questionnaire comment on their view of the adequacy of
the questionnaire?
MR. TREGONING: During the feedback
session, they provided that. The recipients during
the issue development meeting developed this
structure. We tried to craft the elicitation
questionnaire over the ideas and the structure that
arose from that issue development meeting. We tried
to feed them back the questionnaire in ways that made
sense with how the issues were discussed and people
agreed at this meeting they should be discussed so
that there was consistency there.
MR. SCHROCK: Questionnaires it seems to
me almost always reflect the interest in getting a
certain response.
MR. SCHROCK: Recipients of questionnaires
by and large are restricted in their participation
because of this. For that reason, I hate
questionnaires. I refuse to fill them in most of the
time. I just don't know for sure from what you've
said how you've guarded against this problem here.
MR. TREGONING: Well, I would say that we
haven't. Like you say in any questionnaire the
phraseology of the questionnaire will tend to
potentially lead you to a specific answer.
MR. TREGONING: We certainly tried to
guard against this. The thing I want to emphasize
here is that this was near term. So under the
constraints we had for this first elicitation, that
was really all we could do. Certainly when we do the
more formal elicitation, there will be no
questionnaire. I take that back.
There will be questions which will be
developed that will be supplied to the participants
beforehand, but then each participant will be queried
individually on those questions. There will be
opportunity to range beyond those questions as
need be. So it won't be as confining when we do the
final one. With this first one, it was confining
almost by nature so that we could at least try to wrap
our arms around it in a relatively quick way.
So all your concerns I certainly share.
I come from the Navy. The thing we always used to get
in the Navy is you do an analysis and they say well
will you go down on the ship with that analysis. I
wouldn't take the numbers that we've developed and go
into the reactor and stand under the large pipes at
this point. But I will say that the issues that came
out of this meeting I think were very powerful. It's
something we can use to go forward to help craft what
we're doing on down the road.
MR. CHOKSHI: I think on the intermediate
we'll be developing a more formal process using the
guidelines which are available and have been used. So
for a lot of this development of questions and
selection of experts we'll follow a more formal
process to make sure that we don't introduce some kind
of weaknesses. There will definitely be a formal
process which can be documented and people can see.
MR. TREGONING: And I'm assuming that
during this process, which we expect to take roughly a
year, the committee will be kept apprised of the
progress of that group and be able to provide feedback
as we go along. So this is something that's a work in progress.
MEMBER APOSTOLAKIS: So ultimately you
envision that there will be some models and I don't
mean PFM models behind this, I mean I'm seconding now
the comments by Dr. Wallis, that you will need to have
some combination of experts in human factors or human
performance, thermo-hydraulics and PRA and so on and
develop some sort of sequence of events that might
lead to these failure mechanisms instead of just
focusing on the failure mechanisms themselves. Or
who's going to do that if you don't do it? Is that
part of the bigger project, Alan?
MR. KURITZKY: (Away from microphone.)
MR. TREGONING: Yes. We're not proposing
to revisit those accident scenarios.
MEMBER APOSTOLAKIS: I'm not talking about
the accident sequences that are already in the PRA.
If you look at what happened at Davis Besse, that's
not in the PRA.
program was not implemented correctly. People didn't
have a questioning attitude and so on. All that stuff.
Where is that going to go?
MR. TREGONING: Here's the danger with
something like this. Let's say we did this
intermediate term elicitation a year ago and we were
done and we were presenting the results. I'd be
getting beaten up because the question would be why
Davis Besse is not considered in this. If we had done
this a year ago, I doubt very seriously that a Davis
Besse type of event per se would have been discussed.
MR. TREGONING: So when we're at this
point, we're at a year later. We have to develop
something that goes 35 years forward. There's going
to be, I hope not many, but there will be several
other surprise events. The intent is to capture the
surprise events, not the particular mechanism which
makes up the surprise events.
MEMBER APOSTOLAKIS: Right. I fully agree
with that. So what I learned from Davis Besse is that
the programs are not necessarily implemented the way
they are intended to be implemented.
MEMBER APOSTOLAKIS: Assumptions that
we're making regarding people's vigilance are not
always good.
MR. TREGONING: One of the things -- I'm
sorry I don't mean to cut you off.
MEMBER APOSTOLAKIS: I cut you off all the
MR. TREGONING: I want to provide some
more information. One of the things we talked about
a lot in our issue development meeting was plant
management safety culture. We argued about that. We
just decided at the end of the day because it was so
specific that we couldn't explicitly consider it
because we were trying to define generic issues. That
doesn't mean it's not important.
MEMBER APOSTOLAKIS: Safety culture isn't
generic but I think it's much bigger than --
MR. TREGONING: No. Safety culture is
generic but then you also have a lot of variability.
So you have a generic best estimate but then I would
argue you have wide uncertainty bounds.
MEMBER APOSTOLAKIS: The fundamental issue
here is we cannot ignore and close our eyes to
operating experience. You said if I had done this a
year ago, fine, this is a good intellectual exercise.
The truth of the matter is you're doing it now.
MEMBER APOSTOLAKIS: So we have to do
something about it.
MEMBER APOSTOLAKIS: I know it doesn't
appeal to our technical preferences because it's not
controlled by natural laws and we can't develop a
computer program for that, but the truth of the matter
is that when it comes to reactor safety, negligence of
that type is very important.
MEMBER FORD: Well, human events are down
on that list.
MEMBER APOSTOLAKIS: By "human" we mean up
until now, this Agency means operator response to an
accident, not the kind of thing that you saw at Davis
MEMBER FORD: Oh, okay.
MEMBER APOSTOLAKIS: That's the same with
PTS. They used the latest in human reliability but
they really mean operator response during the accident.
MEMBER BONACA: Actually the causal
factors may have been different, but I put them in the
same category. You have now VC Summer, you have
Oconee, and Davis Besse. It probably took
about 20 or 25 years to begin to have this
penetration of the RCS by different means, at different
locations or piping, as a result of aging.
going to see more of that. In many problems we have,
the statement is we will inspect, detect, and fix
before this happens. Well, that's great, but that's
not going to be something assured. Davis Besse is an
indication of that.
The other issue, however, is we have --
and made commitments like for example on license
renewal not to increase the frequency of inspection
like the ISI with age. So there are a number of
mechanisms there, core issues, that we have to look
at. I trust that these are very competent people, but
there are so many elements there.
MEMBER APOSTOLAKIS: In truth, it's too
soon really. These guys have to digest their lessons
MR. TREGONING: But your point is still
well taken. You need to consider everything; the
operating experience, the regulatory framework.
Again, if it was an easy problem, we wouldn't need to
do an elicitation. By definition, you do an
elicitation as sort of and I won't say means of last
resort but in some ways --
MEMBER BONACA: The whole dynamics are
changing. For example, not only are there more events
of these types happening but the outages are much
shorter than they ever used to be.
MEMBER BONACA: Really what is being
shortchanged is not the maintenance of the active systems
which are being maintained on-line. It really is the
inspections that are potentially being shortchanged.
Judgements are being made that we don't have to look
more than this much and then we can start. So there
are these dynamics coming together. We really have to
understand how they interplay.
MR. TREGONING: You're right. Those are
all vitally important. It also greatly ratchets up
the level of difficulty of something like this.
MR. TREGONING: Because you put everything
into the soup at this point. You have political. You
have technical. You have economic. Everything is in
the soup. How you stir the soup at that point is --
MR. CHOKSHI: I just want to make one
point to that. This near term elicitation followed
the NUREG-1150 type of approach. It was not just ad hoc.
We followed what was done, how the 1150 was done, how it
was updated. It has a structure, and a lot is going
through the process.
MEMBER APOSTOLAKIS: I don't think we're
criticizing it.
MR. CHOKSHI: No. I know.
MEMBER APOSTOLAKIS: That's what I meant
earlier that it's too soon for you to have
incorporated in your work the Davis Besse kind of
thing. I hope the message you're getting from the
discussion here is that this is a big concern for this
committee. You're going to be hearing about this time
and time again.
MR. CHOKSHI: We'll be --
MR. TREGONING: The reason here is to
solicit that criticism too, obviously. Again, I feel
like I'm on the front --
MEMBER APOSTOLAKIS: You don't have to try
hard to solicit criticism here. Just show up.
MR. TREGONING: I feel like with the
solicitation, I'm on the crest of this staring over a
great abyss. I just think all of us certainly realize
the challenge ahead of us and are not taking it
lightly at all.
MR. TREGONING: We're taking it very
seriously. Now when I come back to you a year from
now, how successful we are --
MEMBER APOSTOLAKIS: Don't be so modest.
You'll do all right.
MR. TREGONING: I don't like to prejudge
anything. I don't like to prejudge LOCA frequencies.
I don't like to judge success probabilities, anything.
CHAIRMAN STACK: Are we ready to move on?
MR. TREGONING: I'm ready. Are you ready?
MEMBER APOSTOLAKIS: Let's move on. Yes.
CHAIRMAN STACK: We've had that same slide
up for 30 minutes.
MR. TREGONING: Maybe we can only spend 30
seconds on this slide then potentially.
MR. TREGONING: We essentially gave I
think there were seven or eight tables related to
various questions. This is one of the tables that we
had for LB LOCA. This one was relative change. We
asked people given a baseline how much did they expect
LOCAs in these various systems to increase or decrease
going forward based on the issues we discussed.
Most of this we've touched on. Every
panel member got their own questionnaire. We asked
them to look at changes over the next 35 years. We
separately considered small, medium and large break
LOCAs. As I said before, we used the quantitative
responses and the rationales to determine them. Because
again, this tells the changes but the other part that
I didn't show here is you have to show not just the
relative changes but the importance of the given
system to lead into a LOCA. So you combine your
contributors with these changes to develop your
frequencies at the end of the day.
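The combination Mr. Tregoning describes, multiplying each system's contribution fraction by its expected change and summing, can be sketched roughly as follows. The system names echo the discussion, but every number here is invented for illustration and is not a panel result.

```python
# Rough sketch of combining per-system LOCA contributions with expected
# relative changes, as described above. All numbers are invented for
# illustration; they are not the panel's actual responses.

baseline_freq = 1e-5  # assumed baseline large-break LOCA frequency, per reactor-year

# contribution: fraction of baseline LOCAs attributed to the system
# change: multiplicative factor expected over the 35-year horizon
systems = {
    "jet pump risers": {"contribution": 0.33, "change": 4.0},
    "core spray":      {"contribution": 0.20, "change": 1.5},
    "RHR/LPI":         {"contribution": 0.20, "change": 1.0},
    "everything else": {"contribution": 0.27, "change": 1.0},
}

updated_freq = baseline_freq * sum(
    s["contribution"] * s["change"] for s in systems.values()
)
print(f"updated large-break frequency: {updated_freq:.2e} per reactor-year")
```

With these made-up inputs the jet pump riser term alone (0.33 times 4.0) more than doubles the total, which mirrors the arithmetic that comes up later in the session.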
The other thing that we did was we asked
questions in several different ways to try to remove
as much of that bias as we could. We asked the people
for absolute changes. We asked them for relative
ratio changes. For example, these things were
assuming a small break LOCA frequency. What's the
ratio of medium breaks to small breaks? What's the
ratio of large breaks to medium breaks? Decomposed
the same question in a variety of different ways to
get different answers. The idea behind that is to try
to probe inconsistency.
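A minimal sketch of that kind of cross-check, with invented numbers: an expert's absolute frequency estimates imply size-to-size ratios, which can be compared against the ratios elicited directly.

```python
# Hypothetical consistency probe: compare ratios implied by an expert's
# absolute frequency estimates against the ratios elicited directly.
# All numbers here are invented for illustration.

absolute = {"small": 4e-3, "medium": 4e-4, "large": 1e-5}  # per reactor-year
elicited = {"medium/small": 0.1, "large/medium": 0.02}      # asked separately

implied = {
    "medium/small": absolute["medium"] / absolute["small"],
    "large/medium": absolute["large"] / absolute["medium"],
}

for name in elicited:
    ratio = implied[name] / elicited[name]
    flag = "consistent" if 0.5 <= ratio <= 2.0 else "inconsistent"
    print(f"{name}: elicited {elicited[name]:.3g}, implied {implied[name]:.3g} ({flag})")
```

The factor-of-two tolerance is an arbitrary illustrative threshold; any large mismatch between the two decompositions would be flagged for discussion with the expert.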
Also for these global issues, we asked
people what they felt was the global change in the
system related to these issues. These different ways
were utilized to perform at least a very informal
sensitivity analysis to assure the results we were
getting at the end of the day were at least somewhat --
MEMBER FORD: Could you give us some idea
as to how these analyses were conducted? For
instance, recirculation loops, which is ITS, how did
the expert in this particular case go about assigning
a number for those three categories of LOCA?
MR. TREGONING: Each individual expert had
their own rationale for doing that. As I mentioned
earlier, the time frame with this was short enough
that we didn't allow people chance to go back in and
run models. There was certainly time for them to go
back and look up some background data in terms of
frequency and things like that.
MEMBER FORD: Those numbers must be very
plant specific. They're extremely plant specific if
they're going back to historical data to come up with
their answer. So it can range from zero to 100 --
MR. TREGONING: That's true. I would
argue that we're looking at developing generic
bounding numbers. So if anyone was being penalized,
it would be the plants that didn't have that problem.
I can speak for myself when I filled out my
questionnaire. I tended to think in terms of plants
that might have the worst problems. I can't speak for
the rest of the elicitors just to know if that was
their rationale. That's another limitation of the
questionnaire. You get what's written down in terms
of the rationale but you don't get verbatim their --
MEMBER APOSTOLAKIS: How are you going to
handle that though? Here is an expert telling us that
these are extremely plant specific. Obviously your
experts are not going to give you estimates for each
plant. What do you do about that? I don't know.
Maybe it's time to re-evaluate the whole approach of
expert opinion.
There are also papers. One comes to mind
where the experts are really way off in other
contexts. I don't know. Are the standard approaches
still satisfactory?
MEMBER FORD: And for instance, you
mentioned earlier on quite correctly that the
incidents rate will normally go up if you don't do
anything, but it can go down with the ISI and proven
techniques. Those were also factored into that
specific example of research --
MR. TREGONING: We didn't try to lead the --
CHAIRMAN SHACK: I'd assume that would be
if you said it was going down, you would say because
all the plants now run on hydrogen water chemistries.
CHAIRMAN SHACK: Therefore, you think the
historical rates are probably higher, for example,
than the future rates might be.
MEMBER FORD: The baseline rate, for
instance, that you started off with, I agree with you
entirely, going to a hydrogen water chemistry is going
to go down.
MR. TREGONING: And go down in the near
term. The thing we also said was look 35 years out
and would you still expect it to go down 35 years out.
MEMBER FORD: So was the baseline number
based on when the plants were built, and this wasn't
even taken into account, or was it based on now?
MR. TREGONING: The baseline number was
5750. What 5750 did specifically for IGSCC as you
probably know better than I, they applied a mitigation
factor of one over 20 to their pipe rate frequencies
for BWR plants of course only. So if we use 5750 --
MEMBER FORD: So it's probably going down.
MR. CHOKSHI: The baseline does include
known things. That's mitigation factors. Right.
MR. TREGONING: So you talked about
earlier defining baselines based on specific
mechanisms. That's a very valuable tool. We weren't
able to do that (a) because 5750 doesn't have that
kind of information and (b) we didn't even have access
to the database. There wasn't much that we can do.
Certainly when we look going forward, that's something
we're going to be evaluating. In a SKI type database,
there have been presentations over the last few years
where they have done that. They've queried the
database on a mechanism by mechanism level.
CHAIRMAN SHACK: You should look at the
EPRI report where they did their own statistical
analysis of the SKI database. It's at least a place
to look. They come up by mechanism and by weld.
CHAIRMAN SHACK: Again, when you put these
systems together, it will make a difference.
MR. TREGONING: Right. You're right. So
when we define the baseline for the next elicitation,
we have to be very careful that we try to take all
these things into consideration: mechanisms; what
components they affect so you know what your
denominator is and your initiating frequency if it's
welds, if it's elbows for flow assisted corrosion,
things like that. None of that was explicitly
considered in this because it was just a very broad
brush look at this thing in the short term.
CHAIRMAN SHACK: But I'm sure all your
experts also look at these lines and figure are these
lines susceptible to FAC or are these susceptible to --
MR. TREGONING: It's implicitly in there.
CHAIRMAN SHACK: They're doing the
integration in their head rather than separating it
out and then putting it back together.
MR. TREGONING: Right. Any more
discussion on this slide? This one is going to take
us back to 35 or 40 minutes I guess. This is a sample
of the output that we got. These are the sample
quantitative results. Again, I'm focusing on BWRs
large break LOCAs here.
We had essentially 12 issue categories in
BWRs that you saw earlier. A lot of these are NAs
because the group said drain line failure is not going
to cause a large break LOCA. So six through ten is a
bit misleading. It's not that no one considered it to
be a contributing factor. It's that it couldn't be
considered based on the size of the line.
We asked people for their top three
contributors in terms of systems. What you see here
is the number of people and again we had 11 people but
not everyone answered based on their expertise. We
had eight responses for this particular question. Of
those eight, we had six, which is a fairly high number,
75 percent, who assumed that jet pump risers would be
a big contributing factor in terms of any potential
large break LOCAs.
So we asked them not just to list their
top three but also make a quantitative stab at what
they thought the percentages of LOCAs would be due to
that system. What I've given you here is the median
response for the respondents. The median response is
that the jet pump risers would contribute 33 percent
of the large break LOCAs so 33 percent of that
frequency distribution. This tells what systems are
important. Then we asked people, what will they look
like --
MEMBER APOSTOLAKIS: I don't understand
the figure though.
MEMBER APOSTOLAKIS: It says the vertical
axis is count --
MR. TREGONING: Within top three. We only
asked the people to provide their top three
contributing factors. There were eight people, so
there were 24 responses within these eight people.
MEMBER APOSTOLAKIS: So if I add all these
heights of bars, I will get 24.
MR. TREGONING: I hope you do. You --
MR. TREGONING: Good. You did the math.
I should have, but I didn't.
MR. TREGONING: Some people might have had
it rated at number three. Some people might have had
that rated at number one. All I'm showing is that the
top within somebody's top three. It made their top three.
MR. TREGONING: Then this number says of
the people that it made their top three what was the
median contribution from all those different opinions.
So those are the numbers here. It's important not
just to say if it made the top three but how much it
contributes. You see here three and four which were
core spray -- We lumped some systems together, RHR and LPI.
The same number of people considered them important.
Of those people that considered them important, and
this is probably marginal, this is probably not a good
example, but slightly higher percentage coming from
the core spray.
MEMBER WALLIS: Is that really 50 on the
right hand there?
MR. TREGONING: Yes. But there was only
one response. There was one person, but that person
thought BWR stub tubes was very important.
MEMBER WALLIS: Very important.
MR. TREGONING: Yes. That person would
give a relative change for BWR stub tubes. That would
affect the frequency that you get from that person's
MEMBER APOSTOLAKIS: I don't understand
that. To evaluate a change, you have to know what the
baseline included.
MEMBER APOSTOLAKIS: Did they know that?
MR. TREGONING: Again, we had a whole
meeting where we discussed what was in the baseline.
MEMBER APOSTOLAKIS: The baseline came
from where?
MR. CHOKSHI: That whole study. We had a
whole session talking with people about what is the
bases, what was covered --
MR. TREGONING: And as part of the issue
development --
published 5750?
MR. TREGONING: 1998. It covered plant
experience up to 1997.
understand how these people feel that there will be
almost always an increase from a study that was done
four years ago.
MR. TREGONING: Again, we asked for people
looking 35 years out. We asked people to look 35
years out and assume the 5750 gives us where we are
MEMBER APOSTOLAKIS: So they include the
-- favorite subject.
MEMBER ROSEN: This just proves Yogi
Berra's theory that predictions are hard, especially
about the future.
MR. TREGONING: It's worse than the stock market.
MEMBER APOSTOLAKIS: Well, I don't know.
MR. TREGONING: I think we might need Yogi
before all this is said and done.
MEMBER APOSTOLAKIS: So the primary reason
why I have increases in the frequency is because of 35
years? If you had asked them to do it for 70 years,
this would have been longer?
MR. TREGONING: It would have been lower
MR. TREGONING: It could have been lower
percentage change if we would have only gone out 17
MR. TREGONING: I'm sorry, you said 70.
MR. TREGONING: I misunderstood you.
MEMBER WALLIS: The thing I would worry
about in the long term would be changes in society and
training and human behavior and economics and those
sorts of things, the bigger variables than all this
physical stuff.
MR. TREGONING: There was certainly a
sense within the discussions. People were concerned
about the way the people interpreted the safety
culture and which way it was moving. This is implicit
in that. These numbers take that implicitly into
consideration. How each person did it was up to each
person. So it becomes a bit mystical at that point.
MEMBER ROSEN: Just think about Chernobyl
and Three Mile Island. What were the main causes?
MEMBER ROSEN: People. Safety culture.
MEMBER ROSEN: I didn't want to put that
in along with the same category of those other two
events. The issue is culture. It's nice to look at
this but -- Well, 35 years out the cultural issues are
even harder.
MR. TREGONING: I know. Again, we're
faced with this impossible task. I'll be the first
one to admit that this is an impossible task. We're
never going to be able to account for all these
variables adequately.
MEMBER APOSTOLAKIS: I doubt that these
people really had culture in their mind when they did
MR. TREGONING: We discussed it. I will
say that during the issue development meeting we
discussed at length safety culture.
MEMBER APOSTOLAKIS: The way it is today.
MR. TREGONING: Current. But then there
was some discussion on how people felt, and I'll use
this nebulous word, things were moving to. Things
happen that could change that dramatically. I'll get
back to my original point. Elicitations are a period
of time. We try to do them the best that we can. If
we go back five years and re-elicit pending what's
happened over that five years, we're likely to get
potentially very different answers.
MEMBER APOSTOLAKIS: So if I look at the
first bar there, what does that say? 75.
MR. TREGONING: Which bar?
MR. TREGONING: This is just a box plot.
This shows the percentage that it contributes. This
shows the percentage change that people predicted over
35 years. The box plot shows the min and max. The 75
represents the median value. These are the 25th and
75th percentiles.
So you see for jet pump risers a huge
difference of opinion. One person thought they would
go up 400 percent. Another person said they'd go down
a slight percent.
MR. TREGONING: That's why we used the
median of course. We didn't try to use mean. We used
the median response for all of these because the
distributions and --
MEMBER APOSTOLAKIS: So the frequency can
go up by a factor of 400? Is that what he's saying?
MR. TREGONING: That's what that
particular person said.
MEMBER APOSTOLAKIS: The frequency right
now for a large LOCA is what? Ten to the minus four
or five?
MR. TREGONING: But you have to factor
that in with the contribution. He said that for this
particular system. For that system, he sees it going
up, but he doesn't consider that system to be maybe
that high a priority.
MEMBER APOSTOLAKIS: How do I make the --
Oh, JPR. Is that what it is?
MR. TREGONING: So you need to multiply
essentially this by this to get your contribution
to the LOCA. (Indicating.)
MEMBER APOSTOLAKIS: Okay. 33 percent of
400 is still a high number.
MR. TREGONING: Yes. You're saying 400 --
MEMBER APOSTOLAKIS: So he expects the
frequency of a large LOCA in BWR to be doubled from
that alone.
MR. TREGONING: From that alone, yes. If
he was consistent for all of those, he would think
that he expected them to go up a factor of four or
five let's say.
MEMBER APOSTOLAKIS: And what is it now?
Ten to the minus four? The large LOCA.
MR. TREGONING: We'll see the results here
very soon. It depends if you're talking about Ps or Bs.
It's either ten to the minus five or --
MR. TREGONING: B is ten to the minus
MEMBER APOSTOLAKIS: But the uncertainty
there is what? A factor of ten up and down. Isn't it?
MR. TREGONING: Uncertainty in this or in
the existing?
MR. TREGONING: In the existing estimate,
they assumed a lognormal distribution with an error
factor of ten. So it's a very wide uncertainty.
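For reference, the lognormal error-factor convention being described works out as follows; the median value is illustrative, and the error factor of ten is the one just mentioned.

```python
# Sketch of the lognormal error-factor convention: EF is the ratio of the
# 95th percentile to the median (equivalently, of the median to the 5th
# percentile). The median frequency here is illustrative.
import math

median = 1e-4  # assumed median frequency, per reactor-year
ef = 10.0      # error factor

z95 = 1.645                 # standard normal 95th percentile
sigma = math.log(ef) / z95  # log-space standard deviation

p05 = math.exp(math.log(median) - z95 * sigma)  # = median / ef
p95 = math.exp(math.log(median) + z95 * sigma)  # = median * ef

print(f"5th: {p05:.1e}, median: {median:.1e}, 95th: {p95:.1e}")
```

That is the "factor of ten up and down" raised later in the discussion: roughly ten to the minus five through ten to the minus three around a ten-to-the-minus-four median.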
MEMBER APOSTOLAKIS: So what these guys
are saying then is already covered by that --
MEMBER WALLIS: Are these things going up
because the present estimates are too low or because
something is deteriorating? I would think as we get
a wider margin they would all go down.
MR. TREGONING: That's one interpretation.
Obviously that wasn't the general interpretation of
the panel. There was a wide range of opinion on which
way people expected them to go.
CHAIRMAN SHACK: Historically pipe
failures have gone down. You've gotten smarter. Now
I think they're saying that you've probably maxed that
out and you're --
MR. TREGONING: If you look at failure
curves, they all have that classical bathtub shape.
I think a lot of people are concerned that we're on
the flat part of the bathtub and things like VC Summer,
Davis Besse, Oconee might be showing that we're
starting to rise on the slope of the failure curve for
many of these materials and components.
bathtub curve is not going to apply to large LOCAs.
We'll be in deep trouble if it does. This is really
very disturbing though. They all seem to think that
things will become worse. What happens to all these
age management problems we have?
CHAIRMAN SHACK: They're run by people.
MEMBER APOSTOLAKIS: I don't believe that
these guys really had in mind culture. It's too soon
for people to do that. Way too soon. Especially
technical guys. It takes a while to digest these
things and start changing your estimates.
MEMBER WALLIS: If you're asking, these
experts are all from the NRC.
MEMBER WALLIS: So if you asked experts
from the industry, they'd all be below the line.
MR. TREGONING: Right. That's exactly --
(Discussion off the microphones.)
MEMBER ROSEN: The industry mantra is that
the maintenance rule and aging management programs are
intended to keep the failure rates constant.
MEMBER APOSTOLAKIS: That's the thing. We
have all these programs in place. Of course, you
might say people don't implement them right. I think
that's a second order effect at this point.
MEMBER BONACA: The whole issue is the
shift from active systems to passive systems --
MEMBER BONACA: Those statements are true.
MR. TREGONING: At the risk of shooting
myself in the foot which I think I'll do anyway, but
if you move into risk informed ISI and you
make the assumption that you're prioritizing your
inspection based on probability of failure, that's a
good approach. However, those lower-probability things
like possibly large pipes you may be inspecting less
in the future, depending on how risk informed ISI
plays out. I don't know that to be
the case. That's one potential. That was something
that was discussed within the group.
regulatory guide was approved, these guys should have
spoken up if that's what they think is going to
happen. That was not the intent of the guide, to
increase the frequencies.
MR. TREGONING: The other thing I'll say
is --
MEMBER APOSTOLAKIS: We better understand
why these people are saying this. I think we need a
very clear explanation.
MR. TREGONING: Yes. And again, I would
say primarily aging effects were the dominant --
MEMBER APOSTOLAKIS: Then again, why have
aging management programs in place if that's the case?
MEMBER LEITCH: One of the things --
MEMBER APOSTOLAKIS: If my regulators feel
that the numbers are going to go up --
MEMBER LEITCH: One of the things that
might be of concern is where do you attract the talent
in years 50 through 60 of plant life. In other words,
are you going to still be able to attract the kind of
high level talent to implement some of these programs
in the last few years of plant life?
MEMBER APOSTOLAKIS: But the question is,
Graham, is that what these guys thought?
MEMBER LEITCH: I don't know what they
MR. MAYFIELD: This is Mike Mayfield from
the staff. I think it's important to not lose sight
of what we were trying to do with this near term
elicitation. To move forward with the 50.46 changes,
we need to get some sense of whether 5750 is about right. Is
it likely to go up? Is it likely to go down?
So we put together as balanced a group of
staff people as we could put together in a short
period of time and asked them to follow as best they
could in a short period of time the approach that
would be used in a formal elicitation. We recognized
you were going to get a somewhat biased view because
the nature of the panel didn't get you the broad range
of views that you would get in a formal elicitation.
I would urge the committee to not make too much out of
the specifics here. The point for this panel was the
notion that the frequencies would go up. Yes, a bit.
They didn't suggest they were going up
orders of magnitude. They go up in the two-to-four range.
So they're up a bit. The idea was to give Alan and
his colleagues some insights that they could use in
deciding whether or not they should move forward with
50.46. We're not broadcasting this as the definitive
answer or even the staff position.
This was an interim piece of information
to enable Alan and his colleagues to move forward.
That's why we want to go forward with, as Rob calls it,
the interim elicitation, but to do this so that you
get the balance of interest and you could then,
George, I think meaningfully start drawing inferences
about what people really think, what impact risk
informed ISI might have. I would urge you to be
cautious about drawing too much from the details of
MEMBER APOSTOLAKIS: I understand that.
I appreciate that. I have two comments on that. One
is did these staff members who did this also consider
the possibility that if the frequency starts going up,
I mean the Agency will do something? We don't know
that. Or is this a vote of no confidence in risk
informed regulation here?
The second is instead of asking them to
consider percent changes from a baseline value, I
don't think that it would be pushing the state of the
art asking them to consider the whole lognormal curve
that you mentioned earlier and suggest changes to the
curve. If I have right now an estimate that says it's
lognormal, ten to the minus four, factor of ten up and
down, which means we're doing ten to the minus five and
ten to the minus three, these guys give me a factor of
three. That doesn't move me much.
If they start saying, no, your whole
curve is moving up or something else is happening, I
think these guys are knowledgeable enough to say that.
Maybe when you do your intermediate exercise instead
of giving a single number as baseline ask them at some
point or throughout the exercise to look at the whole
curve. What's happening? Maybe they will tell you,
no, for the 5th percentile there is no way it will
stay where it is. I'd like to know why. This is a
more complete presentation. Maybe we're making too
much out of it.
MEMBER ROSEN: I think we're making too
much of it. I heard the word placeholder earlier.
MR. KURITZKY: Exactly.
MEMBER ROSEN: It's just a spot in which
to put a better number.
MR. KURITZKY: Interim. Exactly. One
internal application for temporary usage.
MEMBER ROSEN: This is survey said. Okay.
MR. KURITZKY: Remember I think according
to Rob, beginning to end is a three week effort.
looking at the curves in the future will be very
informative rather than a single number.
MR. TREGONING: And again based on the
time, we only asked for people --
MEMBER APOSTOLAKIS: No, of course. This
is okay.
MR. TREGONING: Certainly what we asked in
probing for each individual answer was not only the
best estimate but the 10 and 90 percent confidence
bounds. We're able to reconstruct those bounds in a
very real way.
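One plausible way to do that reconstruction, an assumption on my part rather than necessarily the staff's exact method, is to fit a lognormal through the elicited 10th and 90th percentiles:

```python
# Hypothetical reconstruction of a lognormal from an expert's elicited
# 10th/90th percentile bounds. The bound values are invented.
import math

p10, p90 = 2e-5, 5e-4  # illustrative elicited bounds, per reactor-year
z90 = 1.2816           # standard normal 90th percentile

mu = (math.log(p10) + math.log(p90)) / 2.0           # mean of log-frequency
sigma = (math.log(p90) - math.log(p10)) / (2 * z90)  # log-space std. dev.

implied_median = math.exp(mu)  # geometric mean of the two bounds
print(f"implied median: {implied_median:.2e}, log-sigma: {sigma:.3f}")
```

The implied median is the geometric mean of the two bounds, so the best estimate each expert gave can also be checked against the curve reconstructed from their bounds.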
MEMBER APOSTOLAKIS: But how does it look
to you guys that our experts in our regulatory agency
think that the frequencies will go up?
MEMBER WALLIS: Are we going to hear from
industry today?
MR. TREGONING: I don't know.
MEMBER WALLIS: When industry sat here and
said that they wanted changes in 50.46, we said okay,
make your case. They said yes, we're going to go
away. We're going to spend a lot of money. We're
going to come back with a good case. They're going to
come back maybe with things like this that look very --
MEMBER WALLIS: Then I wonder who to believe.
MEMBER BONACA: It doesn't surprise me to
think that we're going to have more initiators of
cracks. We had programs to inspect for and prevent
the degradation. The fact however is that if you
increase the number of initiators, you're not going to
catch them all. That's not surprising to me that you
will have some increase in initiators in LOCA.
Now, I don't know about the size. That's
a different story. I have no judgement on that. To
me, it's somewhat intuitive because although I
trust that they are going to put in place a good
program for inspections, they're not going to be
always successful in some cases because of cultural
issues, in some cases because it's tough.
MEMBER ROSEN: Well, the regimen is
completely changing you realize. We're going to these
risk informed programs. We used to look at
everything, even pipes that had no mechanisms acting
on them. The results of the programs up to date are
essentially zero. We haven't found anything. With
all this work and inspection, we've hardly found any
So the new programs were to aim at things
that matter: pipes that are risk significant, pipes
that matter, and also pipes where there are active
degradation mechanisms, something we know something
about. So I think we'll find more. We're hoping that
these new programs will find more and that what they
find will matter to us. In other words, a break in
what they find would have had some important
MEMBER BONACA: The point I was making --
MEMBER ROSEN: So what I'm saying is
looking backwards doesn't tell you much about what
we'll find in the future.
MEMBER BONACA: The point I'm trying to
make is that the VC Summer crack to the nozzle, I
would not be surprised if it took 15 years of pressure
to develop an original small defect into a through-wall
crack. It takes that kind of length of time for
something to evolve. Therefore, for the first 20
years, they wouldn't have to worry about that if an
inspection failed because it wasn't going through.
After 20 years, it becomes a different
story because they're more vulnerable to that kind of
defect to propagate and to become large. So you
depend much more on the inspections and the quality of
the inspection. I agree with you they're more focused
now on significance. That's the point I was making.
uncertainty analysis?
MR. TREGONING: No. Again, they assumed
the distribution and got their bounds that way. They
essentially just looked at --
MR. TREGONING: Well, they based their
mean estimates on reportable incidents. There was no
formal uncertainty analysis to the best of my
knowledge. I certainly wasn't in the room.
MR. CUNNINGHAM: I think it's assumed that
the error factor can be --
MR. TREGONING: For large and medium
breaks, yes. So they assumed a lognormal and an error
factor of ten. They didn't try to develop --
MEMBER APOSTOLAKIS: I want to emphasize
the issue of uncertainty here. I really don't think
it makes much sense in the next trial to have a
baseline and consider it. I would rather go with the
curves themselves. You will be surprised how easy
people will find it to work with the curves,
the lognormal curves themselves. Ask the experts
how they would change them.
MR. TREGONING: We still have to have a
basis for the log normal curve.
MEMBER APOSTOLAKIS: The baseline would be
the curve itself.
MR. TREGONING: We're planning on it, and maybe
we'll get into this. It's getting close to lunch.
MEMBER APOSTOLAKIS: Because if I look at
slide 12, you're comparing point values. I don't know
how valid that comparison is.
CHAIRMAN SHACK: Are we going to move on?
MR. TREGONING: I'm ready. Whenever you
guys are.
MEMBER WALLIS: This went for a long time --
MEMBER APOSTOLAKIS: It's okay. It really --
MR. TREGONING: This one may be even
longer. These are essentially the results which we've
talked about in a very general sense. Notice the
watermarked "interim" here in large letters. Again,
this is a very interim or short term result. All this
shows is the box plot for the median estimates.
Again, we only asked people for their median or their
best estimate guesses. All this shows is the
uncertainty in the median estimates, not the
uncertainty in the individual's guesses themselves
which would be much larger.
We didn't use these to develop the bounds
of our distribution obviously because they only sample
a certain percentage of the uncertainty. The
interesting thing that came out of this is that the
uncertainty bound was really in the small break LOCA
number. The biggest variability in these median
numbers was driven by people's opinions on the
likelihood of non-piping contributions to the small
break LOCA number.
When you got into large break LOCAs and
I'm going to make a general statement, people
generally didn't consider failure mechanisms other
than a large pipe break to be significant in
leading to these types of failures. With small
breaks, there was quite a wide variety of opinion. I
think that's reflected in this error bound here.
The other thing is that, because we were a
slave to the baseline that we chose, 5750 had
differences between BWRs and PWRs and small and large
break LOCAs. Those differences were largely retained.
You'll see here in the final these are actually the
numbers. I've only presented the mean numbers here.
However, we did develop distributions. I just wanted
to present the mean numbers here for a reference
point. I wanted to show 5750 and then WASH-1400.
MR. TREGONING: Oh, yes. Dramatically so.
What you see here is the comparative increases in the
current over 5750 in terms of the means. So you had
the biggest increase in small break LOCAs but still
less than a factor of four. The decrease actually was
greater with increasing LOCA size.
We've talked a lot about these so I'm
going to go quickly if possible. This isn't rocket
science but I think it was worth noting that people
expected all B --
MR. TREGONING: Material science which is
maybe more onerous. People felt that there were
dominant initiators. At least a lot of people agreed
on for small break and large break LOCAs what the
dominant initiators are. For MB LOCAs there was much
less agreement because of the number of the various
things that can cause MB LOCAs. People considered
especially and I talked about this for small break
LOCAs that the failure of non-piping components was a
significant contribution, to a much lesser extent for MB
LOCAs and almost no extent for large break LOCAs.
We talked about these global issues. In
terms of the median response to impacts of these
global issues, the group didn't consider that there
were significant differences or significant impacts to
things like risk informed ISI, hydrogen combustion,
all the things that I listed earlier.
However, there was a large divergence of
opinion about three of those specifically. Those were
the roles of future mechanisms and mitigation
techniques, ISI, and the hydrogen combustion issue.
I think we've heard even amongst the panel today a
pretty good difference of opinion for all of these
global sorts of issues. These were the same things we
were dealing with from the group.
MEMBER WALLIS: Well, hydrogen combustion
seems to me is something you can get rid of by proper
operation and design. It's not something like
cracking that's there and it's a question of how fast
it occurs. You can make it go away.
MEMBER WALLIS: For some of these things,
you can make it go away. It's knowing more and
operating better.
MR. TREGONING: Right. That's where the
effective mitigation comes into play. I think I've
touched on these. Again, I'll just highlight this
one. Aging mechanisms are what at least this group
felt would substantially affect and in general
increase the LOCA frequencies in the future. The
other thing that I didn't touch on --
MEMBER ROSEN: Notwithstanding the aging
management programs we're putting in place for plant
license renewals. They're basically expressing a vote
of no confidence in the whole aging management --
MR. TREGONING: I wouldn't say that.
MR. MAYFIELD: This is Mike Mayfield. I
think you have to keep in mind there's a notion that
from the time you identify a new mechanism until you
have effective programs in place there is a time
lapse. You don't identify the programs as a generic
mechanism that needs an aging management program
instantaneously. There's some time lapse. Sometimes
that's measured in years.
We saw that with BWR stress corrosion
cracking as the prime example. It takes some time
before you recognize this really is a big deal that
you need to deal with and put effective programs in
place. I don't think it's at all a vote of no
confidence; rather it's recognition that there's a
time lapse.
MEMBER ROSEN: For the new mechanisms.
MR. MAYFIELD: For new mechanisms.
MEMBER ROSEN: It's model incompleteness.
MEMBER WALLIS: Part of the problem is
that a new mechanism is likely to show up in an
environment where it has to be detected by the
industry, not by the regulators. That's a question
then as to how are you going to follow up some
indication that something has gone wrong.
MEMBER ROSEN: It's a safety culture
issue, Graham.
MEMBER WALLIS: It is a safety culture
MEMBER BONACA: Not only. Again we see
degradation mechanisms coming through despite
inspection. You can't always be successful in
inspections. Some of them we may impute to culture.
Some of them we don't know; the jury is out. At
times you've done volumetric inspections and you
didn't see things. Is it true that people didn't
look? I don't know. You can't judge. All I know is
that we're not successful. We know many inspections
are not successful.
MR. TREGONING: Okay. That covers the
near term elicitation. We're at the point in the
schedule where we talked about breaking for lunch. I
have about six or seven slides. I would say that
we've covered the bulk of this if not all of this and
probably a little bit of this. (Indicating.) I'll
leave it up to the group if we want to try to expedite
the rest of these slides and get you guys to lunch or
break and come back.
MEMBER ROSEN: I don't think you should
stand in the way of the ACRS's lunch.
MR. TREGONING: That's why I bring that up.
MEMBER WALLIS: I think the big question
to me is whether this kind of elicitation is really
the right way to get enough information to inform a rule
making decision. I think that's the issue really.
It's whether this sort of approach is going to get you
where you need to be in order to make a decision.
MEMBER KRESS: Do you have another option?
MEMBER WALLIS: Well, one option is to do
much more thorough research so you understand these
things better rather than going for somebody's
MEMBER KRESS: That's going to get you
there maybe ten years from now.
MEMBER WALLIS: Well, maybe that's the
appropriate thing to do.
MR. TREGONING: The problem with this
issue when you deal with pipe break LOCAs is different
from, I'll say, PTS. With PTS there's been a lot of
research. You're dealing with essentially one
mechanism, well defined, well studied. You have a
variety of transients, but it's still relatively well
defined. Here we're dealing with a whole compendium
of mechanisms. Getting research to the point for each
of these mechanisms, and I'll defer to Bill on this,
where you really know enough to feel like you can
accurately make predictions in light of all the
uncertainties that you have is really unbelievably --
MEMBER WALLIS: In two years we get your
experts saying something and industry experts saying
something else. You're going to get some rules.
We're going to have to give advice. We may well say
we don't believe either of you. We're in the position
of having to make a decision.
MEMBER FORD: I think you're being
unnecessarily pessimistic. I think if you had the
right panel of industry and academia, I think you
could come to a reasonably narrow conclusion.
MEMBER WALLIS: If you have the right
panel, you can always reach a narrow conclusion.
MEMBER FORD: That's what I'm saying. I
think it depends very much on the choice of who you
have on the panel to have an honest answer.
CHAIRMAN SHACK: You look at the PFM
calculations. If I use one code, I get an answer
that's different by ten to the fifth from the other --
MR. TREGONING: And not just codes, yes,
but the input into those codes.
CHAIRMAN SHACK: I think you have a lot
more going for you here. I mean, for one thing unlike
other expert elicitations there is a very large
database here.
CHAIRMAN SHACK: We're talking about
whether these things are going to shift by factors of
two to four. Compared to other things we don't know
about, you have a very large database. To find them
shifting by a factor of two doesn't shock me all that
much. Again, we have the data. We also have a much
better mechanistic picture.
The fracture mechanics codes I think are
useful when they are used in the way you've described
them with an expert elicitation to look at the inputs
because again there will certainly be uncertainty in
those. At least I'm confident that in the kind of time
scale you're talking about you could come up with
answers that are technically justified.
MEMBER KRESS: Frankly, I don't see any
other option.
MEMBER KRESS: If you're going to do this,
and I agree with Bill, I think you have a significant
database. To start from that and get a delta through
expert elicitation, I think that's your only option.
The only question may be whether you have assembled
the right experts and given them the right
questions. That you guys know how to take care of.
MEMBER ROSEN: One of the possible
outcomes is that you'll get so much disagreement, the
bounds will be so wide that you'll throw your hands up
and have to say we're just going to use the old
numbers, what you've always used.
MEMBER FORD: I don't think you're going
to have that disagreement. I really don't.
MR. TREGONING: I suspect there to be a
wide range of opinions.
MEMBER FORD: I also think there'll be a
MR. TREGONING: There may be a consensus.
I don't want to speculate. I hope there will be a
MR. MAYFIELD: But the process is designed
to deal with exactly that issue and still produce an
answer that's usable. The concern always is when the
uncertainty bounds are so large as to render the whole
thing useless. The overall expert elicitation process,
which was refined as part of the NUREG-1150
effort and has been further refined over time, is
designed to deal with exactly that situation where you
do have a wide range of opinion. It's not that we
expect everybody's going to come together and agree
that the number is increased by 2.73. No. We expect
there to be divergent opinion. The process is
designed to deal with that.
MEMBER LEITCH: Depending upon how finely
you break this down, the opinions may also tend to
converge. For example, it looks like now you have
BWRs and PWRs. I guess there's BWRs that are
operating with 304 stainless in the piping, and
there's other BWRs that have 316 nuclear grade, the
best we know how to make. If you subdivide those, the
opinions may tend to converge.
That's just one example. There's probably
some kind of a compromise between getting one number
for the whole fleet versus plant specific. Maybe
there's groups of plants that could be used. I just
don't know how finely you intend to divide the
MR. TREGONING: Yes. I wouldn't want to
prejudge that. I'd like the expert panel to come up
with those recommendations in terms of what they feel
is the best way to move forward with the elicitation.
If they feel like the best way is to try to bin plants
by various characteristics in terms of materials or
whatever, then I would be prepared to pursue that
approach. If people feel comfortable grouping things
together in one big generic lump recognizing that
they're going to be penalizing certain plants
unfairly, then --
There's an infinite number of ways to
decompose these things. I think our goal is to make
sure the expert panel decomposes them in a way that
they're comfortable providing answers as a group.
That in my mind is the focus, making sure that we put
the expert panel in a framework or in a position to
succeed so that they're going to feel comfortable with
the types of questions that they're being asked to
provide answers to. That's certainly one possible
approach. I'm sure we'll discuss that. I don't want
to prejudge that at this point.
MEMBER LEITCH: I understand.
CHAIRMAN SHACK: I'd like to break for
lunch now. You're just not going to get through this
before lunch. Try to get through this in 15 minutes
when we get back.
MR. TREGONING: I can do it in two
CHAIRMAN SHACK: No, you can't. Let's
come back at 1:30 p.m. Off the record.
(Whereupon, at 12:38 p.m., the above-
entitled matter recessed to reconvene at
1:30 p.m. the same day.)
CHAIRMAN SHACK: On the record. Let's get
back since there's a strong opinion that we ought to
finish by 5:00 p.m. today.
MR. TREGONING: I can be brief
MR. TREGONING: Next I'm going to talk
about the plans for the intermediate term elicitation.
We really discussed a lot of this. Hopefully we can
get through this quickly or maybe not. The process is
going to be designed and implemented by a whole team
at NRC. We have contractual support that will be
provided by Battelle and EMC2 of Columbus to help us
actually guide and develop the framework and the
approach that's followed for the elicitation.
We talked about the panel. The panel make
up is obviously very important. We're going to
solicit panel members, and the panel is primarily
going to be made up, I would assume, of non-NRC
participants. We'll be looking for industry
contributions, academia contributions, contractors
like Sandia and PNNL and people like that, other
Government Agencies as relevant, and then certainly
international participants.
We talked a lot about how it's important
to have the panel members represent the full
range of relevant technical specialties. This panel
make up is obviously critical to the whole process.
The process or the model that we're planning to follow
is one similar to what was utilized in the flaw
distribution determination for the PTS re-evaluation
of 10 CFR 50.61.
I believe you've been briefed on this
whole process. You should be very familiar with that
model, and at least philosophically that is the
framework we plan on utilizing in some sense here,
although we certainly need to tailor it to our
specific issues. That's not to say we're going to
follow this process in lock step.
What's going to be our baseline that we're
going to be providing updates to? First of all, we're
going to use a different pipe database than we used
for the near term effort. For the near term effort,
we utilized NUREG/CR-5750. Here we'll be using a pipe
database that's been provided by SKI. That'll serve
as the database for the pipe breaks.
This is updated now through about 1998.
SKI is the Swedish Nuclear Power Inspectorate. We have a separate effort
that actually feeds into this where we're updating the
database through a CSNI sponsored pipe database
project. It's called the OPDE project. Currently
there's ten countries that are participating in this
effort. Each country is supposed to provide input to
the database for their country's events.
Right now we're just focusing on '98 through 2001
for year one of this project. This is actually a
three year project. Each out year will focus on a
preceding year as well as potentially go back and look
at years earlier than 1998 to more fully complete the
database. So that would either be provide more
information or actually even uncover events depending
on the country that's involved.
MEMBER LEITCH: Excuse me. Does pipe
break mean a crack that's not through wall?
MR. TREGONING: They distinguish between
the severity of the pipe failure: burst versus non-through
wall crack versus erosion of the pipe due to FAC. All
those things are included. Essentially what it's
based on is any time you have to make a change or a
repair or a replacement those things are considered in
the database.
MEMBER LEITCH: Okay. Thank you.
MR. TREGONING: So it's more than true
ruptures. The other thing is it's not just Class one.
It's Class one, two and three as well as the support
piping. It's a fairly comprehensive look at the
balance of plant piping within the system. Primarily
right now the database is heavily U.S. biased for many
reasons. That's actually good for our intended
benefits because that's certainly what we want to
focus on.
We'll also be pulling in, because again
it's not just the pipe breaks but there are other
potential things that can lead to LOCAs, current PRA
estimates for some of these other more traditional
LOCA initiators: valves, pump seals, ISLOCAs, and
steam generator tubes. The idea is to combine these with
the pipe database efforts to develop what we're going
to call our service history baseline. These would be
the numbers or the frequency distributions that we
would be updating through the elicitation.
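The combination described here can be sketched in a few lines. This is a hypothetical illustration only: the size categories and all frequency values below are invented for the example, not taken from the SKI database or any PRA.

```python
# Sketch of building a "service history baseline": pipe-failure frequencies
# from operating-experience data are added to PRA estimates for non-piping
# LOCA initiators (valves, pump seals, ISLOCAs, steam generator tubes).
# All numbers are invented, in units of events per reactor-year.

pipe_based = {          # hypothetical pipe-failure-derived frequencies
    "small":  3e-3,
    "medium": 4e-4,
    "large":  3e-5,
}
non_piping = {          # hypothetical PRA estimates for non-piping initiators
    "small":  5e-3,
    "medium": 1e-4,
    "large":  1e-6,
}

# The baseline per size category is simply the sum of the two contributions.
baseline = {size: pipe_based[size] + non_piping[size] for size in pipe_based}

for size, freq in baseline.items():
    print(f"{size:>6} break LOCA baseline: {freq:.1e} per reactor-year")
```

This baseline is what the elicitation would then adjust up or down; the numbers themselves carry no meaning beyond the arithmetic.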
Again I use the word bounding here in a
deterministic sense. We also want to pull from recent
information from other industries; commercial fossil
plants, petrochemicals, oil and gas transmission, not
to use the numbers themselves, but just to provide us
with a sanity check to make sure where we're going
makes sense relative to the information contained
within other industries.
Again, like I said we'll be eliciting to
determine if any modifications to the service history
baseline are required. If modifications are required
for the longer term, over the next 35 years if the
expectations are that those modifications will lead to
increases or decreases. So we'll certainly probe the
full spectrum of possibilities for LOCA type rate
These are some more motherhood statements of
things we want to keep in mind. We plan on using some
modelling, as we've talked about, to base some
of the expectations for future changes in LOCA
frequencies resulting from aging, and not only aging but
also mitigation of aging mechanisms. This is along
the lines that we talked about earlier where you use
the elicitation to provide input to the models. So
you consider that and then you consider your model
uncertainty, the fact that two models can give two
very different answers in determining what your final
estimates are.
We're still in the embryonic stages of
planning on this. We're really envisioning two
elicitation processes. One is where we do a more
traditional elicitation like we did in the near term
where we essentially query the panel members within
their areas of expertise and develop the numbers for
the LOCA frequencies from that query. So that's one
parallel path. The other parallel path is it's more
the group approach, to have the experts provide the
input to the models and let the models provide the
answers themselves.
So at this point we're really
envisioning at least two parallel paths, maybe a third
if we talk about breaking into small groups and having
each of the small groups make estimates. The idea
behind that again is to provide some sort of sensitivity
analysis, or sanity check as I like to call it, on the
numbers that we're getting.
We talked about this, the effect of unique
events. These would certainly be unique events in the
future; things like Davis Besse, maybe things like
hydrogen combustion, and the emergence of additional
mechanisms that maybe we haven't considered
historically. We'll also probe within the group what
the consensus is about the effect of these events.
Again, the idea is to also factor in ISI
mitigation strategies so that we're not just looking
at aging and potential degradation, but then the
effect that the response would have on decreasing any
increases due to that. So we've talked a lot about
this, but the idea is to consider as many factors as
possible and be as balanced as possible to try to come
up at the end of the day with updated numbers that
again are balanced and seem to include as much as we
can. We'll break things down in the elicitation so
we'll have to recombine everything to determine the
final frequencies that come out of this elicitation.
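One conventional way such elicited pieces are recombined can be sketched as follows. This is an assumption-laden illustration: the discussion does not specify the aggregation math, so the sketch assumes experts supply multiplicative adjustment factors to a baseline frequency and that the panel is pooled with a geometric mean (a common choice for order-of-magnitude quantities). All numbers are invented.

```python
import math

# Hypothetical aggregation sketch: each expert gives a ratio of expected
# future frequency to the service-history baseline; the panel is pooled
# with a geometric mean, and the updated frequency is baseline * pool.
# Numbers are invented; the real elicitation design was still open.

baseline_freq = 3e-5                          # per reactor-year (invented)
expert_factors = [0.5, 2.0, 4.0, 1.0, 8.0]    # invented elicited ratios

# Geometric mean = exp(mean of logs); damps the influence of outliers.
geo_mean = math.exp(sum(math.log(f) for f in expert_factors) / len(expert_factors))
updated_freq = baseline_freq * geo_mean

print(f"aggregate adjustment factor: {geo_mean:.2f}")
print(f"updated frequency: {updated_freq:.2e} per reactor-year")
```

The geometric mean here is just one defensible pooling rule; a formal elicitation would also propagate each expert's uncertainty bounds, not just point values.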
MEMBER FORD: I'd like to just make a
remark. We were talking about this before lunch. On
that particular item, I do encourage you to bin things
according to not just the reactor types, BWR, but also
how the reactors have been operated and the materials
of construction.
MEMBER FORD: The -- set to 304 versus 316
but also water chemistry control.
MR. TREGONING: One of the good things
about the SKI pipe database different from the earlier
databases is it tends to be much more comprehensive in
terms of the things that are in there. There's root
cause analysis associated with not all but a good
percentage of the pipe failure records that are in there.
As I mentioned earlier, that can allow you
to go back and probe frequencies due to certain
mechanisms, and it can provide you a way to
isolate just a certain mechanism and take that
mechanism through, saying this is what this mechanism
has provided historically. What do my PFM models say
should be any additional adjustment for future
considerations for just that mechanism alone? We
certainly need to do that binning and that --
Finally, we have a longer term effort to
actually redefine the spectrum --
CHAIRMAN SHACK: Rob, can I just hold you
for a second here?
CHAIRMAN SHACK: We have someone from
Westinghouse who is going to talk a little bit about
their model.
MR. AUSTRATER: This is Bob Austrater from
Westinghouse, representing the Westinghouse Owners
Group. I just wanted to make a comment. There's been
a few comments made about the fact that we were going
to do a bunch of work related to frequencies and bring
it in, and it would be different than the work going
on here.
We have done a bunch of work in the
Westinghouse Owners Group. We've had discussions with
different staff members in Mike Mayfield's area. The
issues that keep coming up are the issues we expect
this elicitation panel to deal with. We have made it
known that we'd like to be involved in this process.
What we'd like to do is come in and have one set of
numbers that everybody arrives at, and not come in with
two separate ones and then dicker about the
That also injects us into the process. If
we have any process issues, we can get those on the
table and try to address those. So we're intending to
work together certainly from the Westinghouse Owners
Group, and I think that's pretty much true from the
industry. I just wanted to make that point.
MR. TREGONING: That's certainly the
intent of that study. The intent is to like you say
head off the two estimate approach at the pass so to
speak. How successful we'll be remains to be seen.
That's certainly as we stand now the effort.
Okay. Shifting gears a little bit to talk
about the longer term work. Again, this is really a
framework at this point because no real work has been
done yet. I just wanted to talk about
the goals, the general approach, the objective, and
some of the technical hurdles that we're going to need
to overcome to accomplish this task.
As we talked about, we're going to be
potentially determining a maximum pipe break size to
serve as a surrogate for a design basis or the
traditional design basis accident of the double ended
guillotine break of the largest pipe in the plant.
That will be the objective, to look at the feasibility
of replacing that --
something. Is it only the size that you will
determine? I mean, the current design basis accident
is not just based on size. It's plus loss of power.
You may be defining the context within this particular
size will serve as a design basis.
MR. TREGONING: Yes. I believe and I'll
defer to somebody that's more experienced than me in
terms of capacity, the capacities for the systems are
defined based on this.
MEMBER APOSTOLAKIS: The current 50.46 has
all kinds of requirements. Right?
MR. KURITZKY: The size and location are
the two things that 50.46 gets at.
MEMBER APOSTOLAKIS: No, the reliability
calculation would not be affected by this? The size
of the LOCA? Sure.
MR. KURITZKY: If you go with the existing
GDC 35, it says whatever the spectrum is you need to
consider the loss of off-site power and the single
additional failure.
MEMBER APOSTOLAKIS: So you mean that will
be used in that context.
MR. KURITZKY: Right. It's the size and
location that's being addressed here especially in
CHAIRMAN SHACK: The redefinition could in
fact be combined with the work you're doing to come up
with --
MR. KURITZKY: It's just that the time
scale --
MR. TREGONING: Is different, yes.
CHAIRMAN SHACK: Eventually, yes, they
will --
MEMBER APOSTOLAKIS: Eventually meaning
beyond the two years.
MR. TREGONING: We're hoping for this
by 2004.
MR. KURITZKY: That's a couple years
behind the other stuff we've been discussing today.
MEMBER APOSTOLAKIS: But the eventual
marriage of this with your approach, the risk informed
approach, that will happen before 2004?
MR. TREGONING: The marriage will occur as
part of this.
MR. KURITZKY: Theoretically the other
stuff we already have in the books by then.
MEMBER APOSTOLAKIS: Again, the max pipe
break size, one can take that and go to the current
50.46 and implement it as is with this new size. Is
that correct?
MR. TREGONING: You could do that.
MR. KURITZKY: You could in the future
MEMBER APOSTOLAKIS: You could in August
MR. KURITZKY: Well that's technical work.
MR. TREGONING: We might not support --
MEMBER APOSTOLAKIS: I could also take
that and use it in my risk informed approach that can
be site specific or the generic approach. Is that
correct too?
MR. KURITZKY: What would happen then is
the work that we're doing right now is based on these
LOCA frequencies that will hopefully come from the
interim intermediate term.
MR. TREGONING: It's the intermediate term.
MR. KURITZKY: Now when this work is done,
theoretically we could have better, more confident
LOCA frequency numbers that we then use for the same
purpose. The one thing we have to look for in the
risk is if this is done and it comes up with less
uncertain numbers and they are substantially higher
than what we came up with in the intermediate --
MR. TREGONING: Here's how I would maybe
try to think of it. You have your LB LOCA frequency.
LB LOCA covers the whole spectrum of pipe sizes from
six inches potentially up through 31 inches. Pieces
of that distribution are attributed to different pipe
sizes. Essentially what we'd be trying to do here is
evaluate the dependency between LOCA frequency and
pipe diameter.
If we say for instance, just for argument,
that 99 percent of the LB LOCA frequency comes
from contributions from pipes that are six inches to
ten inches, then that perhaps, and I'll say perhaps
because it's not clear, provides a rationale for
looking at a potential design change. I'm saying if
that's the case, then maybe you don't need to consider
a 31 inch pipe break. Maybe you only need to consider
the ten inch pipe break or if you want to have some
margin, I don't know, a 15 inch pipe break. That's
the thinking at least, that we would try to more
definitively evaluate how pipe size relates to LOCA
frequency, determine if we can make an assessment if
there are certain pipe sizes that are just so unlikely
to fail that they can be eliminated.
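The size-versus-frequency argument can be illustrated numerically. The frequency breakdown below is entirely invented, chosen only to mirror the "99 percent" example in the discussion; the code just finds the smallest diameter bin that captures a chosen fraction of the total large break LOCA frequency.

```python
# Hypothetical sketch of the pipe-size vs. LOCA-frequency argument:
# given an assumed breakdown of LB LOCA frequency by pipe diameter,
# find the smallest upper bin diameter capturing >= 99% of the total.
# All frequencies are invented, per reactor-year.

freq_by_diameter = [      # (upper bin diameter in inches, frequency)
    (10, 9.95e-6),
    (14, 6.0e-8),
    (20, 3.0e-8),
    (31, 1.0e-8),
]

total = sum(freq for _, freq in freq_by_diameter)
cumulative = 0.0
surrogate_size = None
for diameter, freq in freq_by_diameter:
    cumulative += freq
    if cumulative / total >= 0.99:     # 99% coverage threshold
        surrogate_size = diameter
        break

print(f"pipes up to {surrogate_size} inches capture "
      f"{cumulative / total:.1%} of LB LOCA frequency")
```

As the discussion notes, such a coverage fraction by itself only suggests, and does not justify, replacing the double ended guillotine break of the largest pipe; margin and risk guidelines would still have to be factored in.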
MEMBER KRESS: Do you have a list of what
you think would be the resulting design changes as a
function of the pipe size you choose for the maximum
MR. KURITZKY: As far as what types of
applications industry would be pursuing?
MEMBER KRESS: Yes. Because in order to
do that second sub-bullet I think you need that. You
need to know what changes are going to result when you
choose a different pipe size as your large break LOCA.
MR. KURITZKY: Through a number of public
meetings, we've been meeting with stakeholders on this
topic for probably a couple of years now. Early on,
we had lists of potential applications and benefits
that industry --
MEMBER KRESS: Okay. You're getting an
idea of what they would do if you change that.
MR. TREGONING: You're right. That has to
be factored back in at the end of the day to make sure
your risk guidelines aren't being violated.
MEMBER KRESS: That's right.
MR. KURITZKY: The Westinghouse Owners
Group has supplied us with some information that any
IS --
MEMBER KRESS: Is it a function of the
pipe size you choose? If you choose a real low one,
will that do a lot of things?
MR. KURITZKY: The information we had was
looking in general about getting rid of large break
LOCAs --
MEMBER KRESS: For the diesel start
MR. KURITZKY: For a lot of different
things. It wasn't categorized by if you got to this
size, you could do this.
MEMBER KRESS: Okay. You don't have it
according to size.
MR. KURITZKY: I think if you got all the
way down to six inches, you could do all of these
things. Now, if you don't get that far down, you get
ten or 12 or 14, there's some subset that would
probably --
MEMBER KRESS: That's what I was thinking.
MR. TREGONING: But that feedback on down
the road is going to be important. If we're able to
be successful in saying pipe size versus LOCA
frequency and we come up with some graph, at that
point you would want to bounce that off of what
changes in the plant and how the risk based
guidelines would change due to potential plant changes.
This bullet is just to say that determining LOCA
frequencies or LOCA probabilities is just one piece of
the puzzle. You need to combine that in the entire
risk space to make sure at the end of the day you're
satisfying your original intent of those guidelines.
MEMBER APOSTOLAKIS: But the work that
Alan presented earlier this morning on the ECCS
reliability requirements. Until this work is done,
you will go with the current definition of the maximum
size. Right?
work will be done with that. Now in July 2004, some
utilities may decide to go back and revisit the
reliability requirements with a new maximum pipe break
size. Is that correct?
MR. TREGONING: It's possible.
MEMBER APOSTOLAKIS: It's possible, yes.
MR. KURITZKY: Yes. The work we discussed
earlier was dealing with the probability or the
frequency of the LOCAs.
MR. KURITZKY: It can be any of them. But
let's say large break LOCAs. The PRA doesn't
distinguish between a six to ten inch break, or ten to
14, or 14 to 18 or whatever. Large break LOCA is a
category. Different plants have different --
insensitive to the size of the break.
MR. KURITZKY: It's somewhat.
MR. TREGONING: Somewhat.
MR. KURITZKY: The frequency may change
also. This long term effort --
MEMBER APOSTOLAKIS: Because of all this
MEMBER APOSTOLAKIS: Okay. Not because of
the definition of the maximum pipe size.
MEMBER ROSEN: I thought that you had
excised risk based things from here. There it is
again. Do you mean risk based or risk informed?
MR. TREGONING: I apologize if I'm not
using the correct vernacular.
MEMBER APOSTOLAKIS: It may be risk --
soon. Now it's informed.
MR. TREGONING: I apologize. I'm never
sure what to put down for something like that.
change the terms every five years.
MR. TREGONING: I just can't keep up.
MEMBER APOSTOLAKIS: It's risk informed.
MR. TREGONING: Okay. Next slide please.
I'm just going to finish with two slides that talk
about some of the technical advances that will be
needed. The first thing that we'll be doing is
evaluating and updating current codes and models to
make sure that we're adequately modelling these pipe
failure mechanisms. We'll be drawing off of work at
Argonne and other places for the latest modelling of
crack growth rates and things like that under some of
these various relatively severe environments.
The other big change from most of the PFM
work is where possible we want to utilize realistic
loading histories and frequencies for these various
pipes, not code allowables and things like that.
We're going to try to make it as realistic as
possible. Again, whenever you do this type of
analysis, you have to combine your loading with your
residual stress distribution and your pipe boundary
conditions.
This I would argue is really the crux of
the problem. There's so much variability. There's so
much variety in these input parameters. As Bill said,
and certainly Dr. Ford knows, the order of magnitude
difference in the answer that you get from your
model really lies in the assumptions that are made
here. It is going to be very critical to go through
this in a rigorous way, as rigorous as possible. Like
I said, we want to incorporate, as much as possible,
up to date material aging and environmental effects --
MR. SCHROCK: What is
MR. TREGONING: I don't want to prejudge
how we decide to deal with residual stress
MR. SCHROCK: No, I mean what do you mean
MR. TREGONING: It could be either/or.
There's a couple of different ways to deal with any of
these inputs. Historically what's been done is you
say I'm going to take the worst case I can imagine for
residual stresses and apply that. That leads to an
answer. Sometimes that lone assumption can really
drive the outcome.
One of the things we'll need to be doing
will be looking at the sensitivity of the result to
these types of assumptions. Again, there's a lot of
unknown in here. It may end up being perfectly
appropriate to make this assumption. The goal is and
the hope is that we'll be able to focus in on
something, on assumptions that are as realistic as
But again, there are so many input
parameters and there's so much variability that
sometimes you can't get there for any given parameter.
If you really can't get there in the past what's been
done is people have made conservative assumptions
realizing that there's some margin which may be
quantified or not and then move on.
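The kind of sensitivity check described above can be sketched with a toy model. Everything here is invented: the exponential dependence of failure frequency on residual stress is purely illustrative, not a real PFM relationship, and the stress values are arbitrary placeholders for a "realistic" versus a "bounding" assumption.

```python
import math

# Toy one-at-a-time sensitivity sketch (entirely invented model): shows
# how a single input assumption, here an assumed residual stress level,
# can swing a computed failure frequency by orders of magnitude.

def toy_failure_freq(residual_stress_ksi):
    # Invented form: frequency grows exponentially with assumed stress.
    return 1e-8 * math.exp(residual_stress_ksi / 5.0)

for label, stress in [("realistic", 10.0), ("bounding", 40.0)]:
    print(f"{label:>9} ({stress:.0f} ksi): "
          f"{toy_failure_freq(stress):.1e} per reactor-year")

ratio = toy_failure_freq(40.0) / toy_failure_freq(10.0)
print(f"bounding / realistic ratio: {ratio:.0f}x")
```

The point of such a check is exactly what the transcript describes: if one lone assumption drives the answer by a factor of hundreds, that input deserves the most scrutiny in the elicitation.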
Again, the goal is going to be to use
realistic inputs across the board, not just for
residual stress distributions, but across the board.
There may be instances when we have to fall back on
essentially conservative or bounding type analysis to
provide us with inputs. I don't think that's unique
to this problem. A lot of problems are forced to make
those assumptions.
Finally, we talked about this a lot. We
want to also incorporate this into our redefinition.
We need to develop some sort of scheme to incorporate
potential or surprise future mechanisms. We'd like to
base that on what we've seen from service history.
structuralist interpretation of defense in depth.
That's how you anticipate surprises.
MR. TREGONING: Yes. So we realize that
we need to add something.
MEMBER APOSTOLAKIS: Take extra measures.
That's what it is.
MR. TREGONING: Right. We need to account
for this. How we do this is really an open question.
There's going to be a lot of contention in this issue
understandably. It's something that we're going to
strive to do within the effort.
We'll be considering effects from normal
operating loads, but then also certainly transients,
and by transients I mean earthquakes, certainly well
known, but then also thermal transients, which are
actually more prominent and in some systems can lead
to much more damage. We need to look at updating
our flaw distributions again. We've done this effort
for RPVs, but certainly fabrication differences in
piping could lead to differences in these distributions.
The other thing that we have to consider,
that we don't have to consider for PTS, is the effect
of flaw initiation: flaws that are not there, or maybe
are there only in a microscopic sense initially, essentially
initiate and grow due to these aging or degradation
mechanisms. The other thing we need to do since we're
talking about LB LOCA is to consider LB LOCA
frequencies from internal pipe failures but then also
external failures because they also potentially
contribute to the LB LOCA frequency.
So just considering pipe break frequencies
is not quite enough especially when you're looking at
potentially removing that consideration from your
design basis accident. You need to think, at least
consider the potential effects from LB LOCAs occurring
from external events. That's it. It wasn't quite 15
minutes, but not bad.
MR. KURITZKY: Let me make one point. I
think Rob mentioned on the long-term effort that
nothing was ongoing right now. Correct me if I'm
wrong.
There's a lot of code work that is being pursued,
modifications to the PFM codes, et cetera.
MR. TREGONING: We don't have a contract
in place. I would say that nothing substantive is
really happening. There's certainly been a lot of
thought put into the approach. We haven't really sat
down, rolled our sleeves up, and dived into it yet.
MR. CHOKSHI: I think in part like with
respect to CRDM activities and other things, we are
making some modifications to the codes which would fit
into this project.
MR. TREGONING: That's true. So
modifications for that, yes. Those things will feed
in. So they're corollary efforts that will have an
indirect role in this. But there's been no direct
work per se. Thanks for clarifying that. That's an
important point. Any other questions?
CHAIRMAN SHACK: If we can move on to the
acceptance model. Just remember, Steve, 5:00.
MR. BAJOREK: Got it, 5:00. I'll promise
to make up a half an hour here even if it takes an
hour to do so. The package that's coming along is not
as onerous as it looks. We're not going to try to go
through every one of those. There is a little bit of
background information. With the extra time that I've
had, I'm going to try to throw out a few that I think
we've already covered.
What we're going to do the rest of the
afternoon is talk about the 10 CFR 50.46 acceptance
criteria and Appendix K. My name is Steve Bajorek.
I'm here with Norm Lauben. He'll be talking about the
decay. Ralph Meyer has done a lot of the work on the
acceptance criteria. He could not be here today, so
I'm going to go over his information.
Just by way of brief background, what the three
of us have been working on stems from SECY-01-0133
that has asked us to try to go through, look at the
feasibility of risk informing the acceptance criteria,
look at Appendix K, and specifically at the decay heat
model and three other models in there for potential
relaxation, but also to take a look at what might be
some of the problems of Appendix K. What are some of
the shortcomings that could lead to nonconservatisms?
If we're going to go through rule making, the idea is
we'll take the bad with the good and fix everything at
the same time.
We have been working in those three areas
with regards to the acceptance criteria. Most of the
work has been really a history lesson in where the
2200 and the 17 percent embrittlement criteria came
from and the other models in Appendix K. We'll go
through each of those. By way of outline, what I hope
to accomplish this afternoon is we put up
recommendations that we'll get to at the end of the
day and we'll follow through most of these in order.
First, we'll talk about the acceptance
criteria. Our conclusion at this point is yes, for the
remaining ones which are not risk informed, we should
be able to do so. We think that in the case of the
decay heat and the other Appendix K models it's
feasible to come up with a new Appendix K, which we
might call an Appendix K prime, that would make use of
better science, replace the decay heat, and replace
some of the other models that have been suggested for
change. We'll talk about each of those.
Then we'll move into some of the
nonconservatisms that we feel need to be incorporated,
not necessarily in rule making. But because they're
also applicable to today's Appendix K, they should be
pursued outside of rule making. When a new Appendix
K and acceptance criteria do come to pass, they need
to be corrected on that type of a time frame.
MR. SCHROCK: Steve, here you're saying
replace the old decay heat standard with the new one
in Appendix K. That seems inconsistent with what was
said earlier, that it's an option added to the existing one.
MR. BAJOREK: It would be an option. The
way that we're thinking of this -- and I have an
overhead to try to show it, as I did hear those
questions earlier -- is the generation of a new
option that would give the Applicant the liberty
to use a more up-to-date decay heat standard, the '94
standard in this case.
We would preserve as grandfathered options
the current Appendix K where you would be required to
use ANS plus 20 percent and the best estimate rule
where the intent would have been to use the most
recent decay heat model, but in reg guide 1.157 they
do specify the '79 model. At the end of all the
additional documentation that would take place, we
would anticipate correcting the best estimate guide
1.157 to allow it to use the most modern decay heat
standard as well.
What I'd like to talk about first is the
acceptance criteria and along with that the metal
water reaction correlations that are used in Appendix
K because they're closely tied together. As I
mentioned Ralph Meyer has done most of the work in
that area, but he can't be here today. I'm going to
try to go through a lot of the information that he's prepared.
One of the things to keep in mind is out
of the five acceptance criteria that are a part of
50.46, two are already performance based. The third,
the one percent hydrogen generation, can very easily
be made performance based because it's effectively now
covered under 10 CFR 50.44.
hydrogen generation and what would have to be done to
containment in order to cover that was already picked
up elsewhere. We're looking at that particular limit
as being redundant and not necessary.
We would have to do work to modify the
other two which are the Appendix K limit of 2200 and
the maximum cladding oxidation of 17 percent. Both
are currently prescriptive, but they're closely
related. We have to look at these as a pair. The
relationship and the specification of these two
numbers arose from the commission rule making in 1973
that basically was intended to ensure that the core
still looked like a core following a LOCA. If we had
a LOCA, the core will still remain essentially intact.
Our goal and guidelines for the risk
informing of 50.46 is to continue to go after that
intent. When the regulations are complete and we can
risk inform them after a LOCA, the core still
essentially looks like a core. In addition, 50.46
right now is specific to two types of clad: ZIRLO
and Zirc-4. In some of the information I'm going to
show you, we think that it's feasible to eliminate
that need to get an exemption for other types of
cladding material.
The two things that we need to take a look
at are the 2200 limit and the 17 percent oxidation.
We'll break this into two different categories at this
point. When the commission went through the rule
making, the two numbers were specifically derived from
clad embrittlement tests that were done to ensure that
you had margin to fragmentation of the cladding.
The 2200 degree limit was also considered
in light of the possibility that there would be a
runaway temperature excursion due to a very high
metal-water reaction. So we've tried to take this issue and break
it into two at this point to try to consider what
would be the temperature effect on runaway type
reactions and what would be the alloy effect and would
that have anything to do with that specific criteria.
This is the statement out of the
commission opinion regarding the runaway reaction. If
you just take a quick look at the numbers on there,
their concern was primarily at temperatures 2,300,
2,400, up to 2,700. The information at the time from
which the Baker-Just equation was developed showed
that there would be an exponentially increasing
reaction rate with temperature up in this region.
The commission using that data and
essentially that correlation wanted to stay away from
those temperatures at which you would have this very
rapid increase. The feeling at the time was that as
long as you stay closer to 2,300 or 2,400, the
temperature criterion would be satisfied. I want to
leave you with the message that, as far as the
runaway cladding reaction is concerned, there was a
comfort level that something much higher than 2200
would have been used if they had sufficient
information at the time.
The Baker-Just correlation and those that have
been developed since are generally of the form shown
here: an exponentially increasing Arrhenius-type
function, where the oxidation rate and the reaction
increases very rapidly with temperature. Since then
it's been shown fairly conclusively that the data that
was used to justify the Baker-Just is overly
conservative in that temperature range between about
2,000 and 2,300 degrees Fahrenheit.
At very high temperatures, it does a
reasonable job. At low temperatures, it seems to do
a fairly reasonable job. In the middle, it's been
found to be rather conservative. Newer data shows
that there's much less scatter in the oxidation data
than there had been in that which was used to develop
the Baker-Just correlation. What we feel that it's
possible to do at this time --
MR. SCHROCK: Are you going to show us the
data or are you just going to show us the curve?
MR. BAJOREK: I'll show you some of the
curves in a couple of figures. I wanted to try to get
the data, but we couldn't get that in the right form
in time. We'll do that in terms of the alloys.
If the intent was to stay away from the
knee of this curve and we can replace using better
information that the Cathcart-Pawel is a better
representation of the oxidation and heat release data
in this temperature range of interest, we can retain
the same amount of conservatism by looking at the heat
release that we would get from Cathcart-Pawel compared
to Baker-Just. Whatever that heat release is, we can
show that Cathcart-Pawel would give that same heat
release at 2,307 degrees F as Baker-Just would at
2,200 degrees F.
MEMBER WALLIS: What's the criterion for
runaway? The curves don't give you any criterion for that.
MR. BAJOREK: There wasn't one in the
Appendix. We haven't specified one in terms of
looking at the curves and the slopes. It's really an
energy balance. That's part of the problem.
MEMBER WALLIS: If it gets hotter, then it
gets hotter. This is a feedback thing. It's --
MR. BAJOREK: At some point, you generate
more energy within the clad than you can remove. So
it's not simply a particular temperature. It's not
indexed with --
MEMBER WALLIS: It was dependent on the
MR. BAJOREK: That's correct. So you have
to take a look at the energy supply from decay heat
and what you could remove in steam cooling.
MEMBER WALLIS: Is steam cooling the
required mechanism --
MR. BAJOREK: Right now at very low
flooding rates it would be. One of the very difficult
things that you do see in calculations -- and we have
done code calculations to try to see where this
runaway is -- is that it's very subject to the
conditions that the code is predicting in the hot
assembly. That gives us the concern that if you start
to use a code to try to come up with that number,
you're starting to base your belief on what that code
can predict for -- boiling. We don't want to really do that.
MEMBER WALLIS: Something must give you
this dashed line that you've drawn up there.
MR. BAJOREK: This is a representation of --
MEMBER WALLIS: That green dashed line.
MR. BAJOREK: This green one?
MEMBER WALLIS: That came from a code or --
MR. LAUBEN: Excuse me.
MR. BAJOREK: This is representing some
energy generation rates. We don't know.
MR. LAUBEN: No. Yes we do. Norm Lauben,
research. If you look at the two equations for metal
water reaction and you assume the same thickness of
oxide, they will give you equivalent heat release at
those two temperatures exactly. In other words, all
you're doing is equating two equations.
MEMBER WALLIS: I understand that. You
don't know your Q triple prime very well.
MR. LAUBEN: It doesn't matter. You're
assuming 2200 to Baker-Just and you're solving for T
with Cathcart-Pawel to get the same heat release.
MEMBER WALLIS: -- uncertainty maybe will
be in this.
MR. BAJOREK: What we're trying to
represent is that the same energy generation, based
on better information, could be allowed at a higher
temperature now. We would preserve that same margin,
whatever it was, to a run-away reaction, keeping in
mind that there are a whole flock of things that
affect the point at which the cladding would start to
increase rapidly in temperature.
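A minimal numerical sketch of the equivalence described here: under parabolic oxidation kinetics, both correlations have rate constants of the form K(T) = A exp(-Q/RT), and at the same oxide thickness the instantaneous heat-release rates are in the ratio of the rate constants. The sketch solves for the Cathcart-Pawel temperature whose rate constant matches Baker-Just at 2,200 degrees F. The coefficients below are commonly quoted handbook values, not the staff's analysis constants, so the solved temperature comes out somewhat different from the 2,307 degrees F figure quoted in the transcript; the qualitative point -- the same heat release occurs at a higher temperature under Cathcart-Pawel -- is what the sketch shows.

```python
import math

# Parabolic rate constants K(T) = A * exp(-Q / (R * T)), with K in
# (mass reacted per area)^2 per second. The Baker-Just and
# Cathcart-Pawel constants below are commonly quoted handbook values
# (an assumption -- verify against the original reports before use).
R = 1.987  # cal/(mol*K)

def k_baker_just(t_kelvin):
    """Baker-Just rate constant, g^2 Zr reacted / cm^4 / s."""
    return 33.3 * math.exp(-45500.0 / (R * t_kelvin))

def k_cathcart_pawel(t_kelvin):
    """Cathcart-Pawel rate constant, converted from an oxygen-weight-gain
    basis to a Zr-reacted basis (factor (91.22/32)^2, assuming ZrO2)."""
    zr_per_o2 = (91.22 / 32.0) ** 2
    return 0.1811 * zr_per_o2 * math.exp(-39940.0 / (R * t_kelvin))

def f_to_k(t_f):
    return (t_f - 32.0) / 1.8 + 273.15

def k_to_f(t_k):
    return (t_k - 273.15) * 1.8 + 32.0

# With the same oxide thickness, equal heat release means equal rate
# constants, so find T where K_CP(T) = K_BJ(2200 F) by bisection.
target = k_baker_just(f_to_k(2200.0))
lo, hi = f_to_k(2200.0), f_to_k(3200.0)
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if k_cathcart_pawel(mid) < target:
        lo = mid
    else:
        hi = mid

t_equiv_f = k_to_f(0.5 * (lo + hi))
print(f"Equivalent Cathcart-Pawel temperature: {t_equiv_f:.0f} F")
```

Because Cathcart-Pawel predicts a slower reaction than Baker-Just in this range, the solved equivalent temperature always lands above 2,200 degrees F, which is the direction of relief being argued for.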
MR. BANERJEE: Is the rate of increase the same?
MR. BAJOREK: This rate out here?
MR. BANERJEE: Yes. Usually in a run-away
chemical reaction it's not the absolute value that
matters but the rate of change with temperature. Are
they the same?
MR. BAJOREK: They're different between
those two correlations.
MR. BANERJEE: Because when you look at
heat balance, the worst condition is clearly the
condition where you are feeding just enough steam not
to cool but to do the reaction in which case all of
these will run away. It doesn't really matter which.
MR. BAJOREK: That's right.
MR. LAUBEN: This is just a comparison of
the amount of heat generated with the two equations.
MR. BANERJEE: I understand.
MR. BANERJEE: But it's the rate of change
that really matters. It's not the amount of heat.
MR. LAUBEN: Well, this is a comparison of
heat generation. It doesn't tell you anything about
the heat removal.
MR. BANERJEE: I understand but what about
the slope?
MR. LAUBEN: All right. This also doesn't
tell you anything about the slope either.
MR. BANERJEE: Right. That's what it
looks like.
MEMBER WALLIS: Well, Sanjoy is right. If
you cool and heat something it will come to some
equilibrium temperature but it won't come to an
equilibrium temperature if when it departs from an
equilibrium temperature it runs away because the rate
at which heat is supplied is bigger than the rate at
which it is removed.
MR. LAUBEN: All this is saying is, if you have
an ideal world in which you want to know what the heat
generation due to metal-water reaction is and all
other things are the same including the thickness of
oxide because the thickness of oxide is one of the
parameters in the rate equation you just --
MEMBER WALLIS: I think what we're saying
is a graph like this equating Q triple prime from CP
to some mythical Q triple prime tells you nothing
about --
MR. LAUBEN: Yes, that is correct. If you
look at the rates that I think Sanjoy was asking
about, you can see, for instance, that because the
rate of change with temperature is so much greater
with Baker-Just, you can't get to quite as high a
temperature if you are using Baker-Just, for that very
reason. It reaches a run-away at a much lower
temperature than Cathcart-Pawel does because of the
rate of change of the rate with temperature. Is that
what you are asking, Sanjoy?
MR. BANERJEE: Yes, probably if I look to
the curves in more detail you may end up having the
same conclusion but it's not just the temperature and
the rate of heat generation that matters but the rate
of change of the heat generation.
MR. LAUBEN: Yes, and what I'm saying --
MR. BAJOREK: You're right. What you find
in an evaluation model is, once you get up to these
temperatures on either one of those, the increase in
heat generation and the increase in temperature cannot
be alleviated by the heat transfer that you probably
see at that part of the LOCA. You melt in the
calculation. So all we are saying is that, in terms
of coming up with a limit, newer information will
justify a higher one.
MEMBER WALLIS: I think when you come back
and talk about run-away to this committee you better
have a criterion for run-away and not this sort of
vagueness about heat transfer --
MR. BANERJEE: Well, there's a classical
chemical engineering formula, which is T reaction
minus T coolant is equal to R T squared divided by
the activation energy, or something. I can send it
to you. So that tells you why it will run away or why
it won't.
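The classical formula referred to here appears to be the Semenov criterion for thermal runaway: for an Arrhenius reaction cooled by linear (Newtonian) heat loss, the steady state is lost when the reaction temperature exceeds the coolant temperature by roughly R T^2 / E. A minimal sketch, using the Cathcart-Pawel and Baker-Just activation energies purely as illustrative inputs (an assumption, not the staff's analysis):

```python
# Semenov runaway estimate: critical reaction-minus-coolant temperature
# rise delta_T_crit ~ R * T^2 / E for an Arrhenius reaction with linear
# heat removal. Activation energies below are commonly quoted values
# for the two oxidation correlations (illustrative assumption).
R = 1.987        # cal/(mol*K)
E_CP = 39940.0   # cal/mol, Cathcart-Pawel activation energy
E_BJ = 45500.0   # cal/mol, Baker-Just activation energy

def semenov_delta_t(t_kelvin, e_activation):
    """Approximate critical temperature rise above the coolant."""
    return R * t_kelvin ** 2 / e_activation

t_2200f = (2200.0 - 32.0) / 1.8 + 273.15   # 2200 F in kelvin
dt_cp = semenov_delta_t(t_2200f, E_CP)
dt_bj = semenov_delta_t(t_2200f, E_BJ)
print(f"Critical delta-T near 2200 F: CP ~{dt_cp:.0f} K, BJ ~{dt_bj:.0f} K")
```

The larger Baker-Just activation energy gives a smaller critical temperature rise, consistent with the observation in the discussion that calculations reach run-away sooner with Baker-Just than with Cathcart-Pawel.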
MR. LAUBEN: Well, the problem though as
Steve said is that if you're trying to relate this to
real calculations which is an oxymoron I realize.
(Laughter.) But if you are trying to look at
calculations none of them give smooth behavior of heat
removal especially during reflood in LOCA. You get
water splashing up there and the heat removal will be
somewhat erratic as well.
You can never really have two identical
situations with two different calculations. You will
find that as much as you may try to get these things
as close as possible they never will be quite as close
as possible. And because of this uncertainty and
variability I did about 100 calculations of this sort
of thing and could never quite narrow it down.
But you could see a trend, almost a
probabilistic type of thing, that over a series of
many, many calculations the run-away point will change.
will change from one minor set of -- You may be only
changing the power by something in the fourth decimal
point and you still won't get smooth behavior from one
condition to another. You do see that in a large
measure because of the nature of Baker-Just and the
nature of Cathcart-Pawel that you reach run-away in
general much sooner for Baker-Just, because of the
rapid change with temperature; the slope is very steep.
MEMBER WALLIS: The slope is more
important than these --
MR. LAUBEN: That's it. Sure. No.
That's an easy and quantifiable way to compare it. It
just gives you a minimum measure because what's really
true because of the slope changes so much is that you
can see a much bigger difference. In general I would
say I could never achieve turn-around much above 2300
in the limited 100 calculations I did with Baker-Just
but I could reach something as close to 2800 with
Cathcart-Pawel. Now that's --
MR. BAJOREK: It's an energy balance.
MR. LAUBEN: It's an energy balance.
MR. LAUBEN: Excuse me.
MEMBER WALLIS: Maybe you need to show
these calculations. Something more convincing than
what we heard today --
MR. LAUBEN: I think what we want to do is
see and Steve has given this some thought, actually
quite a bit of thought, is can we possibly do these
calculations in a more controlled way than I did them
because that's really the course of this? How can you
really control these calculations so that when you
make an incremental change you can see where this goes.
MR. BAJOREK: Let us take the action to
try to put some better numbers on run-away which was
not the find of the original (Inaudible.) so we
haven't done that.
MR. LAUBEN: It's hard to control the --
MR. BAJOREK: Let's move on because this
really is not the issue on setting the 2200. It's
really in the mechanical integrity. That's where the
2200 came from. This is just an example of something
else that was considered.
Now before we close it out the other
question has to do with what's the difference between
alloys and how does it factor into things in order to
try to see is there a temperature effect that could be
aggravated by changing from zirc-4 to some other
alloy? We went through a fair amount of experimental
information that included zirc-4, zirc-2, newer clads
such as ZIRLO and M5.
Note, by the way, that Baker-Just in some of the
earlier data was based on zirconium data only. It was
not from an alloy, which might be one of the reasons
why it tends to stand out in relation to the other
sets of data. I would have preferred to show the
experimental data, but instead we can compare and
contrast the alloys using the correlations that have
been developed in the same type of format, a
Cathcart-Pawel type of relation or Baker-Just.
You see that Baker-Just gives a
significantly higher oxidation growth rate, higher
energy release. I think it's about 50 or 60 percent
higher when you are at 2200. Regardless of whether
you are looking at Zirc-4 or M5 or some of the other ones
which aren't shown you see there is very little
difference as you go from one specific alloy to the
next. The reason for that is that the dominant rate-
controlling step is the diffusion of oxygen through
this growing oxide layer to get to where the zirconium is.
The presence of tin or niobium or some of
the other elements that make up the alloy almost as
trace elements have very little effect on that
oxidation growth rate and therefore the energy
released due to the metal-water reaction.
MEMBER WALLIS: So it's the diffusion
limited reaction by the oxide layer?
MR. BAJOREK: It's a diffusion limited
reaction based on its ability to diffuse through the
zirc oxide that is growing on the outer surface. Now
the more --
MEMBER WALLIS: How do you calculate run-
away for a diffusion limited reaction?
MR. BANERJEE: It's still exponentially
growing. The heat loss is linear in the temperature
difference. So at some point it will always take off.
MR. BAJOREK: At some point your oxidation
and energy release will slow down based on the growth
of that layer, so that comes in. I think it's more of
a second order effect in relation to the actual
temperature at which you're oxidizing.
MR. BANERJEE: I think you'll find this
thing will run away eventually and it's clear that the
activation energy for those two reactions is very
different. So you are right it will run away later
with Cathcart-Pawel.
MR. LAUBEN: Right.
MR. BAJOREK: The real crux of the 2200
and 17 percent however came from the belief of the
commission that the cladding following a LOCA should
still remain essentially intact. They looked at
several different ways of doing that. They considered
thermal shock test. They looked at calculations that
had been performed by the vendors. They looked at
blow-down loads and deformation of the assembly during
the accident.
They concluded none of those were
completely satisfactory and that the only really good
way was to perform experimental tests on samples of
cladding that had been put through a LOCA type of
transient environment and do mechanical tests on there
to insure that this cladding still had ductility
remaining following the preparation of the sample.
Now the difficulty that they had was that
there were two things that the strength of the
material was very much dependent on. First the extent
of the oxidation and how much oxidation you had built
up on the cladding itself. Secondly what was that
oxidizing temperature that the specimen was held at
before you did the test.
If we go back and think about the
Cathcart-Pawel versus Baker-Just, the difference that
we see in alloys is more dependent on the time it was
held at the oxidizing temperature because this gives
the opportunity for oxygen to diffuse completely
through the oxide layer and deep into the prior beta
phase of the cladding. That's what really affects the
strength, the toughness and the ductility.
What had been decided upon in the original
commission work was to perform mechanical strength
tests on pieces of clad that would be exposed to steam
at varying high temperatures. A piece of that clad
would be cut and the so-called ring tests would
determine what would be the oxidation at which that
little ring would fail.
One thing to keep in mind: the oxidation
was not something that was measured in these early
initial tests. There wasn't enough information;
rather, it was calculated using Baker-Just. The
relation between failure, ductility and oxidation
that was developed from these tests found that at 17
percent it could survive some load.
I don't know exactly how that load had been
determined. But those pieces that had survived the
shock and retained ductility would pass this test.
When there was more than 17 percent as calculated it
would shatter at that point.
MEMBER WALLIS: It says nothing about the
burn-up level or anything. It has nothing to do with
fuels at all.
MR. BAJOREK: No, this was fresh. I
believe this was all fresh.
MEMBER WALLIS: Like a very academic --
MR. BAJOREK: Yes. But it also --
MR. BANERJEE: So at 17 percent calculated
by Baker-Just at some temperature? The 17 percent you
calculated?
MR. BAJOREK: You calculate it using Baker-Just.
MR. BANERJEE: At some temperature, right?
MR. BAJOREK: It depends at what you
oxidized it at.
MR. BANERJEE: So if Baker-Just was too
high then it was less than 17 percent. Right?
MR. BAJOREK: That's correct. But the
actual oxidation of those tests was probably closer to
13 percent.
MR. BANERJEE: The brittleness occurred at
13 percent rather than 17 percent.
MR. BAJOREK: Right, if you use more
accurate information based on something like
Cathcart-Pawel, which you would believe for this type
of temperature range. Now we go back and look at the
PCT limit, which became a function of what had been the
oxidizing temperatures.
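The arithmetic behind that exchange can be sketched: under parabolic kinetics the reacted-layer thickness (and hence ECR, the equivalent cladding reacted) scales as sqrt(K(T) * t), so for the same specimen, time, and temperature, a 17 percent figure "calculated with Baker-Just" rescales to a smaller Cathcart-Pawel value by the factor sqrt(K_CP / K_BJ). The rate constants are illustrative handbook values (an assumption); the exact recomputed percentage, which the discussion puts at roughly 13 percent, depends on the constants and the actual test conditions, and these illustrative numbers give a somewhat lower figure. The direction of the correction is the point.

```python
import math

# Illustrative parabolic rate constants (an assumption -- verify against
# the original Baker-Just and Cathcart-Pawel reports before relying on
# them). Both are expressed on a Zr-reacted basis, g^2/cm^4/s.
R = 1.987  # cal/(mol*K)

def k_bj(t_k):
    """Baker-Just rate constant."""
    return 33.3 * math.exp(-45500.0 / (R * t_k))

def k_cp(t_k):
    """Cathcart-Pawel rate constant, oxygen-gain basis converted to
    Zr reacted via the (91.22/32)^2 stoichiometric factor for ZrO2."""
    return 0.1811 * (91.22 / 32.0) ** 2 * math.exp(-39940.0 / (R * t_k))

# Hypothetical hold at 2200 F: ECR grows as sqrt(K * t), so a Baker-Just
# calculated 17 percent rescales by sqrt(K_CP / K_BJ) at the same T, t.
t_k = (2200.0 - 32.0) / 1.8 + 273.15
ecr_bj = 17.0  # percent, as calculated with Baker-Just
ecr_cp = ecr_bj * math.sqrt(k_cp(t_k) / k_bj(t_k))
print(f"Recomputed ECR: about {ecr_cp:.1f} percent")
```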
MEMBER WALLIS: For how long?
MR. BAJOREK: Well, that would depend on
how much oxidation you would grow on there. So what
they were faced with was the lack of experimental
information between 2200 and 2400 degrees.
MEMBER WALLIS: It would be very helpful
if you used "C" or "F" consistently instead of jumping
between the two.
MR. BAJOREK: Okay. Which would you prefer?
MEMBER WALLIS: Whatever you like.
MR. BAJOREK: Okay. Between 2200 degrees
F and --
MEMBER WALLIS: What's this 1000 to 1200
degrees mean there?
MR. BAJOREK: That means 1,800 to 2,200 degrees F.
MEMBER WALLIS: Good enough.
MR. BAJOREK: They found that in
considering the data, they had a gap above 2200, and
there was no reliable data up until much higher
temperatures.
MEMBER WALLIS: Accept the criterion based
on the limited data you have?
MR. BAJOREK: That's correct. So even
though the available information suggested a higher
PCT based on run-away temperature whatever that may
have been defined as they could not find sufficient
experimental information to allow the ductility to be
specified for temperatures greater than 2200 degrees.
That's what became the limit.
Now, as somebody has pointed out, there
were some problems with this. Perhaps most notable
was that that's not what the clad looks like
following a LOCA.
At some places on the cladding, it will swell, balloon
and rupture. Later about 1980 it was realized that
the hydrogen content inside the swollen and ballooned
region was significantly higher than it would be on
the outside of the clad where it would be swept away.
This enhanced the embrittlement in this region near
the balloon.
In order to understand that better and to
resolve the embrittlement question, additional tests
were run. I believe it was both Oak Ridge and Argonne
that conducted similar tests at the time. But
when they examined the clad, they took pieces of clad
from the balloon region, did tests and found that sure
enough when you did the calculation with Baker-Just
and you reached 17 percent you were starting to fail
in this region.
These tests were then looked at in some
additional detail. They realized that up in this
region it was very difficult to perform this type of
test based on where they were cutting the ring. To
meet the intent of the '73 rule, they went to an
impact test where you would balloon the clad, take a
calibrated weight with a tip, swing it down, and in
the manner of a Charpy toughness test determine at
what point the clad, which had been embrittled now due
to the swelling, rupture and the extra hydrogen, would
start to fragment.
They determined through a toughness test
and by measuring the clad oxidation that the clad
would not embrittle at 17 percent, based on the
results of these toughness tests. That, after 1980,
really formed the basis of why 17 percent was an
adequate and conservative number to cover the tests
that had been done earlier.
MEMBER WALLIS: It seems to me that both
these crushing tests and hitting tests -- impact tests
and squeezing tests -- are not really typical of the
loads imposed on the real cladding. You don't take it
between two plates and squash it. You don't hit it
with a hammer. On top of which, it's not irradiated
material. I keep wondering what the relevance of all
these tests is to the real truth.
MR. BAJOREK: I think the relevance as we
look at it is (1) there are still questions on how you
determine the ductility and the toughness and whether
it will survive a LOCA type environment. We think
that it is appropriate and feasible to risk inform
that type of criterion, rather than saying it's 17
percent and 2200, which is based on the embrittlement
type of studies, and instead devise a test to justify
that the clad will not shatter in the LOCA type
environment.
MEMBER WALLIS: That would be performance
based. If you are asking this to be performance based
knowing this background they would presume that you
would have to do much better tests than any of these
to really convince them.
MR. BAJOREK: That may well be the case.
Now we have tests getting ready to go on at Argonne.
MEMBER ROSENTHAL: Steve, can I talk?
MEMBER ROSENTHAL: Jack Rosenthal. I know
less than Steve about specific areas but maybe a
broader view because Ralph sits next to me if nothing
else. At Argonne National Laboratory right now, we're
about to perform rod testing. In the course of
calibrating the equipment for the tester, we ended up
reconfirming Cathcart-Pawel. So that gives us a very
nice contemporaneous factual basis for our work.
We have clad from H.B. Robinson and we
have clad from Limerick that's high burn-up cladding
which we will test. So we will have a factual basis
for both fresh clad and high burn-up clad. Let me
remind you that the fuel that incurs the peak PCT in
a core is likely somewhat burnt clad and not thrice-
burnt clad but we will account for that. So in any
case we will be able to put this on a factual basis.
Steve started his presentation by saying
what we want is the idea to have something that looks
like a core standing there when you reflood it. So
then you went to the concept of -- Am I taking your --
MR. BAJOREK: No, that's fine. This is
basically a feasibility study to allow us to go ahead
with risk informing these criteria. I think some of
the questions you've asked -- should it be a ring
test, should it be an impact test, or a four-point
bending test, which is also included -- in fact,
that's still yet to be determined.
MEMBER WALLIS: Well, are any of those
tests appropriate?
MEMBER ROSENTHAL: Right. The performance
criterion is likely to be something like: you should
either retain some degree of ductility or some
specified degree of toughness post quench. That could
form a very nice performance based criterion. You
would eliminate the 2200, the 17 percent, and the
Zirc-2 and Zirc-4 from the rule itself.
The downside of that, and you mentioned it,
is that if you go to this performance criterion and a
future vendor would come up with clad-X, then we would
expect them, in some as yet unspecified, unthought-out
way, to demonstrate that they meet that performance
criterion, and they might likely, but not necessarily,
have to do --
MEMBER WALLIS: -- criterion is that the
clad maintains its integrity. In other words it's a
barrier that's not breached. Is that it?
MEMBER ROSENTHAL: No, about 1600 degrees
F it will burst.
MEMBER ROSENTHAL: The core should look
something like a core post quench. You've reflooded
it. It shouldn't shatter.
MEMBER WALLIS: Something like that's very
vague. Isn't it?
MR. BAJOREK: It's made of metal. There's
some oxide laying in there.
MR. LAUBEN: It's not laying on the floor.
CHAIRMAN SHACK: I mean you don't want it
to shatter and come apart, so you basically require
it to have some ductility and you --
MEMBER WALLIS: I can't see how squeezing
a ring is going to tell me anything.
CHAIRMAN SHACK: It tells you how much
ductility, though. How much strain the material can take.
MEMBER BONACA: You mean that as it swells
and balloons still it would be together rather than
fracturing and falling down.
MR. LAUBEN: Excuse me. Graham, if you
look at the whole of these tests that are done, you
find out that a lot of them fail just on handling. So
it isn't all of the cladding samples that get tested,
because a lot of them have failed on handling. If it
fails on handling, it's failed. Only on some fraction
of the samples do you even get a chance to do tests,
so it isn't as though every piece of cladding that you
oxidize survives to be tested, and I think you can --
MR. BAJOREK: And it may be a better
criterion than what we have now, because I think if
Ralph were here he'd point out that they've seen tests
where the oxidation is only at 6 percent and it
wouldn't pass the ring compression or the toughness
test now. But under today's regulations it's less
than 17 percent.
MEMBER WALLIS: It's reassuring.
MR. BAJOREK: That's why going to a risk-
informed type of regulation based on some material
type of test is, we think, a prudent thing to do,
rather than just relying on some number.
CHAIRMAN SHACK: Well, I'm not sure that
it's risk-informed. You're maintaining the same basis
for the thing, but you're just picking a different
criterion to demonstrate that you've maintained it.
It's a criterion that probably makes more sense.
MEMBER WALLIS: I think you have to show
me that given the results of the test you're now able
to predict what will happen to a core. I don't see
the connection.
CHAIRMAN SHACK: You can demonstrate if it
has that much ductility that it will withstand thermal
shock for example when it's reflooded which is a good
thing to do.
MEMBER WALLIS: Okay. Then that's what
you have to do. You have to do that.
CHAIRMAN SHACK: Well, you can also
demonstrate that it will maintain that integrity.
Steve had a note somewhere that thermal shock wasn't
good enough, because one of the things you find is
that it's a trickier test to do than you think it is.
But again, if you have enough ductility, you can show
for a wide range of thermal shock conditions, even
though you don't know exactly what it will be, that it
will hang --
MEMBER WALLIS: So if the squeezing or ring
test gives you a property which you can put into your
calculations of what happens with thermal shock, then
you can predict what will happen.
MR. FORD: But there's still a big leap of
faith. You surely must have a correlation between
damage by thermal shock and some other surrogate, such
as strain to fracture coming out of a bend test or a
ductility test. You think of this as -- Ductility in
and of itself is not the sole criterion for whether
you are going to survive the thermal shock.
It's not the only one. You're going to
have to do quite a few tests on irradiated material at
different fluence levels, etc., surely, to have a good
feeling that, using one of these criteria, using
different loading rates, etc., you have the correct
specification that you are going to meet.
CHAIRMAN SHACK: You mean how much
ductility do I need?
MR. FORD: Yes.
CHAIRMAN SHACK: 1.5 percent?
MR. FORD: Right.
CHAIRMAN SHACK: That's a little trickier
number to come up with.
MR. FORD: But isn't that a vital number
to come up with if you're going to come up with --
CHAIRMAN SHACK: You can calculate many of
those numbers.
MR. FORD: So it's how ductile it is. It's
not whether the material is ductile but how ductile.
CHAIRMAN SHACK: Yes, whether it's 0.1
percent or one percent makes a big difference but they
will have to come up with that limit.
MR. SCHROCK: Isn't there some information
available from the PBF tests about where you start to
lose the geometry of the fuel? I don't know. Is that
looked at?
MEMBER CRONENBERG: That's burned up only
to 20,000 megawatt-days per ton or something. Right?
The PBF test.
MR. SCHROCK: It went far enough. They
failed the fuel.
MEMBER CRONENBERG: But it was not to
today's burn-ups.
MR. SCHROCK: I'm sure that's probably
true. But I mean the arguments here seem to be about
when you lose the ability to maintain a core-like --
MEMBER WALLIS: It hasn't been addressed
really at all yet.
PARTICIPANT: The PBF facility isn't a
reflood test, is it?
MR. LAUBEN: I don't remember exactly.
All I remember is they did actually fail fuel.
MR. BAJOREK: I don't think it was a
reflood facility.
MR. LAUBEN: It was designed more for a
spike, I guess, in the energy that was put into the --
PARTICIPANT: It was a broad assertion --
CHAIRMAN SHACK: There are very few full-
scale tests of reflood under LOCA loads.
MEMBER WALLIS: 2200 has a very iffy
basis. The only justification really is that it has
worked for 30 or 40 years. If you are going to
change it you're going to have to have some really
good arguments.
MR. BAJOREK: The basis for the new
criterion and regulation still has to be worked out.
Is it toughness? Is it ductility? What is a
sufficient amount? We look at that as a question that
will hopefully be answered out of the ongoing test
program at Argonne, and really by work from some of
the people that really understand materials.
CHAIRMAN SHACK: As you go up in
temperature, the oxygen, as mentioned, also goes into
the metal. You know you're brittle. There are all
sorts of reasons not to go above a certain
temperature. It's not just the oxidation. You
embrittle the hell out of the thing very quickly with
relatively low amounts of oxidation.
MR. BAJOREK: As in the case of the one
cladding material, where increasing that temperature
to 2250 may be the limit for it.
MEMBER WALLIS: Does burn-up have an
effect on this? Does all the radiation and
chemical environment somehow have an effect on it?
CHAIRMAN SHACK: I think you're mostly
looking at the oxide. Surprisingly little, because it
anneals. The bad news is it gets damaged. The good
news is that as it's going up it gets annealed. So if
it makes it up --
MEMBER WALLIS: Up on what? Temperature?
CHAIRMAN SHACK: On temperature. If it
can last the ramp-up, you are going to lose the
radiation damage. Now you are picking up oxidation
damage at a fairly furious rate. But again, it makes
the 17 percent -- There's a debate over whether the 17
percent includes the prior oxidation or just the
oxidation during the ramp-up. There are reasons for
various sorts of things, but again, as he said, the
basic notion, which I think is the correct one, is
that it's really the ductility that you want to
maintain.
MR. BANERJEE: You're not proposing to
change the limit to anything then. Right? I mean, if
you substitute Cathcart-Pawel in place of Baker-Just,
and if the intention is just to really keep the fuel
ductile enough that it doesn't break up, the tests you
have shown us suggest that it's really the temperature
that matters, not the correlation, Baker-Just or
whatever. All you've done is back-calculate the
oxidation.
MR. BAJOREK: Let me make it clear. The
early tests were done using the correlation to
estimate what the oxide thickness was.
MR. BAJOREK: In the tests now at Argonne
that would use the toughness requirement, they measure
the oxide. So we've gotten away from relying on the
correlation to determine it.
MR. BANERJEE: And at what temperature do
they become brittle?
MR. BAJOREK: It's the temperature at which
you did the oxidation that potentially has the largest
effect on whether you have any ductility once you do
one of those tests.
MR. BANERJEE: So substituting Cathcart-
Pawel or whatever doesn't really matter. It's just a
question of ductility. All that is irrelevant.
MR. BAJOREK: Not on this part. When you
start looking at run-away temperatures, there you have
to predict things.
MR. BANERJEE: It must be time and
temperature, too. Right? The amount of oxidation.
MR. BAJOREK: For this, yes. Part of the
Argonne plan is to take irradiated samples, subject
them to a time-temperature history where they would be
exposed -- and this is 1200 degrees C, which didn't
show up -- leave it at that temperature for some
period of time, and then cool it off and quench it at
a rate similar to what you might expect during a large
break.
The tests would consider temperatures, I
guess, both higher and lower than this in order to
establish a wider range. But yes, time at that
temperature also makes a big difference, because
that's what's allowing the oxygen to diffuse deep into
the metal.
MR. BANERJEE: So you aren't suggesting
any revision to the criteria right now until these
tests are done.
MR. BAJOREK: Rule making can proceed at
this point.
MR. BANERJEE: To do what?
MR. BAJOREK: To start coming up with new --
MEMBER WALLIS: You can make any rule
you'd like but you better justify it technically.
MR. BANERJEE: But are you suggesting that
you change the correlation and change the amount of
oxidation and change the peak clad temperature? What
is the suggestion on this?
MR. BAJOREK: We're not changing the
correlation with respect to 50.46. The acceptance
criteria would be based on material integrity tests.
Those would be such that you would expose the sample
to a severe environment. You would oxidize it. You
would measure the oxidation.
You would develop a criterion, probably in
a reg guide, that would specify how you would do those
tests, and under what conditions you would assume that
the cladding had passed the test, based on either a
ring compression or a toughness or a four-point bend
test. That is still to be determined. We would
hope that we would get information from the Argonne
test program to really guide that.
MR. BANERJEE: So that would be an
alternative route to satisfying this requirement. I
mean, either you could use Baker-Just and 2300 or
whatever has worked, 2200, or you could go this way.
What are you really proposing? That's what I don't
understand.
MEMBER ROSENTHAL: Why don't you put up
your six-option slide again? Remember your matrix
slide. Let me point out that Steve is talking about
the --
MR. BAJOREK: Don't confuse Baker-Just
when it comes to the acceptance criteria. It was used
in the past. It will not be used and has not been
used in the present justification for 2200 and 17
percent. Baker-Just versus Cathcart-Pawel or other
correlations are going to be recommended for revision.
This will be in a new appendix K.
In appendix K presently you are required
to use Baker-Just to calculate oxidation and the
metal-water heat release in your evaluation model.
Our recommendation for appendix K will be to calculate
the heat release using Cathcart-Pawel instead of
Baker-Just. It's better science in that temperature
range that's very important in the LOCA analysis.
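For reference, both correlations take the same parabolic form and differ only in their rate constants, with Cathcart-Pawel predicting slower oxidation (and therefore less metal-water heat release) than Baker-Just in the temperature range of interest. The sketch below shows the parabolic rate law using the commonly quoted Baker-Just constants; those constants should be verified against the original 1962 report before any real use, and the Cathcart-Pawel constants are deliberately left as parameters rather than guessed here.

```python
import math

R_CAL = 1.987  # gas constant, cal/(mol*K)


def parabolic_weight_gain(pre_exp: float, activation_cal: float,
                          temp_k: float, time_s: float) -> float:
    """Weight of metal reacted, w, from a parabolic rate law:
    w**2 = A * exp(-Q / (R*T)) * t.

    Units follow the pre-exponential factor A; for the commonly quoted
    Baker-Just constants below, w is in mg of Zr reacted per cm^2.
    """
    rate = pre_exp * math.exp(-activation_cal / (R_CAL * temp_k))
    return math.sqrt(rate * time_s)


# Commonly quoted Baker-Just constants (verify against the original report):
BJ_A = 33.3e6   # (mg/cm^2)^2 per second
BJ_Q = 45_500.0  # cal/mol

# 2200 F is roughly 1477 K; isothermal hold for 100 s as an illustration
w = parabolic_weight_gain(BJ_A, BJ_Q, 1477.0, 100.0)
print(f"Baker-Just reacted metal after 100 s near 2200 F: {w:.1f} mg/cm^2")
```

The parabolic form is why time at temperature matters as much as peak temperature, a point made elsewhere in this discussion: doubling the hold time increases the reacted metal by only the square root of two, while modest temperature increases raise it exponentially through the Arrhenius term.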
Likewise, for other correlations specified
by appendix K, and we haven't talked about those yet,
we will look at better information and we'll say, yes,
there is better science. Those don't have to be
prescriptive. So that is where the discussion of
Baker-Just versus Cathcart-Pawel should be. It's not
with the acceptance criteria.
MR. BANERJEE: So the Baker-Just was used
to determine the heat release.
MR. BAJOREK: In 1973.
MR. BANERJEE: Right. Now the idea was
that basically you didn't want this thing to run away.
MR. BAJOREK: That was one of the
commission's concerns.
MR. BANERJEE: And what was the other
concern? Was it the embrittlement?
MR. BAJOREK: Clad embrittlement. That is
what was used to determine 2200 and 17 percent.
MR. BANERJEE: Right. So with regard to
the clad embrittlement part, you are going to do some
experiments or whatever to handle that part.
MR. BAJOREK: Experiments to define --
MR. BANERJEE: So whether you use
Cathcart-Pawel or Baker-Just or whatever is irrelevant
there. It doesn't matter.
MR. BANERJEE: Okay. With regard to the
run-away reaction, Baker-Just or Cathcart-Pawel will
give you somewhat different answers.
MR. BANERJEE: I guess then one has to
find out really what the difference is going to be
between the two. It's not obvious that equating the
two heat releases gives you a higher temperature or --
MR. BAJOREK: Right. You're going to have
a different thermal time constant of the clad
depending on which energy generation term you use.
Whether it runs away or not, the temperature-time
behavior -- we haven't put a number on that. Nobody
has done that. That depends on an energy balance.
MEMBER WALLIS: Does it run away more
when it balloons and the steam can attack both sides?
MEMBER SIEBER: I wouldn't think it would
be faster.
MR. BAJOREK: If it's in a balloon?
MR. BAJOREK: If it's in a balloon you
will have a double sided reaction.
MEMBER WALLIS: But it will run away
faster, wouldn't it?
MR. BAJOREK: But it also acts as a fin.
MEMBER WALLIS: (Inaudible.)
MR. BAJOREK: Yes it is.
MEMBER WALLIS: -- 100 calculations
MR. LAUBEN: Right, but there's another
thing. This may seem hard to believe. Actually, in
the range of 2200 to 2400 and in a balloon region, you
will actually find out that decay heat is still the
major heat source. It's only when you get really high
that the metal-water reaction becomes the predominant
heat source. So you can't ignore the decay heat
anyway.
So ballooning helps you, as Steve says, in
terms of heat removal, but it also removes you, at
least for a time, from your heat source, until the
surface of the fuel can rise high enough to radiate --
Excuse me.
MEMBER WALLIS: It affects the cladding
because the fuel gets hotter.
MR. LAUBEN: The fuel gets hotter --
MR. BAJOREK: Yes. The limit is on
the clad temperature.
MR. LAUBEN: Yes, that's right.
PARTICIPANT: How do you know that it
might be above the burst node also but not necessarily
at the same elevation?
MR. BAJOREK: Which should basically
serve -- We have to be very careful about using codes.
MEMBER WALLIS: I'm going to retire before
we get to the end of this --
MR. BAJOREK: They're never going to allow
you to retire.
MEMBER SIEBER: What about the blockage of
the cooling channels when the ballooning occurs?
MR. BAJOREK: Excuse me.
MEMBER SIEBER: That would cause the
temperature to go up too. Blockage in the cooling --
MR. BAJOREK: That would still be in this
new appendix K approach. You would still be required
to look at blockage and swelling and their effects on
the flow distribution in the hot channel.
MEMBER SIEBER: But it will occur at
temperatures below 2200.
MR. BAJOREK: Yes. The swelling and
blockage occurs at 1500 degrees F.
MEMBER SIEBER: Right. So it all balloons
out and goes up there faster.
MR. BAJOREK: Yes. All right, to try to
move on. The next thing that we would like to go over
is the decay heat model and start moving into appendix
K. This is where we will start looking at models that
can be replaced by better science, but at the expense
of looking at some non-conservative issues in appendix
K.
CHAIRMAN SHACK: Before we start a new
topic let's take a break for 15 minutes so we
reconvene at 3:10 p.m. Off the record.
(Whereupon, the foregoing matter went off
the record at 2:55 p.m. and went back on
the record at 3:10 p.m.)
CHAIRMAN SHACK: On the record. Let's get --
(Discussion off record.)
MR. LAUBEN: I do have a statement of
religious belief that I wanted to start with.
PARTICIPANT: Was Milton connected in some way --
MR. LAUBEN: No, it was just that during
all this process, somebody in research actually had
this quote pasted on their door. I thought, this is
true. He was working on something entirely different.
I said, this is true no matter what. If we don't know
where the baseline is, we don't really know much of
anything. That's true of your best estimate analysis.
It was certainly true of the decay heat. If you don't
know what reality is, how do you know whether
something is conservative or not? So I thought this
is such a good statement I thought I would put it up
here.
MEMBER WALLIS: It wasn't discovered until
August 9, 2001.
MR. LAUBEN: Well, it probably was
discovered a lot longer ago than that. It's just that
I found it in writing.
MR. SCHROCK: I guess what you've told us
is that you have a best estimate. Is that right?
MR. LAUBEN: No. You know it's probably
true that we never have a best estimate. We have
things that are maybe more realistic than they were
before and in this case I should say that part of the
reason I put this up here was that since the last time
we've talked, one of the people in our branch, Tony
Ulses, was able to do some Origen calculations.
Until we had the Origen calculations, I
really had no good idea of whether the standard was
telling us anything that was close to reality or not.
The ANS standard. This was some way to check the ANS
standard as well as to check any calculations we may
have done. Now this doesn't --
MR. SCHROCK: When you read the standard
you find that it tells you that Origen calculations
are one of the sources and in fact a major source of
data upon which the standard was based.
MR. LAUBEN: Okay, but we developed a
spreadsheet to look at the `79, `71, and `94 standards,
and we had some numbers that came out of it. We
compared those to the numbers in the table in the
standard. But that's just repeating what's already in
the standard. This is just that I put the tables in --
Also, if we compared them to some of the
examples that they gave in the standard, that's also
a help too. But then the question was how close is
this to something else. The something else was
finally our ability to compare the spreadsheet, which
supposedly implemented the standard, with the
Origen calculations. We'll show you those in a few
minutes. It's nice to have something else to compare
with what you've done. Partly, I guess, that's why I'm
saying this.
It is proposed and this is what we are
proposing to do. I'll start with this right away. It
is proposed that the decay heat requirements in
Appendix K and the best estimate guidance in
Regulatory Guide 1.157 be replaced with requirements
and guidance based on the 1994 ANS decay heat
standard. I think that's no surprise. Steve has told
you that's one of the things that we are doing.
In other words, as everybody knows, 50.46
has two options: the best estimate option, for which
guidance is in reg guide 1.157, and the conservative
option, Appendix K, in which virtually everything is
specified. To date there are no regulatory guides
that describe anything that's in Appendix K.
Appendix K is self-contained so far.
The Appendix K option in 50.46 currently
requires that fission product decay heat be modeled
using the draft 1971 ANS standard with a multiplier of
1.2 and the assumption of infinite irradiation. A
separate paragraph of Appendix K requires
consideration of Actinide decay heat, but it doesn't
say that you have to use the Actinide equations for
neptunium-239 and uranium-239 which are in the `71
standard. They're also in the `79 standard. They're
also in the `94 standard. It's almost identical there,
which is not surprising since you're talking about the
same two isotopes.
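As an editorial illustration of what the Appendix K prescription implies, a common textbook stand-in for fission product decay power (not the ANS standards' tabulated values) is the Way-Wigner-type correlation P/P0 = 0.066 [t^(-0.2) - (t + T)^(-0.2)], where t is time after shutdown and T is operating time; with infinite operating time the second term vanishes, and Appendix K layers its factor of 1.2 on top. The sketch below uses that approximation purely for illustration.

```python
def decay_heat_fraction(t_s: float,
                        operating_time_s: float = float("inf"),
                        multiplier: float = 1.0) -> float:
    """Fission product decay power as a fraction of operating power.

    Uses the classic Way-Wigner-type approximation
        P/P0 = 0.066 * (t**-0.2 - (t + T)**-0.2),
    a textbook stand-in, NOT the tabulated 1971/1979/1994 ANS standards
    discussed in the meeting.  For infinite operating time T the second
    term vanishes.  Pass multiplier=1.2 to mimic the Appendix K adder
    applied to the 1971 standard.
    """
    if operating_time_s == float("inf"):
        tail = 0.0
    else:
        tail = (t_s + operating_time_s) ** -0.2
    return multiplier * 0.066 * (t_s ** -0.2 - tail)


# 100 s after shutdown, infinite irradiation, with and without the 1.2:
print(decay_heat_fraction(100.0))                   # best-estimate style
print(decay_heat_fraction(100.0, multiplier=1.2))   # Appendix K style
# A finite operating history (roughly 4.5 years ~ 1.4e8 s) gives less:
print(decay_heat_fraction(100.0, operating_time_s=1.4e8))
```

The comparison of the infinite and finite cases mirrors the point made later in the discussion that the infinite-operating-time assumption, by itself, turns out not to add very much conservatism for LOCA timescales, because (t + T)^(-0.2) is already small for realistic operating histories.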
MR. SCHROCK: Did you just say that it
does not say that you must use those?
MR. LAUBEN: Appendix K does not say you
must use what in the standard.
MR. SCHROCK: Oh, Appendix K doesn't.
MR. LAUBEN: Appendix K does not.
MR. SCHROCK: I thought you were referring
to what the standard says.
MR. SCHROCK: Okay. It's Appendix K.
MR. LAUBEN: Appendix K does not prescribe
the standard when it comes to Actinide decay. It just
says account for Actinides. That's what Appendix K
says.
MR. SCHROCK: I think it says use the
standard plus 20 percent.
MR. LAUBEN: It says use the standard plus
20 percent for fission product decay.
MR. SCHROCK: It doesn't say for fission --
MR. LAUBEN: Yes, it does. If you look at
the standard there are two separate subparagraphs.
I'm going to get the standard. Okay. The scribe will
read the bible now. But it says for fission product
decay, use the `71 standard. Then for Actinides, the
next paragraph says consider Actinides. It doesn't say
how. It doesn't say use the standard. Are we going to
get --
MR. SCHROCK: I agree.
MR. LAUBEN: Okay. An alternative would
permit the use of the 1994 ANS decay heat standard,
which involves more sophisticated uncertainty methods
and a greater number of options left to the user. The
`71 standard is very simple. It is so simple that
Appendix K is able to describe everything that you
need to know in a few sentences, which is not so with
the `79 standard and the `94 standard. They also use
more recent available data and methods.
MR. SCHROCK: But it's incorrect, Norm, to
describe these as options. They are not options.
They're statements of the reality of the physics and
calling to attention things that the user must do and
justify in order to apply the standard.
MR. SCHROCK: These are requirements.
These are not options.
MR. LAUBEN: Some of the things in the
standard are left to the user. In fact we can look at
one through five --
MR. SCHROCK: But there are things which
the standard does not specify which must be included
in the bottom line.
MR. LAUBEN: They must be considered --
MR. SCHROCK: That's the language of the
standard. It says these things must be provided and
justified by the user.
MR. LAUBEN: Let me see, if you will bear
with me. We've discussed this several times, and I
thought today I would once more go through the
standards, and the `94 standard in particular. I wanted
to make sure that I had correctly remembered those
things which the standard says are left to the user or
whatever. I could put that up there, but they are the
same things as --
For instance, number two is something
that's left to the user, but you obviously have to
include it. There are no values specified for
recoverable energy. For G(t), the standard gives a way
of doing it, but it also leaves it to the user to
decide if they want to do it that way or some other
way.
There is the Actinide contribution, for
which the standard gives the Actinide equations for
neptunium-239 and uranium-239. It also says that there
are other Actinides, and that an improvement to the
standard would include those additional Actinides.
MR. SCHROCK: I don't think that's the
language used at all.
MR. LAUBEN: Okay. I got the language --
MR. SCHROCK: What it says is that --
MR. LAUBEN: If you want to talk about the
language, excuse me. Get me my book. Okay. I mean
the language says "further revisions to the standard
are planned to include contributions from Actinides
not already included." That's what the language in
the standard says.
Otherwise if you look in the standard
about Actinides it just talks about the two that we've
already mentioned. That's what the standard says. Do
you think I've missed something?
MEMBER WALLIS: I don't understand what
this bullet says.
MR. LAUBEN: All this bullet really is
trying to identify is the things that the standard --
MEMBER WALLIS: Does it come up with an
agreed procedure for calculating or a recommended
procedure for calculation or does it just say that
these are the things you should consider?
MR. LAUBEN: No. In some cases it's
pretty explicit as to how you calculate it. In other
cases it says it's up to the user to do something. If
you will, that's what I was trying to go through just --
The standard method of the standard, which
describes fission product decay from four isotopes,
provides tables for that, and it provides equations
and methods for calculating fission product decay from
those four isotopes without neutron absorption, that
sort of thing. But it also says you have to address
neutron absorption. Here are some equations that might
help. But it's up to the user to come up with anything
else that might be used, and to justify it. If you
want the exact wording I can tell you what that is.
MEMBER WALLIS: You said the four
isotopes. That's 235U, 239Pu, 238U --
MEMBER WALLIS: Are the plutonium isotopes
significantly different from the uranium?
MR. LAUBEN: If you look at the tables
they're significantly enough different that it makes
a difference.
MEMBER WALLIS: So burn-up makes quite a
bit of difference to the decay heat.
MR. LAUBEN: Burn-up makes quite a bit of
difference. That's right. It's usually conservative
to assume 235U only. But that's right. There is a
significant enough difference in them. That's why it
was included and that's why a whole new standard was
put together. Right, Virgil?
MEMBER WALLIS: Also, the recoverable
energy per fission varies depending upon the isotope
mix.
MR. SCHROCK: (Inaudible.)
MR. LAUBEN: What's the matter?
MR. SCHROCK: Go ahead and make your case.
Then I'll comment when you are finished.
MR. LAUBEN: Okay. Anyway, these are the
five things that the standard in some cases very
explicitly tells you how to consider, and in some
cases it is not as explicit. Anyway, these are the --
MR. SCHROCK: You can look upon these
things as being physical realities that are
dependencies that the decay power has.
MR. SCHROCK: And a best estimate
evaluation will require that these things be taken
into account. The standard was devised to provide a
best estimate methodology, not an Appendix K --
MR. LAUBEN: You're saying that I should
-- No, let me just continue. The performance based
realistic option in 50.46 would allow use of the 1994
standard today. What I'm saying is that the best
estimate in 50.46 is performance based.
I'm saying then that the specification of
the 1994 standard as an acceptable method in Reg. Guide
1.157 would facilitate its use. In other words, right
now Reg. Guide 1.157, which describes acceptable ways
to do the best estimate calculations, does not specify
the 1994 standard. It specifies that the 1979 standard
is an acceptable one.
So it would make sense to update Reg.
Guide 1.157 to describe a more modern standard that
would be acceptable for use. In addition, it makes
sense for that Regulatory Guide to specify some of the
things that are in it a little bit more clearly than
it does now.
MEMBER SIEBER: Is the 20 percent margin
adder still going to be there?
MR. LAUBEN: That has nothing to do with
the best estimate option. The twenty percent adder is
not there. The uncertainties that are provided for
the four isotopes in the `94 standard are one sigma
uncertainties. Those uncertainties plus any other
uncertainties would be assessed in terms of the entire
uncertainty of the analysis when you are doing it.
That's similar to what holders of the best estimate
evaluation models do today. The 20 percent is an
Appendix K requirement only, not the best estimate --
MEMBER SIEBER: For the purposes of
providing margin, right?
MR. LAUBEN: No, originally the 20 percent
was something that was to cover the uncertainty in
the decay heat only and, Virgil, you can correct me if
I'm wrong --
MR. LAUBEN: Then as time went on it was
discovered that this was much more than what was
needed. People realized that it could be thought of as
covering other uncertainties as well.
MR. LAUBEN: But originally and in fact I
think there is a curve in the `79 standard that shows
the uncertainty in the `79 standard. It compares it
to the 1.2 --
MEMBER SIEBER: Yes, it does.
MR. LAUBEN: -- and the `71 standard as
MEMBER SIEBER: And the 20 percent is
outside of the uncertainty base.
MR. LAUBEN: Certainly right. That's --
MR. SCHROCK: But it's conservative.
MR. SCHROCK: The `71, `73 standard had
error bars that were placed there simply by eyeballing
all available data against the selected mean curve,
and they were also time dependent. At some time
intervals the negative uncertainty was much larger
than the positive uncertainty, and so forth.
At the earliest times, the uncertainty was
20 percent for the first thousand seconds. After that
it changed to a smaller number. Then it changed again
at longer times. In writing Appendix K, the early-time
value was selected, I think because of the loss-of-
coolant accident application.
So the 20 percent exists in the rule
without any reflection of the larger detail which was
in that standard. That standard also provided a means
of assessing the role of finite operation as opposed
to infinitely long operation. Again, that provision of
the standard was not incorporated into Appendix K. It
was implicitly or essentially ignored. So there are
differences between Appendix K and that standard.
It's not a direct comparison.
But the standard itself had an uncertainty
which had no statistical meaning whatsoever. It was
based on a very minimal amount of information. As
I've said, it was simply a matter of looking at a
graph that had all available data shown, and those
uncertainty bands included everything which was then
available, which was quite inadequate.
MEMBER SIEBER: Right. Thank you.
MR. LAUBEN: Okay. What I'm showing here
is a table of information about nine different
calculations that are going to be shown in the next
several graphs, having to do with the different ways
of calculating decay heat. I grouped them into three
groups.
The first case has only one member in its
group, and that's the current Appendix K. It tells you
what model it is: ANS73, 1.2 multiplier, infinite
operating time, 100 percent U235, which is the
assumption in the `71, `73 standard. So these other
things, capture time and fission energy, are not
applicable because they are not written in those
terms. Actinide yield, the 0.7, and the isotope
tables, etc., are not applicable because of the way
that the standard is written.
The next four cases are Appendix K
proposals that would look at the `94 standard. The
first one, which is case number two, looks at 2σ for
the individual isotope uncertainties. It uses the
additive technique for uncertainty that is provided in
the standard.
Number three is the same thing, but
instead of using the additive technique for
uncertainties, it uses the root mean square technique
that is in the standard. Why did I use 3a instead of
another number? Because 3a was sort of the last thing
we did, but I wanted to show it was closest to case
three. That just has 2σ. I didn't say what kind,
because in all the instances we're assuming 100
percent U235, similar to what's done in the `71 --
MR. SCHROCK: That is done because
plutonium produces less decay heat as a fraction of
its fission heat.
MR. SCHROCK: So you are being
conservative. But if you actually looked at the
average over the life of the plant, taking real
burn-up into account, you'd actually have less.
MR. LAUBEN: You will see in the next set
of curves this stuff doesn't make a lot of difference.
MR. SCHROCK: But there is less decay heat
from a plutonium fuel.
MR. SCHROCK: So you still have the
conservatism in here by assuming this 100 percent --
MR. LAUBEN: Right. That is correct.
When you look at the next figure, you will see. Case
four does not add any uncertainty, but it maintains
the choices that are shown in the rest of the table
there. That takes care of that group.
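The only difference between cases two and three above is how the per-isotope 2σ uncertainties are combined: summed absolutely (additive) or root-sum-squared. The sketch below uses hypothetical per-isotope numbers, made up purely for illustration and not taken from the `94 standard.

```python
import math


def combine_additive(uncertainties):
    """Additive combination: a simple sum of per-isotope uncertainties,
    the more conservative of the two options."""
    return sum(uncertainties)


def combine_rss(uncertainties):
    """Root-sum-square combination, the usual choice when the terms are
    treated as independent."""
    return math.sqrt(sum(u * u for u in uncertainties))


# Hypothetical 2-sigma decay-power uncertainties for four fissioning
# isotopes, in percent of total decay power (illustrative values only):
two_sigma = [1.2, 0.8, 0.5, 0.3]

print("additive:", round(combine_additive(two_sigma), 3))
print("RSS:     ", round(combine_rss(two_sigma), 3))
```

Because the additive sum always equals or exceeds the root-sum-square, case two should always bound case three, which is consistent with the two curves lying close together in the plot discussed later.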
The next group is what I would call best
estimate calculations. Case five is an Origen
calculation for a 17 x 17 PWR assembly. Case eight is
an Origen calculation for a BWR 10 x 10 assembly. I
think that's quite a span of different things. I mean,
one has boron in it and one has burnable poisons of
different sorts. They are really very different fuel
assemblies, and yet you will see that the decay heats
that are calculated with each one are pretty close.
Also then, using the Origen -- Now, in the
third column, operating time, cases five and six use
the same operating time as what was done in the
Origen calculation, and the cycle average values for
the fission fractions. In other words, Origen will
calculate a continuous fission fraction change of the
various isotopes as a function of time. If you break
it up into cycles and take the average value for each
cycle, that's what I did for the ANS94 calculations.
MR. SCHROCK: For the Origen calculations,
Norm, did you just input a constant power?
MR. LAUBEN: No. The Origen calculations
are for three cycles. Cycle shut-down, back up to
full power --
MR. SCHROCK: That's what you input for
the power.
MR. LAUBEN: Yes. You input a power
history. That's correct. But between shut-downs it's
a constant power. It doesn't use --
MR. SCHROCK: But you allow some shut-down
time and let things decay away and change.
MR. LAUBEN: I don't have the charts with
me but I think it was like a 30 day shut-down or
something like that between cycles. It tries to be
not untypical of a real reactor.
MEMBER SIEBER: And the power is put in as
watts per gram.
MR. LAUBEN: The power is put in as -- I
don't think it was an average assembly. Unfortunately
Tony is not here but there is a burn-up and there is
a power density.
MEMBER SIEBER: Power density. Right.
MR. LAUBEN: In the decay heat calculation
in ANS94, you don't have to put that in.
CHAIRMAN SHACK: Norm, can you try to
finish up by 3:45 p.m.?
MR. LAUBEN: I can try. If you don't mind
let's just go to the next slide then which is a plot
of nine calculations. I think the point is that they
group together very closely. ANS plus 20 and `71
standard is way up there by itself.
Cases 2, 3, 3a and 4 are the various
proposals, with conservative choices that the user
might make to bound his operating conditions before he
knows what the reactor operation is going to look
like, or if he doesn't want to argue with the NRC
about what things are. He could use those choices and
the other choices that are shown in that --
MEMBER WALLIS: I don't understand why 4
is so high. Four is without uncertainty and it's not
MR. LAUBEN: No, four is below.
MEMBER WALLIS: But it's up with the two 3's
and the 2. The two 3's are with the 2-sigma, so why is
4 up with them and not down with the 5, 6, 7, 8?
MR. LAUBEN: No, 4 is lower.
MEMBER WALLIS: Why is it not down with 5,
6, 7, 8?
MR. LAUBEN: Because the uncertainty
doesn't make that much difference. If you make those
choices the uncertainty in the values in standards
doesn't make that much difference.
MEMBER WALLIS: So it's the Origen that
makes the difference.
PARTICIPANT: It must be the infinite
operating time before.
MEMBER SIEBER: There's something to that.
MR. LAUBEN: I'll tell you what. I have
another set of slides but I'll never finish by 3:45
p.m. if I show you those. On the other set of slides,
I looked at the individual bases for these things and
I could put up those slides or I could just provide
them to you.
CHAIRMAN SHACK: Provide them.
MR. LAUBEN: Provide them and you will
see. But infinite operating time doesn't make that
much difference. I'll provide you the slides.
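The point about operating time can be illustrated with the general ANS-5.1 functional form, a sum of exponential groups; the group constants below are invented for illustration, not the standard's actual fit:

```python
import math

# Sketch of the ANS-5.1 functional form: decay power after operating
# time T and cooling time t is a sum of exponential groups,
#   F(t, T) = sum_i (alpha_i / lambda_i) * exp(-lambda_i * t) * (1 - exp(-lambda_i * T))
# The group constants below are illustrative only, NOT the standard's
# actual multi-group fit.

GROUPS = [(0.06, 1.0e-1),    # (alpha_i, lambda_i [1/s]) short-lived
          (1.0e-3, 1.0e-3),  # intermediate
          (1.0e-9, 1.0e-8)]  # long-lived

def decay_power(t, T=float("inf")):
    total = 0.0
    for a, lam in GROUPS:
        buildup = 1.0 if math.isinf(T) else 1.0 - math.exp(-lam * T)
        total += (a / lam) * math.exp(-lam * t) * buildup
    return total

# At short cooling times the short- and intermediate-lived groups dominate
# and have already saturated, so a finite operating history (here ~4 years)
# and infinite operating time differ little:
t = 100.0                          # seconds after shutdown
finite = decay_power(t, T=1.2e8)   # ~4 years of operation
infinite = decay_power(t)
print(round(finite / infinite, 2))  # → 0.97
```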
MEMBER WALLIS: What is the difference
between Origen and the others? Origen is down below
all the others. What is the reason for that? That's
all I'm trying to get at.
MR. LAUBEN: Origen, or ANS plus 20 with
Origen input. These are all ANS plus 20 with Origen
input too. This is saying that the standard and Origen,
not surprisingly, right, Virgil, are going to look the
same if they have similar inputs. It's just the input
values. The operating time was one of them, but if you
want maybe I can dig out that slide.
CHAIRMAN SHACK: Not in six minutes.
MR. LAUBEN: Not in six minutes. Okay.
I'll provide it.
MEMBER BONACA: Looking at this it seems
that ANS71 without multiplier is very close to number
4, to the ANS94 model without uncertainty. You don't
have the ANS71 by itself.
PARTICIPANT: He wants to take the 1.2 off.
MEMBER BONACA: If I take the 1.2 off.
MEMBER BONACA: I can draw on the top of
the --
MR. LAUBEN: I think if I show you the
next slide. We'll forget the next slide in fact and
go to the next one after that. Here is where you will
see I hope, Mario, what you were asking about.
What we are doing here is dividing the
different values on a curve by ANS71. So ANS71 with
a 1.0 is this straight line here. ANS with 1.2 like
we showed before is way up there by itself. Here are
the various Case 2, Case 3, Case 3a, Case 4 divided by
ANS71. They group in there together. Here are all
the best estimate ones here with Origen or ANS94 with Origen input.
The thing that was somewhat troubling
about this curve by the way, and this is the segue of
the next set of curves, is the fact that here is ANS
94 with Origen input and it's lower. Eight and five
are the Origen divided by ANS71. So for what reason
is the Origen higher than the ANS94. Over here they
are pretty close together. Now they seem to diverge
MEMBER LEITCH: Why are there so many
wiggles? Is it because of all the isotopes behaving
differently? I would have expected a smoother curve.
MR. SCHROCK: Are you using in your Origen
calculation the same ENDF data that were used in
generating the values in the '94 standard?
MR. LAUBEN: I think so.
MR. SCHROCK: You could check that. That
could be what --
MR. LAUBEN: But I'll tell you, as you can
see from the report, attachment one in the report
that I gave you, when we asked Oak Ridge about this,
they said the reason is that ANS94 doesn't account for
all the Actinides as you go out in time.
MR. SCHROCK: This is where you have
difficulty in understanding the language of the
standard. If the standard calculation was done
correctly, you would be including the other Actinides
and justifying how you got them.
MR. LAUBEN: That's right. I think it's
a great idea by the way. In fact, I would do that
except I don't think I need to do that for this
Appendix K stuff that I'm talking about now. The
reason is because with all these choices that I have
here whichever one of these terms I want to choose I'm
still well above the Origen calculations. So why add
something else on to it for now?
The best way like you say is to really
account for all the Actinides in the best way that you
can. That is true. That's really what my next set of
slides is all about. I probably don't want to do them
MR. LAUBEN: -- because I understand in
one minute I can't do that. Let me just say that this
curve was provided to us by Oak Ridge. There's a
bounding line that I put in which shows the contribution
from the others according to what Oak Ridge says. This is
the contribution of the other Actinides, other than U-239
and Np-239, which are already accounted for in the standard.
This is the contribution percentage of
Actinide components that are not already taken account
of. As you can see it grows as a function of time
until out here at 10^9 seconds, which we certainly don't
care about. These other Actinides are 80 plus percent
of the entire total. For our purposes we're really
down here at somewhere between 50 and 10^4 seconds for
-- analysis.
MEMBER ROSEN: Let me ask a question and
make sure I get the message.
MEMBER ROSEN: Go back one slide to this
one. (Indicating.)
MEMBER ROSEN: Right. Slide 27. The
message you started with was use the ANS94.
MR. LAUBEN: Right.
MEMBER ROSEN: And the reason you did that
is because it's less than 1.2 X ANS71 but it's not all
the way down at the Origen.
MEMBER ROSEN: And the four cases you
showed are not very much different.
MR. LAUBEN: Right.
MEMBER ROSEN: So it's part way to the
right answer let's say. A step in the right direction
kind of thing. It makes sense and it's still
MR. LAUBEN: It's still conservative and
it's --
MEMBER KRESS: But it's conservative only
because you used a 2-Sigma there.
MR. LAUBEN: No. Because four has no uncertainty.
MEMBER KRESS: Oh, that was the mean.
MR. LAUBEN: Right.
MEMBER ROSENTHAL: Get rid of the 1.2. Go
to some reasonable compromise. Specify some of the
parameters to keep life simple. Reduce the
unnecessary conservatism. You know that you still
have left yourself some margin. It's a reasonable
MR. LAUBEN: Can you see this? At one
time we were looking at what I'll call the simplified
technique in the '79 standard, which was like the 1979
standard times a factor of 1.1, or very close to that.
That's going to be about half way between here and
there. (Indicating.) So the '94 standard, whether you
have 2-sigma or not, is still getting you more
advantage than you had with the '79 standard, which is
not unexpected. You have better data, better methods.
MEMBER ROSEN: Well, what you're proposing
here is instead of using the standard that is 31 years
old, use a standard that is only eight years old.
MR. LAUBEN: Right. If you make some
choices -- I don't even care if you account for all the
Actinides -- if you make certain conservative choices
you're between here and here. (Indicating.)
CHAIRMAN SHACK: Okay. I think we're
going to have to stop here so we give Steve a shot at
it for the rest of the presentation.
MR. LAUBEN: Okay. I guess this one is
enough. You probably don't need that other chart
unless somebody wants it.
MEMBER ROSEN: That's as far as I'm going
to need.
MEMBER CRONENBERG: Just let me point out.
They only received this package from Farouk this
morning. I only got it Tuesday.
MR. LAUBEN: That's right.
MEMBER CRONENBERG: All your stuff is in
a package that I gave them this morning.
MR. LAUBEN: All the stuff is in a
package. I don't have to give you that because they
are all in that package.
MEMBER CRONENBERG: Dated the 23rd which
was given to me the 25th. So it wasn't mailed out.
MR. LAUBEN: But you got this.
MEMBER CRONENBERG: It's all in here.
MEMBER WALLIS: This looks like a case
where you are ready to make a recommendation based on
some good information. The question is just where you
should draw the line for regulatory purposes. It's
very straightforward.
MEMBER ROSEN: 3A is the one I like.
CHAIRMAN SHACK: There's no question in
their mind where to draw the line. They've told us.
94 standard.
MEMBER ROSEN: Well, is it 94 or is it the
2 Sigma?
CHAIRMAN SHACK: We can read the
MR. BAJOREK: Okay. What I'd like to do
then is pick it up with Appendix K and the other model
revisions, decay heat being the most important one.
We've done that one. Three others have been suggested
for revision as part of the SECY 1133. These were
replacing Baker-Just with another correlation,
Cathcart-Pawel; eliminating the requirement for steam
cooling only below one inch per second; and perhaps
deleting the requirement not to allow the return to
nucleate boiling during blow-down.
Looking at the acceptance criteria, we've
already gone through and looked at the correlations.
Our conclusion in taking a look at alloys and newer
experimental information that is coming out of Argonne
is that Cathcart-Pawel does a much better job than
Baker-Just, especially in this range near 2200 degrees
Fahrenheit. So our recommendation is going to be for
a revised optional Appendix K that Baker-Just be
replaced with Cathcart-Pawel and it not be restricted
to any particular alloy.
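The correlations under discussion are parabolic rate laws for the metal-water reaction. A minimal sketch, using the commonly cited Baker-Just constants and a deliberately hypothetical lower-rate stand-in (not the actual Cathcart-Pawel constants, which should be taken from the published reports):

```python
import math

# Parabolic rate law for the zirconium-water reaction: mass of Zr reacted
# per unit area grows as w^2 = A * t * exp(-Q / (R * T)). The Baker-Just
# constants below are the commonly cited ones (w in mg Zr/cm^2, t in s,
# T in K); the second set is a PLACEHOLDER standing in for a lower-rate
# correlation such as Cathcart-Pawel -- not its real constants.

R = 1.987  # cal/(mol*K)

def zr_reacted(t, T, A, Q):
    """mg of Zr reacted per cm^2 after t seconds at temperature T (K)."""
    return math.sqrt(A * t * math.exp(-Q / (R * T)))

BAKER_JUST = (33.3e6, 45_500)   # commonly cited constants
LOWER_RATE = (20.0e6, 45_500)   # hypothetical stand-in, NOT Cathcart-Pawel

T = 1477.0  # K, about 2200 F -- the range highlighted in the discussion
bj = zr_reacted(60.0, T, *BAKER_JUST)
lo = zr_reacted(60.0, T, *LOWER_RATE)
print(bj > lo)  # the lower-rate fit predicts less metal reacted
```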
There is a caveat though that we need to
be concerned about. This figure showed up earlier in
your package. It's in smaller form on page 34. But
what it shows is a pressure dependence in the zirc
oxide growth. The black solid line here is Baker-
Just. Now experimental data by Pawel and presumably
his correlation which matches his data very well was
done primarily at low pressure. This information is
down here. (Indicating.)
If you follow the experimental data as you
go up higher in pressure typically it would increase.
If it is used at high pressures, typical of small
breaks, 600 to 800 p.s.i., and you have to look very
carefully at the data, that's 40 to 75-bar in the
units here, you do see that the experimental data
starts to creep back closer to Baker-Just. So in a
risk-informed Appendix K you would say Cathcart-Pawel
is acceptable at low pressure. If it is used at high
pressure, some type of a correction would need to be
applied in order to ensure that it does not become
non-conservative.
MEMBER ROSEN: What do you mean "if it is
used"? It's going to be used for any accident that
hangs up in high pressure. Right? Where you have
damage at high pressure?
MR. BAJOREK: Well, this would be up to
the stakeholder to revise his evaluation model --
MEMBER ROSEN: I'm saying if it is used.
But all stakeholders have to analyze those kinds of
accidents.
MR. BAJOREK: They all have to analyze them.
MEMBER ROSEN: So I don't know why you are
being so permissive. I would have expected you to say
something like for accidents that hang up at high
pressure where core damage occurs at high pressure you
have to understand and use this data.
MR. BAJOREK: Okay, that might be a better
way of phrasing it.
MEMBER ROSEN: Unless you can rule those
accidents out in your plant. It's a plant specific
situation but I don't think you can.
MEMBER SIEBER: No, because you can get a
bubble. Have high pressure but no cooling.
MR. BAJOREK: It may be with the amount of
relaxation you get with decay heat you may not care
about replacement. You can adjust with Cathcart-
Pawel. So the option would be there to stay with
Baker-Just for a small break calculation.
MEMBER ROSEN: Yes, I think some places
may just replace the decay heat term and not this.
MR. BAJOREK: If you are limited for large
break at low pressure then it's a fairly simple change
MEMBER WALLIS: Let me ask you. This
pressure dependence -- you said that what's happening
here is a diffusion limited reaction. Is the
pressure dependence that is seen here modeled by
taking into account the effect of pressure on density
and diffusivity and so on?
MR. BAJOREK: No, Cathcart-Pawel does not
account for that.
MEMBER WALLIS: It's the intelligent
thing to do only if you have the mechanism which says
it's a diffusion limited reaction. Then you ought to
be able to figure out whether that is reflected by this
trend with pressure or not. If it's not, then change your
idea about it being a diffusion limited reaction.
MR. SCOTT: Steve, could I interrupt?
This is Harold Scott. Some of the literature suggests
that maybe it's the formation the way this oxide layer
forms and whether it cracks or not at higher pressure.
So it's still a diffusion but if the layer's really
not as thick as you think it is because it's cracked,
it's easier to diffuse and therefore the same
phenomenon doesn't occur.
I'd also mention that this chart shows 900
C and we've been talking 1000, 1100, 1200. The
pressure effect doesn't appear at higher temperatures.
MEMBER WALLIS: So it's not so simple.
MR. SCOTT: It's not so simple. Right.
MR. BANERJEE: Is the oxidation limit
still going to be left at 17 percent?
MR. BAJOREK: That would be in the
acceptance criteria.
MR. BANERJEE: But if you are changing
this are you going to change that to 13 percent or what?
MR. BAJOREK: Not necessarily, no.
MR. BANERJEE: Because you told us that
the oxidation limit was being set by the embrittlement
experiments which were based on Baker-Just. So if you
get rid of Baker-Just and replace it then you should
change the oxidation limit as well.
MR. BAJOREK: That was the use of Baker-
Just in the original --
MR. BANERJEE: To calculate the limit.
MR. BAJOREK: To calculate the limit back
in '73.
MR. BANERJEE: Right. But you are
changing that now.
MR. BAJOREK: In '80 they went to
measurements of that zirc oxide thickness. They got
away from the use of Baker-Just --
MR. BANERJEE: And did they find 13
percent or what?
MR. BAJOREK: No, they found 17 percent.
They were able to justify 17 percent using the tests
at Argonne which went to more of a toughness --
MR. BANERJEE: I'm puzzled now. You told
us that the experiments that were done found 17
percent on the basis of the Baker-Just. Correct me if
I'm wrong.
MR. BAJOREK: Say it again so I'm sure.
MR. BANERJEE: Okay. You showed us some
experiments and you said that in the first experiments
MR. BAJOREK: The ring compression tests.
MR. BANERJEE: Yes. You said that they
calculated that 17 percent in that temperature range
based on the Baker-Just correlation.
MR. BAJOREK: Right. When they did those
tests they calculated the oxidation.
MR. BANERJEE: There is something
inconsistent which I don't understand. I'm just
asking for clarification.
MEMBER ROSENTHAL: Can I try? I believe
and of course we are back on the criteria. Let me
just say that at least my mental model is that we will
get rid of the 17 percent. We will get rid of the
2200. We will go to some material property because
our vision is and I know it's fuzzy that you will have
a free standing core. You wouldn't want a debris bed.
You'd want it still standing which is some material
property. We would just plain get rid of the 2200 and
17 percent.
Having said that, when the original work
was done we didn't know about hydrogen embrittlement.
In about 1980 we knew about hydrogen embrittlement
which made things worse. The Japanese adopted a
slightly different standard. In the U.S. it was Gunam
(PH) He Chung and company at Argonne who went ahead
and looked and said okay, if I account for hydrogen
embrittlement and if I do an impact toughness test will
I have integrity of this.
They concluded that although the 17
percent, 2200 might not be quite right it's okay. It
ensured safety even though we now knew about the
hydrogen embrittlement that we didn't know about at the
original time. So the story just gets more and more
complex the deeper and deeper you look.
MR. BANERJEE: I get more and more
confused by the moment.
MR. BAJOREK: Wait a second. Now if we
were to go to this, that would be one way of specifying
the material, but you still have to calculate what the
cladding does and what the core temperatures are when
you are doing an evaluation model, and you need a
correlation now to predict that energy generation due
to the metal-water reaction.
We would recommend then going to Cathcart-
Pawel. What it's calculating with respect to
oxidation may not matter anymore because it may be
other criteria that are used to gauge whether that
clad survives to the quench.
MR. BANERJEE: If you change everything
consistently that's fine. But if you change one thing
and leave the other then it's not consistent. So if
you use Cathcart-Pawel then you should change the
oxidation criteria.
It seems to me that if you are basing it
on those tests then they would have to change because
you just said that Baker-Just was used to calculate
the amount of oxidation in the first set of tests
where they were hitting it with a hammer. I don't
remember where it was but something or other they were
hitting it with.
So to be consistent you must change
everything consistently or you completely disassociate
the oxidation calculation from Cathcart-Pawel. It has
nothing to do with the oxidation calculation then.
MR. BAJOREK: Steam cooling below 1 inch
per second is one of the others. During refill and
reflood right now in Appendix K if that flooding rate
drops below one inch per second, you need to ignore
the entrainment, any droplet interaction, and have to
go to a convective cooling only type of correlation.
MEMBER ROSEN: Could you show us
physically for those of us who don't understand the
whole history?
MR. BAJOREK: I'm sorry.
MEMBER ROSEN: Why do you have to just
neglect it if you're filling this thing up less than
one inch per second? Is that what it is? You can't
take credit for steam cooling if you are reflooding at
one inch per second.
MR. BAJOREK: Let's just think of the
physics for a second.
MEMBER ROSEN: Maybe I have it wrong.
MR. BAJOREK: You have the core sitting
there very hot. When water hits the bottom or very
close to the bottom of the rods there is a lot of
energy released, quenching. The vapor generation and
vapor velocities are sufficient to entrain droplets
and bring those up through the core. Those droplets
can strike the clad. You can get radiation to the
droplets. You have other mechanisms for cooling that
are available with those droplets in your flow field.
In Appendix K when they envision this they
thought when you get down to a very low flooding rate
you may not have the steam velocity sufficient to
entrain those droplets. So ignore their effects and
assume that your heat transfer is solely by convection
from the wall to the steam that is flowing through the
hot assembly.
MEMBER ROSEN: It's not obvious why that's
true, why you wouldn't still have the velocity, but
that's just history.
MR. BAJOREK: That was the assumption that
was made.
MEMBER ROSEN: Dr. Wallis could tell me
right away but he's gone.
MR. BANERJEE: The steam velocity has to be
roughly 1,000 inches per second if the water is coming
in at an inch per second.
MR. BAJOREK: Our experimental tests have
universally shown that you almost always get droplets
entrained even from very low flooding rates. This
figure which is not in your package shows the
carryover fraction meaning how much of that liquid
that's brought into the bottom of the bundle is
entrained and carried all the way through up to the
upper plenum of that test facility.
There are various flooding rates here.
The two lowest curves here are an inch per second and
eight-tenths of an inch per second respectively.
(Indicating.) But notice once you get out into the
transient even those very low flooding rates are
entraining better than half of the fluid that is
coming in at the bottom of the bundle so restricting
the calculation artificially to convective cooling
only just because it's an inch per second doesn't make
sense looking at the data.
Now the only time you do have a period
where you would say that it's steam cooling only is
this part very, very early when your quench front,
your water, is moving over those parts of the rods
which are so cold they can't vaporize enough of the
water. But that's very short and if we look at other
tests at even lower flooding rates, you see the same thing.
We see it in some more modern tests. We
were up at Penn State for the rod bundle heat transfer
tests watching some of those and with better
instrumentation now than what these were run with,
laser camera and much faster optics. It's obvious
that as soon as the water hits that bundle that
there's a very high fraction. In fact, it's even
greater in those tests than what were in Stein and
Flecht. So our recommendation is that this steam
cooling requirement is invalidated by the data we've
seen. There's is really no sense in keeping it.
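The carryover fraction being read off these figures is just the entrained share of the liquid supplied to the bundle; a trivial sketch with invented numbers, not Flecht data:

```python
# Sketch: carryover fraction as described for the reflood tests -- the
# share of liquid entering the bundle bottom that is entrained and
# carried to the upper plenum. Numbers are illustrative, not Flecht data.

def carryover_fraction(m_in, m_out):
    """m_in: liquid mass into the bundle; m_out: liquid carried over."""
    return m_out / m_in if m_in > 0 else 0.0

# Early in the transient the cold rods quench without much entrainment;
# later even a ~1 in/s flooding rate entrains better than half the inflow.
history = [(10.0, 0.5), (50.0, 20.0), (100.0, 60.0)]  # (kg in, kg carried over)
print([round(carryover_fraction(mi, mo), 2) for mi, mo in history])
# → [0.05, 0.4, 0.6]
```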
MEMBER KRESS: The carryover amount is not the
full story because the effectiveness of the
droplets depends on their size.
MR. BAJOREK: That's right.
MEMBER KRESS: Which will depend on this
velocity to some extent.
MEMBER KRESS: And I don't see that
reflected in what you say.
MR. BAJOREK: We see this as a way of not
having to require the steam cooling requirement. Now
modeling the process will take more work.
MEMBER KRESS: So you would have to maybe
consider the size of these droplets when you do the analysis.
MR. BAJOREK: In your evaluation now are
you predicting entrainment correctly? Are you
predicting the right droplet size? Is there droplet
impaction and interaction with the rods? You would be
allowed to do those but you would have to use
experimental data to justify those models that you
would want to have.
MEMBER ROSEN: Is this a big thing now?
How big a deal? Is this a big positive --
MR. BAJOREK: It depends on the transient
length. Unfortunately we do not have any good numbers
on what this is. I would venture that it's 100 or 200
degrees F over a course of a large break transient.
I'm going to show you some numbers for some of these
effects in a second.
MR. SCHROCK: I have one comment related
to that. Maybe Professor Schrock of Energy might want
to comment on that too. I think a lot of the reason
for the steam cooling only was that it was a regime
the heat transfer coefficient is not very well known.
As a result of that it was a conservative assumption
to assume steam only. Now I don't believe that the
heat transfer coefficient is much better characterized
today in the post CHF regime. So I'd say how are you
going to take this into account?
MR. BAJOREK: This is where I start
looking at best estimate models as really having a
distinct advantage. Because to answer that question,
we would have to do simulations against Flecht and other
experimental data, characterize the performance of the
code with an uncertainty and then use that uncertainty
in your evaluation model. That's the way it was done
in the past when I was involved, and it was
approved. How this would get incorporated into an
Appendix K model is something that still has to be
determined. We're not making a recommendation that
this is necessarily easy.
MR. SCHROCK: And a lot would say it's not
easy from what I know.
MR. BAJOREK: I would agree.
MR. SCHROCK: They still can't predict the
progress of a reflood front because of the inability
to predict this precursory cooling that occurs above
the front. So you would be involved in significant
uncertainty there I think.
MR. BAJOREK: I agree.
MR. SCHROCK: Sanjoy, are you going to
comment on that?
MR. BANERJEE: Well, one of the problems
also with LOFT was that there were external
thermocouples that were preferentially -- which is why
you can't take any credit for rewetting after the
dryout has occurred.
MR. BANERJEE: The same problem would
occur here. It would be very difficult to justify
credit for droplet heat transfer I would think.
MR. BAJOREK: Yes, so it doesn't
necessarily make it easier.
MR. SCHROCK: But it would take it out of
the realm of a prescriptive requirement and put it in
the realm where engineers could do the best
experiments, the best analysis and make their case.
MR. BANERJEE: The problem is the case is
never clear, so they can always lash together a case
which convinces some people. I remember we spent
years over LOFT because some people would maintain
that yes this was a real effect and some people would
say that no it isn't.
It is the external thermocouples. So the
conservative approach in that case was to say no. You
don't get a secondary rewetting. Probably here it's
conservative to say you just get steam cooling. It
will close a can of worms. Now you can open one. It
will take you a long time to settle it.
MEMBER ROSEN: Now wait a minute. I
think what Jack is saying is that if the licensee, meaning
the fuel vendor, wants to do experiments or whatever --
MEMBER ROSENTHAL: If he thinks the value
of this is high enough that he would like to set up a
loop and do some good experiments with modern
instrumentation and data acquisition and bring that in
and show it to the staff he might actually advance the
science. Then if you were convinced.
So it would become a commercial advantage
for some vendor perhaps to do this sort of thing. I
think it seems to me a good thing to do to at least
set up a playing field in which vendors might be
tempted to do that.
MR. BAJOREK: That's basically the
approach that was adopted for the best estimate rule.
This was eliminated. You didn't have to assume steam
cooling but taking advantage of it meant many
simulations, characterization of the code, models and
correlations in ways that were very difficult to get approved.
MEMBER ROSEN: So if a licensee has some
kind of problem with ECCS in some future time he can
go to his vendor and say I need some more help. He
can say well if you want to pay for this or join me in
paying for these tests it's possible that we can show
etc. As long as the regulations allow the showing to
be made, you have provided some flexibility to the
industry to move ahead.
One of the things that I've been arguing
before with Peter, when we were thinking about research:
I've been arguing that we in the ACRS need to
encourage things that look a little bit more to the
future. We're not always getting there. The day we
get there we have to say something like we can't do
that because we don't have the research. Instead of
that we do the research and maybe that enables some
things. Here's a case of that.
MEMBER KRESS: Could you just spend one
minute on the rod bundle heat transfer experiments
that we are now doing?
MR. BAJOREK: Yes, the figure I showed you
earlier was from the Flecht series of experiments. It
was done in the mid '70s to the early '80s. This took
a look at a full height bundle up to 161 rods, a
fairly large bundle well instrumented with
thermocouples and DP cells. Essentially it's the
basis right now for developing your models for heat
transfer for any of these evaluation models. There
are other tests but these are the ones which are
principally referenced.
Well, there were some shortcomings in
Flecht. The DP cells were very far apart which made
it very difficult to determine what was the void
fraction. They had windows on there and the only way
you could get an idea of the droplet size was to take some
rapid movies which were good for only a few seconds.
You let a technician go in there and count the
droplets and measure them. I knew the guy who did
that. He quit.
You had very limited ways of getting the
information from Flecht but it was very useful and
demonstrated the conservatism in many of the Appendix
K models that we are looking at now. It was realized
that in order to get better models, best estimate,
more realistic models for droplet breakup, grid
effects and heat transfer, film boiling and things, we
needed tests with better instrumentation.
Several years ago a bundle was constructed
at Penn State that was making use of much more
detailed instrumentation, more thermocouples, more
DP cells. They have several windows and a laser
camera that we saw a couple of weeks ago. We're still
three feet off the ground because when they ran a test
with the visual cameras we saw the entrainment.
With the laser camera, they had
essentially a real time measurement of the droplet
size and distribution above and below the grids. The
grids, by the way, were again demonstrated to have an
enormous effect by breaking up the droplets, stripping
away the boundary layer and causing it to be reestablished.
You see the rods red hot, then a grid, then cold. It
goes up the bundle that way.
PARTICIPANT: It gets hot again.
MR. BAJOREK: Hot again and cold. Hot
again and cold.
PARTICIPANT: Above the grid.
MEMBER KRESS: The major effect is the
heat transfer between the steam and the droplets.
They lag. They have a higher MCp, and the
steam gets heated by the rods and passes that heat on to
the droplets and the whole thing just cools down.
MR. BAJOREK: There are several effects.
MEMBER KRESS: Yes, there's radiation and
then there's droplet impingement. That may be, but in
the calculations it was mostly the droplet and
steam interaction.
MR. BAJOREK: Yes, and you saw in the --
MEMBER KRESS: That's a strong function of
the droplet size. When you go by those grids and
break it up it really makes a big difference.
MR. LAUBEN: Break up the properties.
MEMBER KRESS: And the grids just break it
up into really small droplets.
CHAIRMAN SHACK: We're running out of
time, gentlemen.
MR. BAJOREK: Okay. Anyway we're getting
more information that adds to the support of these
conclusions especially for reflood. Let's move on.
I think I just heard Dr. Banerjee also read my next
overhead here.
When it comes to allowing rewet during
blowdown, here we don't feel there's a real strong
case, not that we don't think it will occur. But the
tests that have been run, like LOFT with the external
thermocouples, Semiscale which had some questions on
its scaling, and other tests which have been run with
Inconel as the cladding as opposed to zircaloy, and
knowing that there's a major material effect on
minimum film boiling, lead us to the recommendation
that we leave this one go and pursue the others first.
There may be better information to change this or to
do it under a best estimate context. But doing it
right now under Appendix K we think would be wasting
people's time.
The final thing that I want to go over is
with what we call the Appendix K Non-Conservatisms.
First let me define what that is. We see three
different ways of something being non-conservative in
Appendix K. The first and what we focused on are
those physical processes and phenomena that have been
identified through experimental programs since 1973.
If they didn't know about them, they couldn't put them
in the rule. They didn't know enough about them in
1988 so they couldn't have been captured in a rule
change then.
In addition we've known for quite some
time that these codes have very large calculational
uncertainties. We recognize that. It hasn't gone
away. We realize that if we take margin out for
whatever reason we have to account for the accuracy
and uncertainty of the code in addition to these new phenomena.
Now the processes that we've identified
over the last few months which are strong candidates
that need to be corrected are downcomer boiling,
reflood ECC bypass and fuel relocation. Let me just
take a couple of minutes on each one to characterize
what they are and I'll show some effects of all of them.
Downcomer boiling -- you can read that in
the interest of time. Typically you assume that during
reflood the accumulators and your low head pumps come
on and your downcomer fills. New experimental data
from CCTF and UPTF however show that after some period
of time, I call it about 200 seconds, enough energy
comes out of the vessel wall and the lower internals to
start subcooled and saturated boiling in parts of the
downcomer.
As the downcomer froths up part of that
liquid is pushed off into the break, boiling continues
and the net result is a downcomer that is partially
voided. This results in a driving head that is much
smaller or can be much smaller than what it would be
if you did the typical Appendix K assumption that your
downcomer is full and you ignore boiling. I'll show
some effects on that in a second. Let me just go
through the other ones we've identified.
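The effect being described, less gravitational driving head from a partially voided downcomer, can be sketched with assumed densities; all numbers here are illustrative assumptions:

```python
# Sketch: reflood driving head from the downcomer. A partially voided
# downcomer (void fraction alpha) supplies less gravitational head than
# the full-downcomer Appendix K assumption. Densities and height are
# illustrative assumptions, not plant values.

G = 9.81        # m/s^2
RHO_L = 900.0   # kg/m^3, liquid near reflood conditions (assumed)
RHO_V = 1.0     # kg/m^3, vapor (assumed)

def driving_head(height, alpha=0.0):
    """Static head (Pa) of a downcomer column with uniform void fraction."""
    rho_mix = (1.0 - alpha) * RHO_L + alpha * RHO_V
    return rho_mix * G * height

full = driving_head(5.0)               # full downcomer, Appendix K picture
voided = driving_head(5.0, alpha=0.3)  # boiling has voided 30% (assumed)
print(round(voided / full, 1))  # → 0.7: noticeably less head to drive reflood
```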
Very closely related to that is downcomer
bypass during the reflood period. This is observed in
some of the UPTF tests and to a smaller extent in
CCTF. If my downcomer is pretty close to being full,
sufficiently high steam velocity coming from the
intact loops could entrain part of that liquid, carry
it off and throw it out the break. Like downcomer
boiling this depletes the driving head during reflood.
My reflood rate is slower. This is a non-conservatism
if you don't at least account for it.
The other one has been around for several
years. We hope to get better information in some of
the newer tests that are being devised right now.
They are going to be running some tests with better
instrumentation on the nuclear rods to try to get at
fuel relocation which has been observed in tests in
Germany, France and the U.S.
When we get this ballooning that occurs in
the rod, it's possible that these fragmented pellets
due to the vibrations can migrate down into the burst
and rupture zone. The typical assumption in Appendix
K is that these pellets remain as a concentric stack.
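The local power increase from relocation can be sketched geometrically. Every number below (pellet diameter, ballooning strain, packing fraction) is a hypothetical placeholder for illustration only; the point is that fragments filling an enlarged cross section raise the local fuel mass, and hence the local linear power, above the intact-stack value.

```python
import math

nominal_d = 8.2e-3     # m, assumed pellet diameter
balloon_strain = 0.30  # assumed 30 percent cladding hoop strain
packing = 0.70         # assumed packing fraction of relocated fragments

area_nominal = math.pi * (nominal_d / 2.0) ** 2
area_balloon = math.pi * (nominal_d * (1.0 + balloon_strain) / 2.0) ** 2

# Local fuel mass (hence local linear power) scales with the fragment
# mass that fills the enlarged cross section.
power_mult = packing * area_balloon / area_nominal
print(f"local linear power multiplier ~ {power_mult:.2f}")
```

With these assumed inputs the multiplier comes out above one, which is why the ballooned region stops behaving like a fin once relocation is considered.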
Now I was talking to Dr. Ford who said why
is this cladding temperature going down after it
swells. It's good because you've swollen the cladding
away from its heat source. If you are at low
temperatures and zirc-water doesn't make any
difference, this is a fin.
It's not a fin if you consider fuel
relocation. It becomes much worse if there is a
rupture involved and you have zirc-water reaction
because now you've relocated the pellets, your local
power is increased, you have very good communication
now between the pellet fragments and the cladding
itself. I have lost that fin effect. You see varying
estimates on this. But we are identifying this as
something that needs to be accounted for in future
evaluation models.
We've thrown a lot of different processes
and changes at everyone, looking at a change to decay
heat, a change to the zirc-water reaction, a look at
downcomer boiling and what not. What we've tried to
do is to go through documented literature, information
we see in journals, information that has been
submitted to the staff, other information that is
publicly available to try to gauge what is it we are
giving away. If we say you have to account for these
non-conservatisms, what does that throw back at you?
In the tables that follow you can see some
of these numbers. Decay heat for a large break and
I've broken this into a large break and small break
table. For large break, typically you see something
like 400 degrees as being the benefit by going from
ANS71 plus 20 percent down to something realistic.
Most of this is with the `79 standard. With `94 you would expect
that to increase, but a rule of thumb may be 400 degrees.
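To see why the plus 20 percent multiplier matters, here is an editorial sketch using the simple Way-Wigner approximation to fission-product decay power. This is not the ANS-1971 or ANS-1994 standard discussed in the testimony, just a rough stand-in to show how a flat 20 percent adder inflates the heat load an Appendix K calculation must remove.

```python
# Way-Wigner approximation: decay power / operating power at time t after
# shutdown, for a core operated for T seconds. Illustration only; the
# licensing curves are the ANS standards, not this formula.

def decay_power_fraction(t, T=3.0e7):
    """Estimate of P_decay/P_0 at t seconds after shutdown (T ~ 1 year)."""
    return 0.066 * (t ** -0.2 - (t + T) ** -0.2)

for t in (10.0, 100.0, 1000.0):
    best_estimate = decay_power_fraction(t)
    appendix_k = 1.20 * best_estimate   # the "plus 20 percent" margin
    print(f"t = {t:6.0f} s: realistic {best_estimate:.4f}, +20% {appendix_k:.4f}")
```

At a few percent of full power in the minutes after shutdown, a uniform 20 percent adder is a substantial artificial heat load, which is the margin being discussed here.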
Changing from Baker-Just to Cathcart-Pawel
provided you keep your core temperatures high, that
change is something on the order of 50 degrees. It's
worth zero if your peak cladding temperatures are low.
So again you would have to have a power increase to
get you back up to where that benefit would be realized.
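The Baker-Just versus Cathcart-Pawel comparison can be sketched from the parabolic rate law each correlation uses. The coefficients below are the commonly quoted values expressed on a common oxygen-weight-gain basis; treat them as assumptions to be checked against the source reports, not licensing-grade numbers.

```python
import math

R = 1.987  # cal/(mol K)

def baker_just_k(T):
    """Parabolic constant, (mg Zr reacted / cm^2)^2 per second (commonly quoted form)."""
    return 33.3e6 * math.exp(-45500.0 / (R * T))

def cathcart_pawel_k(T):
    """Parabolic constant for oxygen weight gain, (g O / cm^2)^2 per second (commonly quoted form)."""
    return 0.1811 * math.exp(-39940.0 / (R * T))

def bj_on_oxygen_basis(T):
    # Convert Baker-Just from mg Zr reacted to g oxygen gained:
    # ZrO2 picks up 32 g of O per 91.22 g of Zr, and 1 mg = 1e-3 g.
    return baker_just_k(T) * ((32.0 / 91.22) * 1e-3) ** 2

T = 1473.15  # 1200 C, the Appendix K peak cladding temperature limit
print(f"Baker-Just    : {bj_on_oxygen_basis(T):.2e} (g O/cm^2)^2/s")
print(f"Cathcart-Pawel: {cathcart_pawel_k(T):.2e} (g O/cm^2)^2/s")
```

Under these assumed coefficients Baker-Just gives the larger rate constant at high cladding temperature, which is the conservatism worth roughly 50 degrees that the testimony describes.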
Now a bit surprising has been the
estimates that we've seen publicly and submitted to us
for downcomer boiling. If we take a look at a
Watts Bar FSAR that's been submitted, and we look at the
peak cladding temperature that you would get before
downcomer boiling occurs and a second peak that occurs
later in time after downcomer boiling, we're seeing an
increase in the PCT of 400 degrees.
We've done some other calculations, or I
should say one of our contractors has done them, using
RELAP for a System 80+ unit, uprated. This
exaggerates the effect because the transients are so
much longer when you go to an uprated condition. If
you look at that, they are looking at 800 degrees. I
think that's a problem with the code in that the
interfacial drag for that part of the downcomer is too
high. I think that's exaggerated.
If I do another code for a CE plant at much
lower power with a shorter transient, I see a smaller number.
If I had to take a pick of these I would probably look
at something like this 400 degree as more of an
estimate of what this effect is for a unit that has a
long reflood and you have a chance for the downcomer
to boil.
Short transients you don't see it. The
reason you didn't see this before is because with best
estimate now people are uprating. Transients are
getting longer and you're allowing that 200 to 250
seconds to pass by so that your downcomer can boil.
Other estimates on the table for fuel
relocation originally had been estimated at 40 some
degrees. But the French have some recent work looking
at different filling fractions. Their estimate is
higher, 300 degrees.
I've also listed some estimates on code
uncertainty. In 1986 when they refused to change
decay heat because of large uncertainties, they didn't
know what they were. At least now we can go through
and look at some of the best estimate codes that have
been used: WCOBRA/TRAC with Westinghouse, Siemens has
a model, GE has a semi-best estimate approach. Look
at a 95th percentile compared to a 50th percentile PCT
we see numbers that are typically on the order of 300
or so degrees between what a realistic 50/50
temperature would be and what happens if you have to
account for heat transfer uncertainty. I think that's
where a lot of that is really coming from in these
estimates.
Small break is much harder to define
because it hasn't been the leading accident for very
many plants. We get very large estimates on what the
benefit would be in going from `71 to a more
realistic curve, anywhere from 500 to 1000 degrees. For
metal-water reaction there are a couple of estimates
very similar to what we see for the large break, less
than 100 degrees.
No one has produced a best estimate small
break model so we can't assess the uncertainties. But
numbers that have been reported to us by people that
have played games with nodalization, looked at
operator action, looked at models for the loop seal
clearance and level swell and what not show that you're
looking at numbers on several hundred degrees up or
down depending on how conservative or non-conservative
your model may have been in the first place.
So our recommendations with regards to
evaluation model changes or excuse me due to non-
conservatisms is that if we go to a performance based
Appendix K we feel it's important to include the non-
conservative effects of downcomer boiling, ECC bypass
during reflood and fuel relocation. Now these will
likely be pursued outside of rule making because
these same issues affect plants and their evaluation
models now. So that's why it would be separated from
the rule making.
But when we start looking at the numbers
for code uncertainty, plus 400, minus 400 we feel very
strongly that if we go to this optional Appendix K
there must be something in the regulatory process that
makes people demonstrate that there is sufficient
conservatism in that evaluation model.
We don't have this plus 20 percent on
decay heat to give everybody the assurance. We could
be sloppy in some models because it's accounted for
somewhere else. As those major models become more
realistic it's going to become important that we find
a way to demonstrate that there is still the
conservative intent that was there in 1973.
MEMBER BONACA: Maybe Appendix K is not
conservative if you adopt these numbers regarding code
uncertainties and fuel relocation as well as downcomer
boiling.
MR. BAJOREK: The caveat I would throw at
that is you're seeing these numbers. You see some of it
in the data, but it's hard to estimate the PCT because
those big numbers are coming from codes. If Dr.
Wallis were here, he would say I don't believe those
codes.
MEMBER ROSENTHAL: And they are non-additive?
MR. BAJOREK: Yes, they are not additive.
In the experiments it didn't look like a big effect,
but in the package that I think you have we note that if
you look at the scaling for those tests, those weren't
designed to look at these issues. So the amount of
energy in the downcomer versus in what you have in a
PWR is much smaller in the tests. So you wouldn't
expect the tests to predict those effects to anywhere
near the magnitude like it's being predicted. Your
only conclusion at this point is that maybe the code
is right because it's the only thing that we have to
try to estimate the magnitude of those effects right
now.
MEMBER ROSEN: Steve, comment on this
concern. The `94 model, the decay heat, is the pure
physics that hardly anybody argues with. So going
ahead with that makes obvious sense. The staff's
concern about non-conservatism is also real. But to
equate the two somehow doesn't intuitively make as
much sense.
You're saying okay we're going to take
credit for this. Let people take credit for this but
hardly anybody argues about that. But at the same
time you have to factor in all these non-
conservatisms. But the state of the art of these
non-conservatisms is kind of uncertain compared to the
decay heat curve. How do you reconcile that?
MR. BAJOREK: I guess my own view on that
is that's why you need to have realistic codes
assessed to get an uncertainty. The problem that we
see with plus 400 and minus 400 is that you get into
this game of compensating errors. This is okay
because I know I'm high here and I'm low over here.
But until you can come up and can quantify your
accuracy with the code uncertainty or some other
technique the answer is still wanting I think.
MR. SCHROCK: Along those lines --
MEMBER ROSEN: That's not exactly the
answer to my question. The positive change for the
decay heat there is fairly solid. Whereas all the
non-conservatisms, albeit they're there -- I
understand the mechanisms that you are worried about
and I'm worried about them too. But there is so much
uncertainty with respect to all of those. How do you
equate those things, something without any uncertainty
effectively with something that has tons of
uncertainty? I mean even the models might be
completely wrong.
MR. SCHROCK: Can I make a comment on
that, Steve? There are some things that aren't even
included in the codes for example heat transfer
correlations, drag correlations. They're all steady
state. They don't have any transient effects in them
actually so it's kind of a quasi-steady model of the
process. Yet you speak of them from the best estimate
point of view. Those effects are not even included in
these analytic models.
It seems clear to me that you're going to
need some bias however you want to come up and justify
it. Appendix K was kind of a gross attempt at putting
bias into these calculations so that when you use them
to predict a course of an accident in a plant you can
be quite assured that it was a conservative
calculation.
Now they erred in some directions as you
say. There are some things that are non-conservative
that, as you say, you'd like to take advantage of, but I believe
you will never get rid of some bias. You have to
account for things that the code just is not an
adequate model for. I don't see any way to get around
it. You talk about best estimate codes as though
somehow they're going to be exactly correct, or at
least correct in some probabilistic sense.
But there are some things that are
probabilistic about the models that you have in there,
like correlations for drag, correlations for heat
transfer -- there is data error in those and things like that
which could be accounted for in that way. But
there are other things that aren't even included or
approximations that had to be made for example one
dimensional flow in pipes. Well the flow is not one
dimensional but that's a reasonable approximation.
You will never agree exactly with the physics. So
somehow you have to account for these limitations you
might say by some kind of bias I would guess as well
as some statistical uncertainty.
MEMBER KRESS: I don't think the bias is
the right way to go.
MR. SCHROCK: Well, I don't know what you
want to call it, whether you want to call it bias or
not.
MEMBER ROSENTHAL: Let's take advantage of
the fact that it's been 30 years since we did Appendix K. There are
some things for which we have better knowledge. I
think that we in general think that we have better
codes also. We're talking about large potential
reductions in unnecessary burden. I mean big changes
which will be taken either in operational flexibility
like FQ or taken in just plain power uprates.
So by taking out the prescriptiveness of
Appendix K and at least allowing a K prime -- not all
licensees may choose to go to best estimate models --
it puts it in a realm where the vendor, the
licensee, could come in, take the pluses, take the
minuses, make the best story with science circa 2002, let
Ralph Caruso review it as he would any other submittal
and so let the science move forward from where you
were locked in 30 years ago.
MEMBER KRESS: I was about to say pretty
much what you were saying, Jack. The proper approach
for this is to have a best estimate situation and the
way to have one is that you have to quantify the
uncertainties. Now you don't have best estimate
unless you quantify the uncertainties. I think that
we ought to equate those two together.
How you quantify the uncertainties is
there are as many opinions as there are people
probably but you have to somehow do the Monte Carlo or
the things you know the distributions for, you have to
account for model uncertainties by an expert opinion,
you have to do all sorts of things. But the best
estimate process should quantify your uncertainties
for you.
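The Monte Carlo uncertainty propagation Dr. Kress describes can be sketched in a few lines: sample the inputs whose distributions you know, run the evaluation model, and read a 50/50 and a 95th percentile off the resulting PCT distribution. The toy_pct_model and every number in it are hypothetical placeholders, not any real ECCS evaluation model.

```python
import random
import statistics

def toy_pct_model(decay_heat_mult, htc_mult):
    """Hypothetical stand-in for a best estimate code: PCT in deg F."""
    return 1600.0 + 900.0 * (decay_heat_mult - 1.0) - 400.0 * (htc_mult - 1.0)

random.seed(0)
samples = []
for _ in range(10_000):
    # Decay heat is known tightly; heat transfer much less so.
    dh = random.gauss(1.0, 0.02)
    htc = random.gauss(1.0, 0.15)
    samples.append(toy_pct_model(dh, htc))

samples.sort()
p50 = statistics.median(samples)
p95 = samples[int(0.95 * len(samples))]
print(f"50/50 PCT ~ {p50:.0f} F, 95th percentile ~ {p95:.0f} F")
```

Note how the 95th-to-50th percentile spread is dominated by the poorly known heat transfer input even though decay heat is nearly exact, which is the point about integrating unequal uncertainties into one answer.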
Then you have a baseline. You have a
distribution. Then you say I will allow an Appendix
K type calculation. The way to get the Appendix K is
to take your best estimate and say now what can I do
in the way of conservatisms so that I can give them an
easier way to do it but it accounts for the thing and
gives me basically the same answer with perhaps a
little bit of conservatism in them because I'm trading
off an easier way to calculate.
See we did it just the opposite. All we
did is start with the Appendix K and now we're going
to best estimate. We're trying to balance these
things off. The best way to do it is to start with
the best estimate model, quantify your uncertainties,
then work backwards to what you want for Appendix K.
MEMBER ROSEN: But the answer is not a
zero sum game.
MEMBER ROSEN: We should have no prejudice
about what the outcome is.
MEMBER KRESS: That's what I'm saying.
MEMBER ROSEN: It may be more restrictive
than the current techniques or less but we have to let
the science decide that not regulatory policy.
MEMBER KRESS: But what I was saying is
that it doesn't matter that your decay heat is known
much more precisely than some of these heat transfer
correlations in the model. You account for that in
your uncertainty. You integrate that in your
uncertainty analysis and you end up with final product
of the uncertainty in the outcome you are trying to
calculate. Of course you have to have acceptance
criteria there also which means to me I would have
some confidence level in the results you want. That's
another issue which is how do you arrive at that
confidence level.
MEMBER BONACA: Really, I would like to
say that I completely agree with that approach. The
only thing is that the shift brings you to the point
where there is even further burden on the staff to
review the proposed models that come in or present
certain approaches to address the issue such as
downcomer boiling and so on and so forth. Because you
are trading off something as you say, Steve, you know
pretty well, this conservatism in the decay heat for some
effect that you are claiming that is being modeled and
is going to be hard to demonstrate I'm sure. Some of
these are not going to be easy modeling. I don't
disagree that it's a better way to do it.
MR. CARUSO: Dr. Bonaca, this is Ralph
Caruso. My name has been thrown about a bit. I'm
just responding to your question about the burden of
doing the review. There is a certain school of
thought floating around that not much use will be made
of this proposed Appendix K prime because there is a
certain school of thought that thinks that best
estimate is the best way to go as Dr. Kress says.
Since all three vendors now have best
estimate methods and it costs them money to maintain
multiple copies of methods, it is in their best
interest to shift everyone
over to best estimate models. So I personally am not
too worried about the resource impact of this.
MEMBER BONACA: No, I wasn't talking about
that.
PARTICIPANT: That's because it won't get
used. At least that is an analogy to NFPA 805 maybe.
CHAIRMAN SHACK: I don't want to cut off
the discussion but we're running out of time and we
would like to hear from NRR. Can you make this ten
minutes, Sam?
MR. LEE: I might be able to do better
than that.
MR. LEE: Good afternoon. My name is Sam
Lee. I work for the policy and rule making in NRR.
The objective of my briefing is to tell you a little
bit about the rule making activities associated with
all the proposed changes that were discussed today
under this umbrella effort of risk-informing 50.46.
What you have before you is a two page table that
lists the proposed changes as well as the second
column showing some of the industry interest in the
rule changes by way of their rule making submittal.
But before I get into this I just want to
refer back to the slide that Alan showed you this
morning. As you look at where we are with respect to
our effort to risk-inform 50.46, we have reached
a major milestone of completing or nearly completing
all of the technical studies, we are in the phase here
of just beginning rule making effort so I just wanted
to let you know this is where we are.
I just wanted to share a couple of points.
I don't know that I will go through the detail of this
table but the point I wanted to make is that as we
discuss and consider four of the changes, these ECCS
evaluation acceptance criteria, the ECCS reliability
as well as the longer term redefinition of the LOCA, the industry
has submitted a related petition for rule making
associated with each of these proposed changes. So
that really is an indication that the industry is
highly interested in what we are proposing.
Where we are with respect to each of these
changes (1) with respect to ECCS evaluation model is
you've heard today that the technical study is
complete or near completion, upon which NRR, and not
just NRR but the staff, will form a working group which
will be composed of both NRR and research
representatives to tackle the rule making associated
with this proposed change. We as you can see here
have one technical report that was delivered and that
goes back to the third one that has to do with ECCS
reliability.
As Alan pointed out this morning, of
the two proposed pieces one being plant specific
approach and the other being the generic approach, the
plan for the plant specific approach was delivered
early May and the working group has been formed. We
are reviewing the report as well as identifying all
the milestones that we need to accomplish for
producing the rule making package associated with that
change.
The other important thing that I didn't
point out earlier is that the rule making associated
with making these changes is that we will have a separate
rule for each of these proposed changes. So what we
will have eventually is basically four working groups
each of them dedicated to each of these proposed
changes. That's our plan at this point. That's
about it. Are there any questions? I will be happy
to answer them.
MEMBER ROSEN: Are you going to link them
together, the whole idea being that -- I assume that the
staff is saying you can't just create the flexibility,
because you can't consider decay heat without considering
the other non-conservatisms.
MR. LEE: Absolutely.
MEMBER ROSEN: So if you don't link these
rule makings together you might get out of phase and
have permission to use the decay heat without considering
the non-conservatisms.
MR. LEE: Absolutely. We will have a
working group that is dedicated to each of these
proposed changes as well as an oversight group that
looks at the links between them.
MEMBER ROSENTHAL: If I could. There is
just a subtlety as follows: the rule speaks to the
`71 standard. So you can change the standard in the
rule. You need some means of addressing what you
know. You cannot un-know what you know. You know that
there are problems with just taking out the margin
without addressing these other matters.
That will be done right now within what
I'll call the regulatory framework. The regulatory
framework is some combination of our rules, our reg
guides, our SRPs, our reviews, topical reviews I mean
the greater regulatory framework. The regulatory
framework has to take on the down side but it may not
be in rule making per se. It is yet to be worked out.
MR. LEE: Does that answer your question?
MR. LEE: Any other questions?
CHAIRMAN SHACK: So it's envisioned that
you wouldn't haul out for example the acceptance
criteria separately. There would only be an
Appendix K prime that would link all these things.
There's not a K prime and a K double prime.
MR. CARUSO: I don't think we've gotten to
that point yet. We're going to have separate working
groups working on different parts of them. As you
said they are interconnected.
CHAIRMAN SHACK: Some of those criteria
seem a little bit less linked than others. Maybe.
MR. CARUSO: I had some ideas about how I
was going to word that yesterday. Not today.
CHAIRMAN SHACK: You have to be a little
bit clearer.
MR. CARUSO: I have a feeling that we will
be working on all four of them and tacking them to one
another. They will not be done in isolation.
MEMBER ROSEN: You did start out by saying
you didn't want (Inaudible.)
MR. CARUSO: Correct.
MR. LEE: That's correct. And we look
forward to having additional sessions like these to
inform you of the progress.
CHAIRMAN SHACK: What's the time frame for
the next session presumably at the end of July?
MR. GRIMES: This is Chris Grimes. We
have asked for time on the whole committee calendar in
July. At that opportunity I wanted to share what NRR
and research view as to the oversight functions that
are going to attempt to define some long range
outcomes that we want to see from all our rule making
and all of our risk-informed performance based risk
management program issues. So we expect to come back
to the full committee in July and provide you with a
status report and hopefully a better picture of where
we view the agency going in this whole arena.
MEMBER ROSEN: Will that include the
framework and I think that was Jack's word that will
keep us out of trouble here?
MR. GRIMES: Yes, what we call our
vision about coherence is where does the rule making
fit into the overall regulatory process? We would
expect to be able to describe the framework as Jack
describes it. There is also the use of the word
"framework" in terms of the Option 3 framework. We
need to evolve that.
It is something that is a more practical
picture about where we are going, a road map, and a
set of outcomes and program performance measures. How
will we know if our rule making is successful? We
need to be able to measure that.
MEMBER ROSEN: I would suggest that one of
the measures that you might want to think about is if
anybody uses any of the new flexibility.
MR. GRIMES: There's the thinking in the
sense that says is it worth the investment. Do we
have a customer base that's going to take advantage of
the rule making? That has to be a part of the vision
in the future as well.
CHAIRMAN SHACK: Any comments from any of
the other committee members? Questions? We have
about five minutes left.
MEMBER BONACA: The only question I have
is to what extent I mean I'm sure there is an
interaction going on with the industry. There are
three petitions from the industry and a fourth one
from Performance Technology. The sense I'm getting as
we go through is that maybe the expectations when we
went after 50.46 were higher than what the
research work that was being done may be able to
deliver, because of all the conservatisms and so on.
I'm sure that there is a dialogue with the industry
and the industry with us.
MEMBER ROSENTHAL: We have that with the
industry and we were planning a public meeting again
sometime in June. There are public meetings planned
on the reliability issue at whatever date. We were
going to try to organize a public meeting on the stuff
you heard this afternoon hopefully in June. We have
had meetings with regulated community with the public
in the past.
MEMBER ROSEN: Are you talking about June
of this year? That starts tomorrow.
MR. LEE: Yes. With regards to the
reliability piece we have had as Alan mentioned this
morning an on-going meeting with the industry almost
on a monthly or bi-monthly basis. We will have
another one at the end of June to talk about the
condition of -- probability as well as LOCA
frequencies. So we have engaged the industry along
the process.
MEMBER BONACA: The reason why I am asking
that is two years ago there was also a sense in the
industry that, for example, the 1.2 multiplier on the 1971
decay heat curve was like a freebie. It was just there for the
taking. We got a different kind of message when Steve
presented his presentation and showed the effects of
downcomer boiling and other effects that were not
accounted for and the trade-offs with that.
Now I hope that it's already sinking in out
there that the decay heat by itself is not another freebie;
there are other things that are on the table. If you
take that, you have to pay some attention to other
effects. That's why I'd like to know where the industry
is and I'd like to know if there is open communication
and understanding of these issues.
MR. CARUSO: I would like to add that at
a higher level we have a white paper from NEI on their
view about a risk-informed performance based
regulatory structure for future reactors which we are
taking as input to the framework question that Dr.
Rosen referred to. That is how are you going to
structure all of these activities. They expect to have a
series of meetings not just on their ECCS but we have
a proposed 50.44 on the street now. We also have the
future reactor activities.
Sometime this summer we would like to hold
a workshop and try to sort out all of these in terms
of the sequencing and the timing and the utility. I
would hope that the workshop would provide us not only
the industry perspective but also that of the public
interest groups. I've asked the public interest
groups to prepare to participate in such a workshop.
MEMBER CRONENBERG: Chris, they didn't see
the NEI 02-02 yet because I don't know when I will get a
complete package from you for the July meeting. But
that will be mailed out to you I'm sure.
MR. CARUSO: The public interest groups
received NEI 02-02 directly from me in an E-mail.
MEMBER CRONENBERG: But these guys didn't.
MR. CARUSO: These guys didn't, that's
correct. The ACRS did not.
MEMBER ROSEN: I think I want to take back
something I said a minute ago after thinking about it.
We're in an open thinking session here. I said that
one of the things that we want to think about is
whether anybody would use this stuff implying that if
the word gets out that some of the requirements are so
injurious let's say that people would just back away
from the whole thing and that you shouldn't do that.
I take that back. I think even if it
turns out, and nobody knows how it will turn out,
whether this will provide more or less
flexibility if you went through to the end with all of
this, I think it's the right thing to do to the
extent that we really can better model the actual
physical processes.
You shouldn't stick on the 1971
technology. You should move ahead even if it turns
out that maybe people won't use the new flexibility.
That's really not a good reason to not use better
technology and put it in place.
MR. SCHROCK: Another way of looking at
that is that the response to this petition could
have been that Appendix K is what it is. It was
created in a timeframe when knowledge was what it was.
It's different now and we're heading towards a best
estimate methodology regulatory process. We don't
wish to confuse the two.
We retain the old one for convenience in
sustaining licenses that exist for those that think
that the burden of best estimate calculation is so
onerous that they can't do it but the agency is
committed to improvement of scientifically based
regulatory process. We won't get there by
modifications of an Appendix K type of regulatory
process. That would have saved you all of the staff
effort and I think it would get you to about the same
endpoint. It would ensure that you have some progress
in best estimate methodology.
MEMBER LEITCH: Well said, Virgil. I
second that.
MEMBER BONACA: Yes. I didn't make too
many comments. I was thinking how much work has gone
into this.
MEMBER SIEBER: I guess it's not obvious
that you are for sure going to come out with a benefit,
because there are things that aren't modeled properly.
There may be conservatisms in there, but perhaps new
test data and new insights would show that the
existing Appendix K isn't as conservative as
we think it is. To me I think that
there is if nothing else a moral obligation to
continue to work to make sure that whatever we come up
with that's new is acceptable and to verify somehow or
other that we haven't been running for the last 30
years in using models that are inappropriate.
MEMBER KRESS: To follow up on that just
a little the thing that I see missing from what I've
heard so far is this if I want to change the Appendix
K process to something with still an Appendix K but
have different things in it. What I'm really
interested in is reality. What actual peak clad
temperature or what kind of result do I get if my ECCS
performs as designed? I need a best estimate model to do that and I need
acceptance criteria.
The other thing I need which I haven't
heard much about yet if I change my Appendix K by
whatever I do, what is that going to do to the ECCS?
Are they going to go up in power? Are they going to ask
for changes? Then I take those changes to the ECCS
for the fleet of plants, by the way, because this is a
rule. I put it in my best estimate model and say if
I make those changes I still meet my acceptance
criteria for all the plants or the significant
fraction of them or whatever the process is. That's
the part I haven't seen. What would be the result of
the changes that you might make in the Appendix K in
terms of the actual operation of the plant and the
ECCS provisions.
MR. BAJOREK: Well when you're doing the
best estimate work one of the difficult things in
making a comparison of best estimate to Appendix K was
that immediately everyone wanted to use that new
margin to increase power. Everybody went from
Appendix K to best estimate, took I think it was
generally a 5 percent increase in power. FQs would
increase from 2.3 to 2.4, up to 2.5, close to 2.6. Hot
channel enthalpy rise (F-delta-H) factors went from 1.6 to 1.7.
It was almost immediately eaten up in increased power, either to the
core itself or to the hot assembly to give you some
better core management.
To a lesser extent it went into with
relief in tech spec to give you a wider window for
your accumulator levels and some things like that.
But virtually everybody used it to increase the power.
MEMBER SIEBER: Any other questions or comments?
MEMBER BONACA: The only thing I would
like to do is thank the presenters. I think it was an
outstanding presentation, a lot of information, a lot
of work in it, and it was clear.
MEMBER KRESS: Yes, I thought it was very
well done myself. We should pass that
on to those who are not here to hear that.
MEMBER SIEBER: I hate to say thank you
when the person isn't here but it's better than not
saying thank you at all. I think we've concluded all
our business. This meeting is adjourned.
(Whereupon, the above-entitled matter was
concluded at 5:00 p.m.)
