Materials and Metallurgy - January 15, 2002

Official Transcript of Proceedings


Title: Advisory Committee on Reactor Safeguards
Materials and Metallurgy Subcommittee

Docket Number: (not applicable)

Location: Rockville, Maryland

Date: Tuesday, January 15, 2002

Work Order No.: NRC-175 Pages 1-274

Court Reporters and Transcribers
1323 Rhode Island Avenue, N.W.
Washington, D.C. 20005
+ + + + +
JANUARY 15, 2002
+ + + + +
The Subcommittee met at the Nuclear
Regulatory Commission, Two White Flint North, T2B3,
11545 Rockville Pike, at 8:30 a.m., F. Peter Ford,
Chairman, presiding.

F. PETER FORD, Chairman, ACRS, Member

NOEL F. DUDLEY, ACRS Staff Engineer

FRED SIMONEN (on the phone)

I. Opening Remarks, P. Ford, ACRS . . . . . . . 4
II. Status of Pressurized Thermal Shock (PTS)
Technical Basis Reevaluation Project,
E. Hackett, RES. . . . . . . . . . . . . . . 8
A. Probabilistic Risk Assessment
(PRA) Group, RES
B. Thermal Hydraulics (T/H) Group, RES
C. Probabilistic Fracture Mechanics
(PFM) Group, RES
III. Modeling Process, RES. . . . . . . . . . . .31
A. Derivation of new screening criteria
B. 1999 White paper
C. Constraints, models, and uncertainties
for PRA, T/H, and PFM
IV. Oconee Results, RES. . . . . . . . . . . . 250
A. Dominant transients
B. Predicted vessel failures
C. Relation to existing screening
V. Recess, P. Ford, ACRS. . . . . . . . . . . 274

(8:37 a.m.)
CHAIRMAN FORD: The meeting will now come
to order. This is a meeting of the ACRS Subcommittee
on Materials and Metallurgy. I am Peter Ford,
Chairman of the Materials and Metallurgy Subcommittee.
The other ACRS Members in attendance are: Mario
Bonaca, William Shack, and Graham Wallis will be here
at lunch time.
The purpose of this meeting is for the
Subcommittee to review the status of the Pressurized
Thermal Shock Technical Basis Reevaluation Project.
In particular, the staff will present the initial
results of the reactor vessel failure frequency of
Oconee Unit 1 as calculated by the FAVOR (Fracture
Analysis of Vessels - Oak Ridge) probabilistic
fracture mechanics code.
The Subcommittee will gather information,
analyze relevant issues and facts, and formulate
proposed positions and actions, as appropriate, for
deliberation by the full Committee at the next meeting
in February.
Mr. Noel Dudley is the Cognizant ACRS
Staff Engineer for this meeting. The rules for
participation in today's meeting have been announced
as part of the notice of this meeting previously
published in the Federal Register on December 19, 2001.
A transcript of this meeting is being
kept, and will be made available as stated in the
Federal Register Notice. In addition, a telephone
bridge has been set up to allow individuals outside
the meeting room to listen to the proceedings.
It is requested that the speakers first
identify themselves and speak with sufficient clarity
and volume so that they can be readily heard. We have
received no written comments or requests for time to
make oral statements from members of the public.
The staff briefed ACRS Subcommittees on
the PTS Reevaluation Project in March, April, and
September 2000, and in January and July 2001. The
ACRS commented on the PTS Reevaluation Project in a
letter dated October 12, 2000.
In this letter, the ACRS stated that the
Project was well thought out and recommended that the
staff examine the implications of using a large early
release frequency (LERF) acceptance guideline based on
an air-oxidation source term on the acceptance value
for reactor pressure vessel failure frequency.
The staff has issued SECY papers
concerning the following:
•  development of the current PTS screening
criterion and the motivation for reevaluating
the criterion;
•  identification of PTS scenarios and estimates
of their frequency, thermal hydraulic boundary
conditions for the fracture mechanics analysis
of the reactor vessel, and the probability of
thru-wall cracks; and
•  identification of key inputs to the
probabilistic fracture mechanics analyses, such
as generalized flaw distribution, neutron
fluence, fracture toughness models and
embrittlement correlations, FAVOR code
development and the associated verification and
validation process, and calculation of PTS
thru-wall crack frequency.
We will now proceed with the meeting and
I call upon Mr. Michael Mayfield, of the Office of
Nuclear Reactor Regulation, to begin.
MR. MAYFIELD: Good morning. Staff has
come today, as Dr. Ford has suggested, to present a
briefing, a fairly detailed briefing as you can tell
by the size of the package, on the work on pressurized
thermal shock and reevaluation of the technical basis
for the PTS rule.
We have appreciated the time and effort
that the ACRS as a full committee and this
subcommittee in particular has invested over the last
year or two in supporting this activity. We have
taken this on as a major effort within the Office of
Research and we think it is providing some direction
for future risk-informed activities for rules like this one.
The uncertainty analysis has been a big
piece of what we've been trying to do. Professor
Apostolakis had asked us to present an example where
we walked all the way through the work and through the
uncertainty analysis.
One of the things the staff is prepared to
present to you at this meeting is an example of how it
all goes together. We have some preliminary results
from the Oconee plant and then we'll use those to
provide an example of how the whole analysis goes
together and hopefully clarify what we've been
promising to come tell you about for some time now.
With that I would like to turn it over to,
I guess, Mark Kirk or Ed Hackett to open the briefing.
We hope that we'll satisfy your interest, at least for today.
MR. HACKETT: Thanks, Mike. We have quite
a team arrayed here to do this briefing. To my right
is Alan Kolaczkowski. My name is Ed Hackett. I'm
Assistant Chief of the Materials Branch in the Office
of Research. Mark Kirk, Shah Malik, and Dave Bessette
are over here to my left, and there will be others.
I guess a couple of items of
administrative business. On your schedule if you have
that in front of you, we are proposing at this point
swapping Roman numerals III and IV. After this status
briefing, which will be largely a combination of
myself, Mark Kirk, and Alan Kolaczkowski, we will go
into the modeling process before we get into Oconee
specific results.
Also, I know, as the Chairman mentioned at
the opening of the meeting, there are a number of
people joining us on the phone line. To allow
opportunity for more to join in, we are not going to
identify the folks on the phone line until the break.
At the break point we will take some time to ask you
to identify yourselves.
At this point, also I guess I should
mention the purpose of this brief is -- I guess we
could go to the first slide, Mark -- is a status
briefing. At this point we are not looking at
requesting a letter from the committee. Obviously if
you feel a letter is necessary for some reason, that's
a different story but we are not coming requesting a
letter in any specific fashion on the project.
I guess it is unfortunate, as Mike
mentioned, a lot of this particular meeting was driven
by Professor Apostolakis. We will miss him today but
I'm sure he will make sure that whatever is presented
today we will be taken to task for -- well, we will be
conferring with Professor Apostolakis at some point.
He was a key driving force behind wanting to see us
step through the uncertainty process. We will miss
him today.
That's going down the list of meeting
objectives that you see here. A key part of
the meeting today is to describe the modeling process and
the treatment of uncertainty. That is a major departure from
what we've done in the past for those of you who are
familiar with this rule, 10 CFR 50.61.
The PTS rule is a rule that is basically
keyed towards the best estimate approach and there are
margins added to account for uncertainties in rather
a crude fashion compared to what we think we can do
today. One of the key goals in this project, which is
at least two to three years into the tech evaluation
at this point, was to try and treat those as we went along.
I think I can say it is the most
comprehensive treatment we've ever done in this
technical area. However, I think it would be fair up
front to point out that what you will see is the
uncertainty treatment has also been done on I guess I
would say not an equal basis.
There will be some elements of the project
where you will see uncertainty treated about as well
as we know how to do. Then there are other areas
where we weren't able to do as well for any variety of
reasons. In a lot of cases lack of data or lack of
appropriate data. You'll see that as we go through
the day and a half here.
Also, at the end of the status briefing
I'll try and leave you with our bottom line so you
don't make everyone here wait a day and a half to see
where we're going.
The Chairman mentioned the other big part
of the brief is to discuss current results and
insights from the analysis of the Oconee plant which
was one of four that we're doing in the project for
the tech bases and provide one detailed example of the
modeling in the uncertainty process.
With that, I guess we'll move on to the
basic overview slide of project status. The graphic
gives you really the three pieces that have gone into
this in terms of the technical or scientific areas.
PRA event sequence analysis is basically the
beginning part.
The thermal hydraulic analyses to describe
in detail the transients and then the integrating
piece which is the probabilistic fracture mechanics
analysis, No. 3 up there and ultimately going down to
yielding a yearly frequency of thru-wall cracking, which
is the way the previous rule was also set up.
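The integration just described (event frequencies from the PRA, conditional failure probabilities from the PFM analysis of each thermal hydraulic transient) reduces to a frequency-weighted sum. Here is a minimal sketch; every frequency and probability below is an invented placeholder, not a value from the Oconee analysis.

```python
# Sketch of the PRA -> T/H -> PFM integration described above.
# All frequencies and conditional probabilities are made-up
# placeholders, not values from the project.

# PRA: yearly frequency of each binned transient (per reactor-year)
sequence_frequency = {
    "stuck_open_relief_valve": 2.0e-3,
    "small_break_loca": 5.0e-4,
}

# PFM result for the T/H boundary conditions of each bin:
# conditional probability of a through-wall crack given the transient
p_twc_given_transient = {
    "stuck_open_relief_valve": 1.0e-5,
    "small_break_loca": 4.0e-4,
}

def through_wall_crack_frequency(freq, p_fail):
    """Yearly through-wall cracking frequency: frequency-weighted sum."""
    return sum(freq[k] * p_fail[k] for k in freq)

twcf = through_wall_crack_frequency(sequence_frequency,
                                    p_twc_given_transient)
print(f"TWCF = {twcf:.1e} per reactor-year")  # -> TWCF = 2.2e-07 per reactor-year
```

The finer the transient binning, the more terms in the sum and the less each bin has to be bounded conservatively, which is one reason the refined binning discussed later drives the result down.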
This is the way we set up the project for
the technical evaluation so we have developed an
approach to do that. That is described in the
graphic. I think I basically said everything under
the second bullet.
Most recent accomplishments include the
finalization -- maybe I shouldn't use finalization.
The FAVOR code is now a working code that includes
what we think are all the updates that were necessary
to start the runs.
That was completed largely thanks to some
pretty heroic efforts on the part of the Oak Ridge
National Laboratory, most notably Terry Dickson and
Richard Bass and then Paul Williams. That was
released to the public as version 1.0 in October.
There will be, no doubt, refinements and enhancements
as we go along. That was a major piece to enable the
rest of this.
We are estimating the risk of vessel
failure for four plants and attempting to generalize
from there. We have two Combustion Engineering
plants, one Westinghouse plant, and one B&W plant that
you can see listed there.
Right now the Oconee analysis is pretty
much in a draft form. It has been completed to a
draft state in all the technical areas complete with
an estimation of the thru-wall cracking frequency that
is a draft, preliminary effort at this point that
we'll describe.
The other three plants you can see their
status there. We have in a lot of cases, I guess I
should mention, the licensees in the industry have
been a very key part of this effort. A lot of the
inputs on the PRAs and the thermal hydraulic inputs
were from the licensees in the industry on a volunteer basis.
This has been -- the project couldn't have
been done without that type of assistance. A lot of
credit is due to the industry participants. Bob Hardy
this year chaired the reactor vessel integrity group
within the Materials Reliability Project. That has
been extremely beneficial cooperation along the way
for us. That is the overall status of the project
right now.
We are on schedule to complete the
technical basis. We are looking at a schedule to
complete the technical basis in 2002. Hopefully we
would be looking at embarking on rulemaking somewhere
after that.
Work remaining, on the next slide. We
have discussed some of this with the committee
previously. A fairly major effort is underway on QA
that is pervasive to the project. That has been
underway simultaneously with the technical effort most
of the way. A particularly large effort is
devoted to QA of the FAVOR code, validation and
verification of the FAVOR code.
Finishing the internal events analyses for
the four plants, that will hopefully complete in the
first half of this year and then we'll have a fair bit
of integration to do to go from there.
Another element we could spend a few
minutes discussing I will come to in a minute, when I
show you that the results for the Oconee plant are going
the way we had hoped that the project would go. The
risk is looking like it is less than expected, or less
than we had looked at in the past.
One of the things that was a factor in the
project -- Nathan Siu couldn't join us today
unfortunately but Nathan was one to flag this up early to
the working staff on the project and also to the ACRS.
The notion of external event risk contribution has not
really been addressed previously in PTS evaluations
largely because the internal events dominate so strongly.
In this case it's looking like that risk
number is coming down low enough that I think we will
have to be considering external events. We have
initiated work in that area but that is something we
have not embarked upon before so that is a bit of a
departure for us in terms of that effort.
Under that we're talking about things like
fires, for instance, natural hazards and how they may
contribute either in tandem or singly with some of the
PTS initiators. That is something we will spend at
least a little bit of time discussing today, too.
Then there is the overall integration of
the results and the risk criteria. Those of you who
have followed this project know that the regulatory
guide on the plant specific analysis which is
Regulatory Guide 1.154 is really keyed to a risk
criterion of 5E-6.
We are not coming here to discuss that
today. That will be the subject of a separate
meeting. We are in the process, led by the PRA branch,
of working on a SECY paper that will outline the staff's
approach to what that risk criterion should be.
I guess in short right now it's not clear
that will stay at 5E-6 or move to some other number in
that range. That remains to be determined and will be
the subject of the SECY paper and some potential
discussion and debate with the Commission itself. There
is some work that remains.
CHAIRMAN FORD: If I could ask a couple of
questions. In some of the earlier correspondence on
this whole reevaluation project, mention was made of
the containment and that it would not be covered in the
current work scope, the current paid-for, budgeted work scope.
Since you are coming to the end of the
project by the end of this year, do you have any
thoughts as to where you stand on this or are we going
to lower the frequency down to E-6?
MR. HACKETT: These are really good
questions, Peter. As you know, they have been the
subject of a lot of debate. I'm probably not the --
Nathan Siu would probably be the most qualified person
on the staff to get into that, Nathan or Mark
Since they weren't able to be here with us
today, the staff's current thinking on that is to stay
nominally with the criteria that exist at the moment
which is a CDF-based approach. Also based on, as
we'll get into extensively today, RTNDT as an index.
At least the staff's thinking going into
this is we will retain those features. As you
mentioned, we are pretty late in the game to be
looking at changing horses significantly, but that
will remain an effort to outline in the SECY paper.
When Nathan put together an outline, this
is vintage about November of 2001, we were looking at
pursuing several options. One of them would have had
a departure that would have included addressing LERF
and containment integrity.
We at this point would be -- I think I can
speak for maybe myself and a few others here -- not
looking at pursuing that path right now as a
recommendation, but that remains to be debated at
higher levels. Obviously it's not my decision as to
how we go forward with that.
Or it may be that we go forward with a
version that is RTNDT/CDF-based now and continue work
looking at the LERF element and see if it's warranted
that we make a change to the rule in the future along
those lines. I think that is the way I would answer
it. I don't know if anybody else has anything they
would like to add to that.
CHAIRMAN FORD: Actually, you're an ideal
feed man because the other question I had was related
to what you mentioned about what's going to happen in the
future from the regulation point of view.
Given the fact that two of your reactors
that you've got on your study, Palisades and Beaver
Valley, if you are using the current screening criteria,
you've only got a couple of degrees, 1 degree, 2
degrees difference between RTPTS and current screening
criteria. Those problems for those particular
reactors are very current.
Fort Calhoun coming up for a license
renewal is also very close and that's this year. What
is the timing? You say that this is going to end.
The technical part is going to end this year and then
you are going to some sort of regulatory discussions.
CHAIRMAN FORD: Is there a lack of urgency?
MR. HACKETT: There is in all fairness
less than there was. There was a time when we started
this project where it was to a large degree being
driven by a number of events, a number of things
involving the technical accomplishments that have made
it possible is one thing.
But also being driven by a lot of the
events at the Palisades plant where at one time -- I
think that maybe representatives from Palisades are
here on the line -- at one time their PTS screening
criteria date looked like it was going to be in the
range of 2003/2004.
They have since made submittals to NRR
that have taken them, my understanding is, to 2011 and
those were largely arguments based on fluence and
dosimetry is my understanding. That urgency went away
to that degree.
However, you raise a key point that there
is still a fair bit of urgency in this project
because, as you mentioned, people are making decisions
for their plant's futures based on, in large part,
whether or not they are going to have a viable reactor
vessel for that renewal period.
Those decisions are very here and now so
we take that very seriously from a schedule
perspective. But to go further: we are anticipating
completing the technical basis in 2002, and the idea is we
would start a rulemaking process in parallel with
that. That would be Cindy Carpenter's branch in NRR
and they have prepared resources for 2002/2003 time
frame to undertake rulemaking in this area.
Again, schedule impact can be critical
because we are probably talking a rulemaking that is
two years in the doing. That would be from, say, this
fall. We would be into 2004/2005 time frame before we
would have a rule that would be on the books in 10 CFR.
In the meantime, hopefully the technical
basis would be there so that people could feel
encouraged that it's going a certain way. It's a
little premature now to say just based on the results
from one plant that everything is going to come up
nice and rosy. At the moment that's the way it's going.
MR. MAYFIELD: This is Mike Mayfield. I
think it's worth pointing out that while indeed there
are some fact-of-life changes that have happened with
some of the plants that maybe take some pressure off,
there still remains significant interest from, at
least, at the office level.
Ashok Thadani is keenly interested in this
rule or this activity and making sure that it stays on
track. That part of the pressure has not gone away.
He is focused on it. Some key members of the project
team did get distracted from some of the staff's
efforts dealing with the September 11 events. We are
working to try and recover from that.
There was an impact that we are trying to
deal with now. Mr. Thadani and Mr. Zimmerman remain
focused on this rule and keeping it on the schedule,
at least as much as we possibly can.
MR. HACKETT: I guess with that we'll flip
to the bottom line. This is what we will come to after
a day-and-a-half's worth of presentation tomorrow.
We just thought we would hit you with that up front to
let you know what's coming. We ended up in a lot of
debate over how to present this so it's a work in
progress. It does say preliminary on it and I would
encourage people to take that seriously.
The current risk criterion from the
regulatory guide is set at 5E-6. You also see two curves
on there that describe the results of the project for
Oconee that are basically looking at the RTPTS versus
the thru-wall cracking frequency where you have two
key junctures.
You're looking at it after 40 years of
operation and then you're looking at it if you were to
project ahead to the actual screening limits where
they are set now, 270 and 300.
What you can see over on the left-hand
side there are some pretty exciting results.
Approximately 4 orders of magnitude lower than the
nominal risk criterion after 40 years of operation and
2 orders of magnitude lower at the current screening limits.
This is a pretty exciting result for us.
Again, we are hoping this holds as we go through some
more advanced and refined Q/A of the codes and the
data that produce this result. You will hear a lot
about that and the propagation of uncertainty as it
goes through this analysis.
I wanted to get this to you up front
because this is what we went into the project hoping
would be the case. Philosophically the idea was that
there were a lot of conservatisms, or at least
significant conservatisms embedded in the current rule
and that with the aid of more accurate analyses and
evaluations we would hopefully be able to remove some
of that.
I guess what I would leave you just for
this opening is it does look like it's going in that
direction right now, so we are pretty happy with that.
PARTICIPANT: Where is 60-year life?
MR. KIRK: It's the next dot up.
CHAIRMAN FORD: Bill, he's off the
computer -- the microphone. What was the answer to the question?
MR. KIRK: The answer is it's the next dot
above 40.
MEMBER BONACA: Okay. So just above one.
MR. KIRK: Not much further up.
CHAIRMAN FORD: I see. And you mentioned
that obviously this is because of excessive
conservatisms in the current code. Does it turn out
to be one significant conservatism?
MR. HACKETT: You're a good straight man.
The next slide is our attempt to go into that.
MR. KIRK: Should I give him the $20 now?
MR. HACKETT: Wait. We debated this one
a lot, too. That question has occurred to a lot of
folks as we've been through this. What is the key
driver for this? In the past it has been the PFM
inputs, the flaw density and distribution. That has
been a key driver.
But I think at this point we are not able
to say exactly in terms of quantitative assessment X
percentage came from this area, X percentage came from
another area. We will hopefully get to that point.
What we tried to do on this slide is just
show you the trends and sort of how they hit here with
the green arrows showing items that would tend to
reduce the conservatism or decrease the risk, and the
red arrows showing areas that might be or would be
tending to increase the risk.
The only three upward arrows on here are
under PRA: acts of commission that were
considered, where operators did the wrong thing at the
wrong time, and external events, which, when they are
considered, would obviously increase the risk overall.
We have not gotten into that to any significant degree yet.
Some nonconservatisms removed in the
arrest and embrittlement models also have a slight
upward trend there, too. But there are many more
down green arrows here than there are upward red
arrows. The overall effect on the project has been to
reduce the risk and the conservatism.
Let's see where I wanted to focus here.
PRA I think, and Alan will get into this, a lot of the
PRA effort goes into much more refined binning. Alan
will talk about that in detail. The previous binning
of some of the event sequences was much coarser than
what has been done in this project. That has been a major
improvement.
Under PFM there was a significant
conservative bias in the toughness model that has been
removed or mitigated. Most of the flaws now we find,
and this goes with expectation from some of our
experimental work, the flaws are embedded. They are
not surface breaking. In fact, we have not seen
surface breaking flaws in the actual experimental
evaluations we've done from real vessel materials.
That's been a big factor.
Also, and you'll see details of this, we
are now looking at spatial variations of the fluence,
whereas before we assumed that all the materials were
at the maximum fluence and the vessel was made of the
most brittle material. Now we are considering the
spatial map in the vessel belt line region. That is
a significant aid.
I guess with that --
MEMBER SHACK: On the thermal hydraulic
sequences, is that just basically the binning again?
You are not assuming that all the horrendous thermal
hydraulic events occur?
MR. HACKETT: Right. I think largely,
Bill, it's mainly binning that has been the
improvement there. Dave?
MR. BESSETTE: I think that's right.
There hasn't been that much change in the way we
predict these transients. It is the fact that we can do a
lot more sequences than we could 15 years ago.
MEMBER BONACA: What are you talking
about? Operator action credited in the sequences, you mean?
MR. BESSETTE: Yes. We ran sequences in
which the operator action is credited and that makes
a big difference a great deal of the time.
MR. HACKETT: That's one I should have
highlighted, too. Dr. Bonaca raises a good point,
especially for Oconee, which is a B&W plant. With the
once-through steam generators there were previous sequences
that, I don't want to say dominated, but were very
significant, which included things like main steam line
break and steam generator tube rupture.
By virtue largely of crediting operator
actions much more significantly, you'll see when Alan
goes through his presentation that those have come way
down in terms of contributors to PTS. That has been
another major improvement.
MEMBER BONACA: And you will discuss the
uncertainties later on.
We'll have a feeling for
what dominates uncertainty here if you have some
specific --
MR. HACKETT: Specific transients and
MEMBER BONACA: Specific elements of this.
You are showing us improvements and, one that is in my
mind, what is driving uncertainty more than other
things. I mention, for example, operator action would
drive uncertainty.
MEMBER SHACK: They only gave us point
estimates before. Presumably we are going to see
uncertainty bands.
MEMBER BONACA: I understand that.
CHAIRMAN FORD: Ed, could you go back to
the previous graph? It's more for my edification.
There's another way of looking at it:
if you want to decrease the cracking frequency down to
10 to the -6, taking into account uncertainties on the
containment, could you just move the screening
criterion way up to 300? I'm just eyeballing this
thing. 300 or 350 or 375?
MR. KIRK: Where you -- well, how we take
this information and turn it into a screening criterion
will be an interesting piece of technical work. At
least in looking at it this way, let's be clear that
reflects an assumption that we would intend to look at
it this way.
Looking at some materials-based value on
the bottom, an RTPTS-type value, versus predicted
through-wall cracking frequency, there are two things
that you can change on this plot. You
can change the position of the vertical line -- the
horizontal line, I'm sorry, as you pointed out, but
you can also, as Bill was saying, he wanted to see
some distributions.
I'm thinking, "Oh, God. I've got to go
find some distributions." All the lines you'll see
plotted today are 95th percentile not reflecting any
policy decision, just based on normal scientific
practice. What it does suggest is that there are
distributions behind all of these lines. Of course,
that means that 95 percent of the failures are below
the line.
Where the screening criterion comes out is
really based on a combined decision of where the
horizontal line goes and what percentile you put in.
As a materials person it would be my hope that we
could make those two decisions and then do the
screening based on best estimate values of the
horizontal axis variable.
You could keep the risk criteria where it
is and pick a 99.999 percentile. I'm not suggesting
that is a good thing to do, but achieve the same
result as a very low risk criterion and a 25th percentile.
I think those are the decisions and if we
think about them in that way, we can think about them
in a rational scientific way as opposed to letting
ourselves be led around by what the numbers actually say.
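The screening decision Mr. Kirk describes, a horizontal risk line combined with a chosen percentile of the thru-wall cracking frequency distribution, can be sketched numerically. The exponential trend and every constant below are illustrative assumptions, not results from the FAVOR analyses.

```python
# Sketch of setting a screening limit from a risk criterion and a
# percentile curve. The trend and constants are invented for
# illustration; they do not come from the project.
import math

def twcf_curve(rt_pts, percentile=0.95):
    """Hypothetical percentile curve of TWCF versus RT_PTS (deg F)."""
    base = 1e-10 * math.exp((rt_pts - 200.0) / 20.0)  # made-up trend
    return base * (1.0 + 4.0 * percentile)            # crude spread

def screening_limit(criterion=5e-6, percentile=0.95):
    """Scan RT_PTS upward until the percentile curve hits the criterion."""
    rt = 200.0
    while twcf_curve(rt, percentile) < criterion and rt < 500.0:
        rt += 1.0
    return rt

# A stricter percentile at the same criterion moves the limit down,
# just as a lower criterion would.
print(screening_limit(percentile=0.95))
print(screening_limit(percentile=0.50))
```

Under these assumptions the two knobs trade off against each other, which is exactly the combined decision of horizontal line and percentile that the discussion turns on.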
CHAIRMAN FORD: I guess this is a topic
that will undoubtedly come up further than today. If
you're talking about one sigma type of methodology,
should you not be -- if you go for a 95 percentile --
MR. KIRK: Oh, yes. I see.
CHAIRMAN FORD: -- the 95 percentile and
given the urgency of this -- not the urgency of this
problem, the potential severity of this problem,
shouldn't you work to a six sigma, or 99.-whatever,
percentile?
MR. KIRK: That's a very good point. By
looking at it in this way, it enables us to think in
that manner and make a decision consistent with the
severity of the accident if it did, indeed, occur.
Terry Dickson knows better what the tails look like on
these distributions than anyone. Certainly the
percentile that's picked should be appropriate to the
severity of the accident.
MEMBER SHACK: Since we truncated .1 and
99.9, we'll have a hard time probably going to it.
CHAIRMAN FORD: Sorry. We have talked
enough at this point.
MR. KIRK: Yes.
MR. HACKETT: I guess a couple of comments
I'll add just in closing of this sort of opening
session here. This has been a really focusing event
for the team. I've got to say the team has worked
really hard on preparing for this presentation.
I think it was very good to have Professor
Apostolakis point us in this direction because it
pointed out some things both in a good and bad sense
where we needed to spend some more attention. It has
involved all three disciplines in the Office of
Research and a lot of cooperation with the industry
and with NRR.
That has come off remarkably well. It's
fair to say it's not something that we're used to
doing. We're getting better at it but I think it's
about as good as it's been on this project.
Maybe just as a last anecdotal comment, a
lot of credit to especially Mark Kirk for bringing
this together including his facility with PowerPoint.
You'll come to appreciate that he made all this
possible today because the team was here fairly late
last night getting things together.
MEMBER SHACK: You just need a printer
that keeps up with his skills.
MR. HACKETT: And you need to keep the
files smaller. Also slides courtesy of Kinko's because
the NRC reproduction facilities weren't able to
accommodate our tight schedule here either. A lot of
credit to the team and hopefully we can launch now
into more detailed discussion.
I'll turn it over to Mark.
MR. KIRK: Okay. Thank you. Actually, I
only have a few slides here and then we go quickly to
Alan. We are in the part of the agenda here, I think,
this was previously item 4 on your agenda. Actually,
items 4, 5, and 6 on your agenda. They've gotten
moved up and we'll do the Oconee results after this.
Just to steel yourselves, this is probably
going to take us about three hours to walk through.
As always, questions invited along the way so that we
can get our voices back.
The purpose of this phase of the
discussion is to work through the overall modeling and
uncertainty process that we have undertaken in this
project. In terms of what you're going to see here,
the first two bullets are like one slide each and then
the last bullet is approximately 30 slides per
discipline so there is an uneven weighting here.
I would like to discuss the guidelines
that we establish for doing uncertainty models in this
project and talk about our intentions regarding the
material screening criteria; discuss the interaction
and integration of the three technical disciplines;
and then, of course as everyone is aware, the concept for
model development and uncertainty treatment was
established in 1999 by Nathan Siu.
You've been briefed on that already.
Today what we're going to focus on is what actually
happened. As Ed has already pointed out, you'll see
that sometimes theory meets practice.
Other times practice falls a bit short of
theory but the aim of this presentation is to present
this with a good degree of candor so you get a good
picture of what happened and, indeed, what did not.
The guiding principles that we started
with were sort of those laid out by Nathan. The
methodology that we've adopted in the PTS reevaluation
project requires an explicit treatment of
uncertainties across all of the technical disciplines.
As Ed pointed out, relative to where we
are now in 10 CFR 50.61, that is a bit of a departure
where uncertainties tend to be -- it's like relatives
you don't like. You tend to bury them and hide them
in parameters. Well, they are all out here in the
open now and we're going to talk about them.
We classify uncertainties as being either
aleatory or epistemic. Those are my two new words for
last year. Now that we have identified them and given
them a name, we can put a number on them and then put
them in the FAVOR code and Terry swims laps while the
FAVOR code is running.
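One common way to realize that aleatory/epistemic separation in a sampling code (this is a generic two-loop sketch with invented distributions, not the FAVOR implementation) is to sample epistemic, state-of-knowledge variables in an outer loop and aleatory variability in an inner loop, so the spread of the outer-loop results displays the epistemic uncertainty:

```python
import random

# Generic two-loop Monte Carlo sketch (invented numbers throughout):
# outer loop = epistemic uncertainty (what we don't know about the
# material), inner loop = aleatory uncertainty (event-to-event scatter).

def conditional_failure_probability(toughness_mean, n_inner, rng):
    failures = 0
    for _ in range(n_inner):                         # aleatory loop
        driving_force = rng.gauss(60.0, 15.0)        # random applied load
        toughness = rng.gauss(toughness_mean, 10.0)  # specimen scatter
        if driving_force > toughness:
            failures += 1
    return failures / n_inner

rng = random.Random(1)
estimates = []
for _ in range(50):                                  # epistemic loop
    mean_toughness = rng.gauss(100.0, 20.0)          # uncertain mean property
    estimates.append(
        conditional_failure_probability(mean_toughness, 1000, rng))
# The spread in `estimates` reflects the epistemic uncertainty alone.
```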
The second point on this slide is that our
intent in where we are going with this is to, of
course, reset, or hope to reset, the material
screening criteria for vessel embrittlement which is
right now called RTPTS as expressed in 10 CFR 50.61.
Our hope has been that in going through
this we won't be requiring the licensees to make any
new measurements. We will be able to use the advanced
state of knowledge and computation that has developed
over the last two decades to be smarter
about the screening criteria, but still express it in
the same way relative to NDT, chemistry, and Charpy
data. Everything is looking good with regards to that
intent right now, I should say.
CHAIRMAN FORD: That last sentence, no new
material measurements -- when you're doing this project,
you are obviously --
MR. KIRK: Certainly, yes. As associated
with the project, the new measurements have been
predominately focused in the flaw area. We have,
indeed, collected together a lot of toughness data but
that's not necessarily new measurements. I should say
no new surveillance measurements by the licensee.
MR. HACKETT: I guess including a couple
of comments there, too, including things like the
inspections. We've had those discussions. At least
the idea going into the project was that it wasn't
going to result in a new level of inspection
technology that would be required on vessel welds.
CHAIRMAN FORD: The way it's written right
now hits a raw spot with me in that you didn't do
any more experiments in this project, and that's not
what you meant.
MR. HACKETT: No, not at all.
MR. KIRK: Actually, we softened that
statement from some that we had previously. Another
example could be, for instance, the work of Ernie
Eason and Bob Odette and others on the embrittlement
correlation flagged up that phosphorus looks to be a
contributor again as it was at the very beginning.
That then raises a question: how does the
industry address that? Is that a new measurement they
need to make, or are there default values or other ways
you can address that?
And it looks like the answer is there will
be -- there are alternative ways of addressing that.
That is the kind of example. That did flag up out of
the technical assessment and it will have to be dealt with.
MR. KIRK: Okay. Actually, we've used
this graphic before and you'll probably get sick of it
by the end of the day. This is the very highest level
view of what's going on in this project. Like I said,
we will be referring back to it just so you can see
where you are.
We, of course, start with the PRA event
sequence analysis. That gives us two things, both
sequence definitions and sequence frequencies. The
sequence definitions go into the thermal hydraulic
analysis. We use the RELAP code to compute pressures and
temperatures versus time from those. Those go into
the PFM analysis which magically pops out a
conditional probability of vessel failure. That is
then combined in a deceivingly simple matrix multiply
with sequence frequencies to get the yearly frequency
of through-wall vessel cracking.
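As a hedged sketch of that final combination step (the numbers below are invented for illustration, not the Oconee results), the yearly through-wall cracking frequency works out to a frequency-weighted sum over the binned sequences:

```python
# Illustrative sketch only: the "deceivingly simple matrix multiply"
# combines each sequence's yearly frequency with its conditional
# probability of vessel failure (CPF) from the PFM analysis.

def through_wall_cracking_frequency(frequencies, cpfs):
    """Sum of (sequence frequency, events/yr) x (conditional
    probability of vessel failure) over all binned sequences."""
    assert len(frequencies) == len(cpfs)
    return sum(f * p for f, p in zip(frequencies, cpfs))

# Made-up values for three binned sequences:
freqs = [1e-4, 5e-6, 2e-7]    # events per reactor-year
cpfs = [1e-4, 1e-2, 0.5]      # conditional failure probabilities
twcf = through_wall_cracking_frequency(freqs, cpfs)
```

In practice the feedback described below means this multiply gets rerun whenever the bins, frequencies, or CPFs are revised.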
As we were dry running this yesterday my
colleagues admonished me to point out that this is
shown as a deceptively simple and linear process here
and it's not either of those. Which is to say, we
just don't pass through this once and call it a day.
If we did, we would have met our schedule a lot sooner.
There's an awful lot of -- I shouldn't
say awful. Awful is a bad word. There's a lot of
feedback that goes on at many points in this process
as we get our results and it's just the normal
engineering process of saying, "Well, that doesn't
quite look right. What did we do there?" We feedback
and we do it again.
As Ed pointed out on the status, and
you've seen some of the results, and you'll see some
more of those results today. We've got what we feel
like is a good draft on Oconee. Having said that and
going through that draft, we found certain things that
clearly need to be redone and will be redone but
this is the overall process.
Now what we would like to do is to go
through each of these three main elements, the three
blue boxes, PRA, thermal hydraulics, and PFM and for
each element present you with a presentation that in
total describes how we implemented our model
development and uncertainty treatment procedures.
We are going to try to stick to this
format as much as we possibly can to make it clearer,
but in each of these three blue boxes we're going to
start by talking about whatever constraints were
imposed on the element or fundamental assumptions we
have to make at the start. Those types of things
would tend to constrain what we did.
We will then break down those three
deceivingly simple blue boxes into many more boxes or
lines or what have you. There is an awful lot hidden
in there. It's kind of like Pandora's Box. We will
then discuss the process used for model building, if
indeed models were built, talk about uncertainty
treatment, and we'll try to wrap up each presentation
by focusing on significant changes since the 1980's
evaluation providing a bit more meat to Ed's down
green arrows and up red arrows that you saw earlier.
With that, unless there are any questions --
CHAIRMAN FORD: I've got a very, very
general question.
MR. KIRK: Sure.
CHAIRMAN FORD: It's more about best
practices for the future, along with the thermal
hydraulic code Research is developing. This is an
extremely multi-dimensional, multi-disciplinary
exercise. We are presuming it's going to work. We
are positive.
Do you have any lessons learned as to how
you make such a thing work when you're talking to
people on the west coast, the south presumably, in the
east, the west?
MR. HACKETT: We have enough challenges
just within 2 White Flint or just at the NRC
headquarters complex. I guess it's kind of a
philosophical question.
CHAIRMAN FORD: It's more a management
question which will increasingly become if not
philosophical, real management.
MR. HACKETT: It's interesting. As I
mentioned in the overview when we opened things up, we
have been in the past, I think, guilty of being more
compartmentalized here at the headquarters operation and
with the contractors, in that we do our probabilistic
fracture mechanics in our branch while David, Roy,
Alan, and others are doing PRA and thermal hydraulics.
There wasn't in the past, quite a while
back now, as much cross-talk as there needed to be.
This project has made that necessary. Mike Mayfield
could give the best history of this you could possibly get.
Mike has made a number of stabs at trying
to do this over the last decade. We have met with
some significant challenges in the past because of
failure in the interactions between the technical disciplines.
I think this is the first time it has
really come together to work as well as it has.
Frankly, I think it's out of necessity. We were just
not able to make -- we can't run FAVOR without
pressure temperature traces and event sequences and
combine these into meaningful numbers for the rule
without the kind of cooperation we've had.
I think it was those features plus an
office director, Ashok Thadani, who got behind us in
a very forceful way to have his three divisions take this
on as a priority. It is one of the top priority
projects within the Office of Research.
We've also had NRR supporting us all along
the way. And then the industry, too. I think that
was an element that was missing when we tried these
things previously. We did not really work it
cooperatively with the industry. We did this time
from the very beginning and I think that's been
another big factor. At least from my perspective
those have been some major influences on the success
we've had so far.
MR. MAYFIELD: This is Mike Mayfield. I
think there are some things we have done differently
from a management point of view. As Ed mentioned,
this was taken on as a team activity involving inputs
from all three of the divisions in the office. We had
support from all three division directors.
The division directors get basically a bi-monthly
briefing from the team on where they stand.
The division directors meet separately on a regular
basis and some of these top-tier programs that have
high visibility are discussed.
We budget as a team activity rather than
stovepiping the way we used to. The annual budget
input is put together on an issue basis. There have
been a number of changes like that that have
contributed to making this go forward. I think they
are lessons learned from some failures in the past
that Ed mentioned and what is looking like a
successful project right now.
We have adjusted. We've made some budget
adjustments as we go along. By looking at it from an
overall project standpoint rather than individual
pieces I think we have been able to keep it moving.
MR. HACKETT: I think Roy had a comment.
MR. WOODS: This is Roy Woods. I'm
involved with the PRA part of this. I heard Mike, and
all that's true and very vital. But I also have to
point out to you that the three of us at the lower
level -- PRA, myself; thermal hydraulics, Dave;
fracture mechanics, partly at least, Mark here -- we
meet sort of impromptu almost every morning now.
I wander down to where their offices are
and we've had some of the more meaningful and
important exchanges of technical information or the
way something is going or something that the other
group needs to know about.
My point really here is in addition to
what Mike was talking about where you made it a high
level, director level, the workers also have to talk
about the details of what's going on. I think that's
been very important.
MR. KIRK: Coffee mess discussions have
gotten us very far in this project.
MR. HACKETT: And it's Alan's turn.
MR. KIRK: With that, yes. We'll go to
the detailed discussion of PRA models and uncertainty
and I'll turn the presentation over at this point
largely to Alan Kolaczkowski of SAIC and Roy Woods of
the staff.
MR. KOLACZKOWSKI: Alan Kolaczkowski.
First of all, thank you very much for the opportunity
to, again, present the status to the committee
members, etc. What I'm going to go over now is
primarily address the key modeling aspects and the
treatment of uncertainty in that first box that we see
here on this diagram which is where we first define
the sequence definitions that we're worried about.
Obviously a major product of that also is
coming up with an estimate of the frequencies of those
accident sequences that could represent a serious PTS
challenge that is later combined with the conditional
probabilities of vessel failure towards the tail end
of the process to actually come up with estimates of
the yearly frequency of thru-wall cracking.
This is, if you will, just another
representation really of the same thing in the
previous slide just shown perhaps in a little bit more
detail. Unless you have specific questions, I'm not
going to go through this in any detail but, again,
this is meant to represent really what is going on
throughout the entire project.
The PRA aspect of this, which is sort of
the beginning part of the analysis through a primarily
event tree modeling, we define what the potential PTS
challenge accident scenarios could be and come up with
the frequencies of those. Of course, those
frequencies have uncertainties associated with them
and we will be addressing that aspect of it later.
Those are binned: where sequences are
likely to represent, if you will, very similar plant
responses in terms of the way the plant will respond
thermal hydraulically, those
sequences are binned into what we call thermal
hydraulic bins and then RELAP runs, etc., are run on
those sequences to actually come up with the pressure
temperature profiles for those sequences.
Now, again, while I'm presenting this in
a very serial fashion, as has already been mentioned
by Mark, this is quite an iterative process. You do
some binning and you find out you've been too coarse
or whatever and maybe, in fact, you've got to break
those bins down into others and so you go back and you
rebin those sequences and then more thermal hydraulic
runs are made so that we're not binning things quite so
grossly where it looks as though something really does
make a difference.
So, again, while we are presenting this in
a very serial fashion, in fact, it's quite an iterative
process and you go through this over and over and over
again to keep refining the work to try to, if you
will, remove many of the conservatisms that were
certainly part of the original 1980's work.
MEMBER BONACA: Now, here you include the
operator actions in the binning process?
MR. KOLACZKOWSKI: The operator actions,
of course, are actually some of the events that are in
the event scenario so the operator actions get defined
as part of the event scenario. Hopefully that will
become clearer as we move on to some of the modeling
process. Surely by the time we get to the example
sometime either very late today or tomorrow, hopefully
we will demonstrate for you very clearly exactly how
the operator actions --
MEMBER BONACA: The reason why I asked,
you know, we're talking about the managing of this
effort. I'm sure you had interactions with the plant
or whatever.
MR. KOLACZKOWSKI: Yes, we did. I don't
know if the Oconee staff are listening but I think
they will verify that I probably asked them too many
questions too many times. Again, they responded to
all our needs and that was quite an interactive
process going on with Oconee, for instance.
MR. KOLACZKOWSKI: As is going on with the
other plants, Palisades, Beaver Valley, and so on.
MEMBER SHACK: Were there any physical
changes in the plants that affected the event
sequences that you looked at?
MR. KOLACZKOWSKI: You mean from the '80s?
MEMBER SHACK: From the '80s, yeah.
MR. KOLACZKOWSKI: Well, I mean, to some
extent certainly, yes. Oconee being a B&W plant has,
of course, an integrated control system feature.
Oconee has gone through an effort of updating that ICS
system from the system the way it used to look back in
the '80s time period. That ultimately is reflected
in what the likelihood is of the integrated control
system inducing faults that could cause PTS
challenges, etc. We had to look at that. That is
just an example. Probably if I thought about it a
little harder I could think of some more.
Clearly one of the things that we had to
do and did do in order to properly represent the
potential PTS challenge scenarios for Oconee was get
the latest information on what the plant looked like,
how it's operated, what are the procedures they are
using, what's the operator training, to what extent
are they sensitive to PTS challenge, etc., etc. All
of that needed to be, and was, done.
That's why we had to, in fact, interact
with the licensee considerably to make sure that our
model was indeed reflective of the way the plant is
designed now, the way it's operated now, the training
the operators go through, etc. There was a lot of
work done in that area. That's just an example. If
I thought about it, I could probably think of others.
So I'm going to be talking about, again,
what's happening sort of in that first box in terms
of, at least, some of the major modeling features and
what we did in terms of handling, at least, the key
uncertainties that we need to worry about, that
portion of the analysis.
Okay. As the outline suggests, one of the
first things we want to address is limitations or, if
you will, constraints that we sort of impose on
ourselves in terms of some limits that we put in terms
of what we are going to analyze.
Also I want to point out a few key things
that were considered that are important to ultimately
assessing what the PTS risk is from the PRA portion of
the analysis.
Under limitations or, if you will,
constraint side, I think we have to recognize that we
are using an event tree PRA type of modeling structure
to represent what those accident scenarios are. Along
with that comes all the typical underlying PRA
assumptions. Things are binary: for example, a TBV,
turbine bypass valve, is either going to stick open
completely or it's going to reclose like it's
supposed to.
Generally the model does not address a case where it
sticks open 30 percent or something like that. On the
other hand, I should say that towards the end of this
process, if we found out that something like that was
very important and, in fact, it is in a few sequences.
The example that we'll get to at the end
of this entire presentation will point that out, as a
matter of fact, where we did have to go back and treat
the fact, well, is the valve only 30 percent open or
is it 80 percent open, etc., as part of the
Where it was important to do so, we did
digress from the typical PRA binary look at the world.
But in general it is a binary look at the world.
Another example is the assumption that random events
occur following a Poisson distribution process.
That's an underlying assumption that we've
always used in PRA so you've just got to recognize
that if you're going to model these sequences using a
PRA event tree structure that there are some
underlying assumptions that have just come along with
the PRA process. Those are still there for the most part.
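To make the Poisson assumption concrete, here is a minimal, purely illustrative sketch (the initiator rate is an invented placeholder): the number of initiating events seen over T years is treated as Poisson-distributed with mean rate times T.

```python
import math
import random

# Sketch of the standard PRA assumption that initiating events arrive
# as a Poisson process; the rate below is an invented placeholder.

def sample_poisson(mean, rng):
    """Draw one Poisson variate by inversion (fine for small means)."""
    u, k = rng.random(), 0
    p = math.exp(-mean)
    cum = p
    while u > cum:
        k += 1
        p *= mean / k
        cum += p
    return k

rng = random.Random(42)
rate = 0.02                       # initiators per reactor-year (made up)
years = 40.0
counts = [sample_poisson(rate * years, rng) for _ in range(10000)]
avg = sum(counts) / len(counts)   # should sit near rate * years = 0.8
```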
The next thing I should point out is that
there was a screening step performed on really the
collective PRA information about the sequence as well
as the T-H information about the sequence. Before
that information was passed on to the fracture
mechanics modeling, there was some screening done.
For instance, the PFM folks did not
analyze every sequence that comes out of the PRA
model. There are something like 160,000 sequences, I
think, in this model. We could not do 160,000 PFM
runs or whatever. Even if you include the fact that
we binned a lot of them, etc., there would still be
many more than what PFM analyzed.
Largely that screening was done on what we
think is a very conservative basis: for instance, low
frequency scenarios. In other words, the frequency was so low
that even if you assumed the conditional probability
of thru-wall crack frequency would be -- excuse me --
that the conditional probability of vessel failure
would be something approaching one and you just knew
it was in no way going to dominate the results. Those
really, really low frequency scenarios were screened
and weren't even analyzed in the PFM part of the analysis.
Similarly, screening was done on a T-H basis
for some of the accidents, some of the scenarios that
are analyzed in the PRA model. Many of them, especially
when taking credit for operator action, for instance,
would lead to a pretty benign cooling event.
If it was pretty clear that the rate of
cooling that we were getting was just so slow, say,
well under the typical 100 degrees per hour cooldown
rate, or if, in fact, the ultimate temperature that we
would reach was something approaching 400 degrees at
10,000 seconds into the scenario, clearly those kinds
of cooling events are not going to represent
significant challenges in terms of this phenomenon.
Where we saw that a scenario, even though
it may have a high frequency, for instance, from the
PRA aspect, was very benign from a cooling standpoint,
then that scenario was screened and wasn't analyzed in
the fracture mechanics portion of the model.
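The screening logic described above might be sketched like this (the frequency cutoff is an invented placeholder; the 100 degrees-per-hour rate and the roughly 400 degree temperature at 10,000 seconds come from the discussion, but the function itself is purely illustrative):

```python
# Illustrative screening filter: a sequence only goes on to the PFM
# analysis if its frequency is non-negligible AND its cooldown is not
# benign. Threshold values are placeholders, not the project's.

FREQ_CUTOFF_PER_YR = 1e-8      # below this, even CPF = 1 can't dominate
COOLDOWN_RATE_LIMIT = 100.0    # deg F per hour, typical cooldown rate
BENIGN_FINAL_TEMP = 400.0      # deg F reached by 10,000 s into the event

def passes_screening(freq_per_yr, cooldown_rate, temp_at_10000s):
    if freq_per_yr < FREQ_CUTOFF_PER_YR:
        return False   # so rare it cannot dominate the results
    if cooldown_rate < COOLDOWN_RATE_LIMIT:
        return False   # cooldown too slow to be a PTS challenge
    if temp_at_10000s >= BENIGN_FINAL_TEMP:
        return False   # never gets cold enough to matter
    return True

# A rapid, deep, credible cooldown survives screening:
goes_to_pfm = passes_screening(1e-5, 300.0, 150.0)
```

As noted in the discussion, relatively little was actually screened out; many benign scenarios were run anyway as a check.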
Having said that, I should point out that
actually there wasn't a lot of screening done. I
would say a lot of the scenarios passed through and
were analyzed anyway just to make sure that they were
not PTS challenges.
The point is if it was pretty clear that
the frequency was just so low, or the amount of
cooling was just so benign that it was not going to be
a dominant contributor to this challenge, the PFM
folks never even saw that scenario so there was a
certain amount of screening done.
As has already been pointed out, at this
point in time external event types of scenarios,
fires, floods, seismic, that could somehow cause an
overcooling event and how would the operator respond
to this event given that there is also a fire going on
in the plant, for instance. That has not been
analyzed yet. It will be.
We have an approach, a couple of
approaches actually, outlined. Those are being
reviewed and we are planning to proceed with doing
something in the external event area very shortly.
Realize the results you're going to see now and the
things I'm going to talk about don't address external
events at all.
On the considerations side, I think a couple
of key things that we need to keep in the back of our
mind as we look at the results of Oconee and
subsequent studies that are going on in terms of
Beaver Valley and the other plants.
We are looking at both full power and hot
zero power initial conditions. We are looking at
different decay heat levels when this "overcooling
event" occurs.
Obviously if you're at a hot zero power
condition and all of a sudden you have a severe
overcooling event, the plant response is going to be
somewhat different because it doesn't have that high
decay heat to kind of slow the thermal response down
in the plant. We are looking at both full power and
hot zero power types of scenarios.
Secondly, the timing of the events.
Again, a PRA event tree model in large part does not
see time. It doesn't know when things happen. It
just knows that they happen. Either a valve sticks
open or it doesn't but it doesn't say when does that
valve stick open.
For the large part, the model does not see
time. However, having said that again, where time is
important and where we did need to think about late
occurrences of events or whatever, those have been
included and, again, I think that will become clearer
as we proceed through the presentation and
particularly in the example when we get to it.
Finally, a point I want to make about
operator actions is that typically in PRA we have
always tended to model what we hope are the more
important errors of omission. I think a serious
attempt has been made to look at acts of commission.
In fact, there are a lot of -- I'm calling
them acts on purpose because they are really not
errors given the situation for the most part. But there
are conditions under which the operator will induce,
if you will, a cooling to the primary system. Just to
give an example, in a loss of heat sink type of
accident where, let's say, we've tripped a plant and
we've lost all feed to the steam generators.
One of the things the procedures and the
training will direct the operators to do is to
depressurize the secondary side of the plant to try to
get feed into the steam generators by virtue of the
condensate booster pumps.
That act of depressurization causes a
cooling in the primary system because he's going to
open up by-pass valves to depressurize the plant which
is going to cause cooling but that's a proceduralized
directed act of commission to, if you will, add
cooling to the plant. Obviously the operator is going
to try to do it in a controlled fashion and then not
cause a serious overcooling event.
Nevertheless, those types of acts are
included in the model and we actually did look for
other types of acts of commission where it would be a
mistake on the operator's part, if you will. Quite
frankly, we didn't find too much in the way of
contexts that would cause a high probability of that occurring.
Nevertheless, there are procedurally
directed acts which do add cooling to the event when
it's necessary to do so to, for instance, avoid a core
damage event and those acts are included in the model.
Finally, there are four functions of
interest really that we are looking at and I'm going
to point those out here in the next slide if there are
no questions on this one.
This is sort of a complex diagram and I
certainly don't plan on going through each and every
line of this unless there are questions from the
committee. I think the main thing that I want you to
walk away with in terms of looking at this diagram is
that this is a functional representation of the types of
scenarios that are actually in this 160,000 sequence
model that we have super simplified down to one page.
As you can see, what we are really looking at is some
sort of an initiating event that comes along either
while the plant is operating at full power or at a hot
zero power condition.
Then we're looking at the status of four
functions that are listed there across the top. What
is the status of the primary integrity? For instance,
do we have a loss of coolant accident going on or is
the primary basically still intact?
What is the status of secondary pressure?
That's where you're covering things like do we have a
turbine by-pass valve stuck open that is
depressurizing the plant and causing a cooling of the
primary system.
What is the status of secondary feed? Is
that being properly controlled or are we indeed
overfeeding the steam generators which also, again,
could induce cooling in the primary system and,
therefore, have an effect on the downcomer wall.
Then, finally, what is the status of the
primary flow and pressure conditions. This is where
you address such things as high pressure injection,
potential repressurization events, are the reactor
coolant pumps on or off because, again, that's going
to have something to do with the amount of mixing
that's going on in the primary versus the potential
for stagnation conditions.
Those are the four functions of interest.
Let me just say that the model addresses not only each
one of those functions individually but also all the
interactions between them, as well as the fact that
you can have combinations of those occurring at once.
Maybe you are overfeeding the steam generator and at
the same time there's a stuck open TBV so two of the
functions are, in fact, inducing cooling in the
All of those interactions, the multiples
of those interactions, etc., are all handled as part
of this 180,000 or 160,000 sequences. Unless there
are questions on that, that was really all I was
planning to go through there.
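As a purely structural illustration of how combinations of the four functions multiply out (the state lists below are simplified stand-ins I've invented; the real event tree carries far more detail across its roughly 160,000 sequences):

```python
from itertools import product

# Simplified, hypothetical states for the four functions of interest.
primary_integrity = ["intact", "small_LOCA", "medium_LOCA"]
secondary_pressure = ["controlled", "stuck_open_TBV"]
secondary_feed = ["controlled", "overfeed"]
primary_flow = ["RCPs_on", "RCPs_off_repressurized", "RCPs_off_stagnant"]

# Every joint combination is a distinct scenario class, so interactions
# (e.g., overfeed together with a stuck-open TBV) are modeled, not just
# single-function failures.
bins = list(product(primary_integrity, secondary_pressure,
                    secondary_feed, primary_flow))
combo = ("intact", "stuck_open_TBV", "overfeed", "RCPs_on")
```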
This and the next slide serve to
illustrate sort of the process that went on in
building the model. Again, I don't plan on going
through this in a lot of detail unless there are
questions. There are a few things that I want to
point out about the process.
As with any modeling process, the first
thing you've got to do is go out and get a bunch of
information on what it is you are going to model and
how you're going to model it.
You can see there on the left sort of a
long list of things that were collected in order to
make sure that the model was going to be an accurate
representation of the Oconee plant. You can see the
list there. I won't go through it but a couple of
things I do want to point out, especially the last two
bullets there.
We had the fortunate luxury to be able to
visit the Oconee plant and actually spent, I think,
like two or three days there the first time that we
went. During that process Oconee staff and some of
their regular crews were able to perform, I think in
their case, three or four transient scenarios,
different scenarios that represented overcooling
We got to observe the interactions of the
crew, how they operate, roughly how long did it take
to get through various steps in the procedures, etc.
That helped us an awful lot on the human reliability
aspects of this portion of the analysis. I want to
thank the Oconee staff for providing that simulator
time to us and helping us through that aspect of the analysis.
Then lastly the interactions. Again, this
went on continuously. I don't know if Steve Nader is
listening but if he is, he'll tell you that we bugged
him way more times than they were hoping. If we had
questions about anything, "How does this ICS system
work again?" etc., etc., "Where would the operator be
in this step of the procedure at this point in time?"
Whatever questions we had, we would e-mail those
questions to them and the licensee was very prompt in
providing responses to us. We would get on the
telephone and we would discuss those responses. That
was a continuous step throughout much of the analysis.
Again, a point I want to make was it isn't
like we visited the plant on the first day, spent
three days, and then never talked to them again. It
was an ongoing effort through much of the analysis and
almost up to the present time so there's a couple of
things I want to point out.
From the information you collect, you
begin to identify things that you need to make sure
that the model includes and you can see a
representative list of some of those things. I'm
going to talk about some of the arrows in a moment.
Then we started building the actual model.
I just want to point out it's a large event tree,
small fault tree type of modeling process. As I said,
it's something like 150,000 or 160,000 sequences
represented in the model.
At this step you put a model together.
You include in it all the major equipment items that
you need to model like the reactor coolant pump
status, HPI injection, etc., etc. At this point I
just want to point out this is where we did an initial
cut of what the human error probabilities should be.
I should point out relevant to PTS
challenges, especially for secondary events where we
have a problem on the secondary, the operator plays a
very key role in arresting that event. The human
aspects of this are vitally important and need to be
addressed. I think that's one of the major
improvements that we made over some of the 1980's work.
Nevertheless, we have an initial model, an
initial cut of accident sequences including human
error probabilities and so on. There's a preliminary
quantification of that.
At that point we went back to Oconee and
while there was an internal review going on of the
model, the preliminary results, the human error
probabilities we put in, I should say at that point we
already had not only mean estimates for those human
error probabilities, but also had uncertainty bounds
on those human error probabilities.
That was all looked at again by the
licensee. We actually went back to Oconee and visited
them, I think for a day, presented the preliminary
results, left them with information on CDs and
whatever, and then they provided comments back to us
where they felt that we were either overly
conservative or, even in some cases, I can remember
one in particular, where the Duke staff pointed out
that they thought we were a little too optimistic with
our human error probability.
Nevertheless, it was give and take there
where they commented on the preliminary results. So
this analysis also took advantage of an intermediate
step where the Oconee staff looked at our preliminary
results, commented on that in terms of the PRA model
structure, some of the data that we had, the human
error probabilities, etc., provided comments back to
us, as well as we, of course, were performing an
internal review.
MEMBER SHACK: Where were your human error
probabilities coming from? What models were you using?
MR. KOLACZKOWSKI: The human error
probabilities, actually the way they were done on
Oconee it turns out it's going to be slightly
different from the way we may do it on some of the
other plants. We had a number of, I guess for lack of
a better phrase I'll call them, human reliability
experts pulled together from among the NRC contractors.
Recognize that we had the procedures. We
had observed these simulations, etc., and so forth.
We had many discussions with the licensee. The first
cut was that we went through essentially an expert
elicitation process.
We looked at the various scenarios and the
NRC contractors came up with an estimate of the mean
probabilities as well as a cut at uncertainty bounds
on the probabilities of human error failures and
various conditions. As I pointed out, these covered
again both errors of omission as well as what we
thought were key acts of commission.
Then that was presented to the licensee
when we went back to Duke. In fact, I sat down in an
all-day meeting with a number of their operators and
trainers, as well as some of their engineering staff,
and we went through almost number by number what the
human error probabilities were in the model and
presented why we thought the means ought to be what
they are, why we thought the uncertainty bounds ought
to be what they are.
The uncertainty bounds did try to
consider, if you will, the varying context of the
scenario such as what if that key instrument were
failed. How much higher would the human error
probability get without that key instrument or
whatever. Of course, now you've got to also factor in
what is the likelihood of that key instrument failing.
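The treatment described here, where a failed key instrument raises the human error probability and the likelihood of that instrument failure is itself factored in, can be sketched as a simple probability-weighted combination. All of the numbers below are invented for illustration:

```python
# Illustrative only: combine a nominal human error probability (HEP) with a
# degraded HEP that applies when a key instrument has failed, weighted by
# the likelihood of that instrument failure. All values are invented.
p_instrument_failed = 1.0e-3   # probability the key instrument is unavailable
hep_nominal = 5.0e-3           # HEP with the instrument working
hep_degraded = 5.0e-2          # HEP without the instrument (assumed 10x worse)

hep_effective = (1 - p_instrument_failed) * hep_nominal \
                + p_instrument_failed * hep_degraded
print(hep_effective)
```

The spread between the nominal and degraded values is one way the uncertainty bounds can reflect the varying context of the scenario, as the testimony describes.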
Quite frankly this was done in, I'll call
it, not subjective but I'll say a simplified manner.
The point is the uncertainty bounds did try to account
for variations on, if you will, the ideal state. That
was all gone over with the Duke staff. Then they
provided us some comments on the spot on some of those
as well as then provided some written comments later.
We incorporated those comments where we thought it was appropriate.
To that extent it certainly reflects not
only the contract or expert's opinions on what those
probabilities of failure should be, but also the
licensee's feeling on what those probabilities of
failure should be through the comment process.
Let me just mention on some of the other
plants, like Palisades for example, we actually sat down
with the licensee and came up with the human error
probabilities together as a group through an expert
elicitation process.
To that extent we have even done, I think,
a little better job on Palisades as opposed to Oconee
where the contractors did a first cut and then we had
the licensees review and comment and then we changed
accordingly where we thought appropriate.
In the case of Palisades we actually sat
down together. Licensees and contractors were among
the experts and we came up with what we thought the
human error probabilities ought to be.
CHAIRMAN FORD: Alan, this whole question
of human performance is not my area but you did
mention that it was a large input to your overall
event tree scenario. When you look at Ed's view graphs
7 and 6, are these curves in view graph 6 going to
change much? Oconee is a good plant. They are good
operators. Presumably there are some bad operators.
Will the bad operators markedly move those curves?
MR. KOLACZKOWSKI: I don't think so. You
have to recognize that you are asking a question where
I'm trying to now predict what are all the other plant
analyses going to look like.
MR. KOLACZKOWSKI: I guess I'll just say
this. I'll try to answer it this way and we'll see if
that is satisfactory for you. I think certainly today
in the year 2002 we are much more sensitive to
worrying about PTS than perhaps we were back when
these analyses were originally done which is
representative more of a late '70s, early '80s kind of
era in terms of procedures, training, etc.
I think the industry at large has made
vast improvements in terms of dealing with potential
overcooling events and that is reflected in the way
the procedures are written, whether it's a B&W plant
or a Westinghouse plant or a CE plant. A lot of
improvements have been made in the procedures and the
training to be more sensitive to overcooling, to
control overcooling events, etc.
Having said that, as a result, I have a
feeling that a lot of the secondary kinds of scenarios
that can cause overcooling, be they overfeeds to the
steam generator, be they secondary depressurizations,
etc., I think the procedures today and the training
today in large part, no matter which plant we look at
-- again this is a presumption on my part but you
asked me the question and I'll try to answer as best
I can -- I think are such that we are going to find
that the operators are going to take actions fairly
promptly to deal with that situation.
Therefore, through their acts we can make
a lot of the secondary kinds of overcooling events go
away. In the training and the procedures we are going
to see throughout the three NSSS vendors a consistency
in terms of there being a concern about overcooling:
the operators are trained to address those events, and
the procedures are written to address them promptly.
I would hope that we are going to find
across all the plants that one day you will be able to
give a considerable amount of operator credit for
arresting those events before they become serious
overcooling events.
That leaves the primary side. On the
primary side, depending on the nature of the event,
let's say, for instance, we take small loss of coolant
accidents which are going to lead to overcooling
situations and we need to inject obviously to deal
with the loss of inventory, in that situation there's
not much the operator can do, quite frankly.
I mean, the LOCAs happen, the cooling is
going at some rate and they have to inject water into
the plant. Obviously we do need to worry about
throttling that water when we do meet those throttling
criteria. That aside, at least during the initial
1,000 seconds in the event, the plant is going to
respond the way it's going to respond and there's not
much that can be done.
There the operator does not, in fact, at
least during the early phases of that accident, play
all that significant a role. I guess I'm giving a
long answer to your question but I think to the extent
that we find that the procedures and the training are
reasonably consistent and there is a sensitivity to
PTS challenges, hopefully we will continue to show
that the operator can arrest a lot of the secondary
kinds of problems.
The primary, there are some that you just
can't do much about, at least during the initial phase
of the accident. However much of cooling we are going
to get, that's how much we're going to get and there's
really not much the operator can do about it.
The point is that remains consistent
regardless of whether we are looking at Calvert Cliffs
or Beaver Valley or Fort Calhoun or whoever. The
point that that continues to exist, I think we'll see
these general conclusions kind of holding. Do I
expect vast changes in that curve that we saw in slide
7? My hope is that we won't.
CHAIRMAN FORD: Presumably jumping forward
to maybe the very last graph, when the licensee in the
future uses this new procedure, not the Reg Guide
1.154 or whatever the number was, he's going to have
to measure his HEP for his plant. No?
MR. KOLACZKOWSKI: I don't believe so. I
think we're going to try to by being conservative
about where we ultimately set the risk criteria.
Hopefully we can make an argument that we are covering
all the various plant designs that are out there as
well as all the variance that there might be operator
actions recognizing, as you said, that there are some
different levels in terms of the amount of perhaps
attention to this as an event.
There are different training programs,
etc. Nevertheless, I think they all have some degree
of homogeneity to it in terms of addressing PTS
challenge events. I think we will try to take credit
for that. Maybe Mark can provide a better response.
MR. KIRK: No. I don't think so.
MR. KOLACZKOWSKI: Well, we certainly
aren't going to ask all the plants to do a human
reliability test.
MR. KIRK: No. Maybe I would just like to
say two words and then expound on them; screening
criteria. We are trying to develop a screening
criteria in the same sense that we've got one now and
accepting the fact that right now it's widely regarded
that the screening criteria is the limit and you just
give up.
The fact is it is a screening criteria and
you can do other things like Reg Guide 1.154. I am
reminded of getting letters in the mail from my
financial adviser with the big warning that says,
"Past performance does not indicate future trends."
Right now the trends are pretty positive
that if we continue to go the way of the Oconee plant,
that we could be looking at raising the screening
criteria by anywhere from 30 to 80 degrees, just to
pick numbers out of the air.
Perhaps more importantly, that screening
criteria might be based on mean values rather than
adding the margin terms that we do now. That provides
substantial relief relative to -- you cited Beaver
Valley and Palisades, which are right now sort of on
the brink. To provide that much of an increase in the
materials screening criteria would, for all intents
and purposes, certainly for a 40-year license life,
take PTS off the map.
I don't have the numbers stored in my
brain for 60 years but probably there, too. Again,
that is assuming things continue to go the way they
are. If somebody crossed that line, then yeah, they
would have to do a more detailed analysis and would
have to quantify some of these things.
MR. HACKETT: That's what 1.154 would be
about. As Mark said, that is not an option that has
been a popular option. It may not be in the future.
One other thing I'll just make a comment
on what Mark said; you just want to be careful, since
this is a transcribed meeting, about Beaver Valley and
Palisades being on the brink. That would be at EOL,
obviously not right now.
MR. HACKETT: And those are significantly
in the future for both of those plants.
MEMBER SHACK: When you are comparing with
the Oconee PRA, they would presumably screen a lot of
these sequences out of their PRA since, if you don't
have an embrittled vessel, they don't lead to core damage.
Do they include these and then screen them at some
point? When do they get cut out of the PRA?
MR. KOLACZKOWSKI: In terms of the type of
accidents that can happen or types of challenging
scenarios that can happen?
MR. KOLACZKOWSKI: As you see in the
diagram right down here in the lower left it says, "No
frequency screening," etc. We did not screen in the
model at this point on the basis of --
MEMBER SHACK: I was thinking back to the
Oconee PRA where presumably they don't consider PTS as
a failure.
MR. KOLACZKOWSKI: Oh, in terms of their
core damage scenarios.
MEMBER SHACK: They screen these scenarios
out somewhere along the way and I was just wondering
if they have them somewhere and you've gone back and
compared with them or they have screened them out at
such an early stage there's nothing to compare with.
MR. KOLACZKOWSKI: I wish in a way -- is
anybody here from Oconee? Okay. I'm trying to
remember. I don't think that their core damage model
-- let me call it that -- probably has any significant
PTS scenarios remaining in it. I hope I've said that right.
On the other hand, having said that, the
PTS scenario work that was done in the '80s is
probably in large part what they still represent as,
if you will, their PTS risk. Okay? Now, that is
essentially going to get updated, I'm sure, by this work.
Now, to what extent that this will get
folded into their core damage model, I guess I really
can't speak to that. You will have to ask Duke that
question. Yeah, clearly their core damage model now
certainly does not have a significant portion of it
dedicated to worrying about PTS challenges.
This work, I'm sure, especially if they
feel like -- I do think at this point they feel like
it's a reasonable representation of their plant and
their PTS risk. You would hope this will be reflected
in their analyses in the future or whatever. Again,
I can't speak to that.
MEMBER SHACK: You tried to make this sort
of a best estimate, right?
MR. KOLACZKOWSKI: Best estimate with uncertainties.
MEMBER SHACK: More conservative than you
typically are in a PRA. On an unlimited budget you
would have done it even better.
MR. KOLACZKOWSKI: That is correct. But
you're right. The purpose of this was to try to be
best estimate. However, reflecting the uncertainties
in all of the inputs that go in, and I'll be talking
more about the uncertainties shortly. But, yes, it is
meant to be a best estimate and not to be needlessly
conservative where we don't have to be. That is the intent.
MEMBER BONACA: Just a question I have.
Regarding the primary system injection they were
talking about some sequences where he doesn't have
anything he can do.
On the other hand, I mean, if you have a
large-break LOCA, clearly you will have a thermal
challenge, so you will have excessive cooldown, but you
don't have a high-pressure challenge. If you have a
small-break LOCA, you would be hanging up there in
pressure but you don't put in much water. Even feed
and bleed sequences, it seems to me, you would have
the same situation. This is pretty much self-limiting.
MR. KOLACZKOWSKI: Yeah, to some degree.
MEMBER BONACA: So you only found some
sequences which are, in fact, a challenge. I was trying to
understand it.
MR. KOLACZKOWSKI: That is true. They are
to some degree self-controlling. Again, I didn't mean
to imply that if we have a loss of cooling action in
the primary, well, that's it. We have a major PTS
challenge and there is nothing the operator can do
about it. If it came across that way, I apologize.
MEMBER BONACA: No, no, no. I was trying
to understand, in fact, if the procedures would have
to have some warning to the operator, for example, for
bleed and feed in which you are intentionally opening
your PORVs and feeding into the system. Even in that
condition it's --
MR. KOLACZKOWSKI: It's only going to feed
in so fast because obviously the pressure is at a
certain pressure so the pump can only pump so much
flow, etc. You are absolutely right. I mean, the
smaller the LOCA, while the pressure may tend to stay
higher, obviously the amount of cooling you get is
going to be not as severe as if it was a much larger LOCA.
Then again, on that side, you also have
the pressure staying relatively low so that is helping
you to some degree. So you are right, there is some
self-limiting features as to the way the physics
works. What I was trying to imply is that in the
first maybe 1,000 seconds of that event, really the
operator is not influencing the event very much.
It's going to do what it's going to do and
then at some point it begins to throttle back
injections so that we don't get into a
repressurization, let's say, scenario if the LOCA is
small enough. Then the operator is also again
influencing what the response is going to be
thereafter. But in the first 1,000 seconds of the
event, pretty much the plant is going to do what it's
going to do.
MEMBER SHACK: Let me rephrase Mario's
question in perhaps a different way. Is there some
small set of the 108,000 sequences that really
dominate the PTS risk?
MR. KOLACZKOWSKI: We believe so. Yes.
MEMBER SHACK: Can we describe -- I mean,
are we talking about LOCAs?
MR. KOLACZKOWSKI: Maybe a dozen.
MR. KOLACZKOWSKI: A dozen of that order.
MEMBER SHACK: They are slow-break LOCAs?
MR. HACKETT: I guess we are preempting.
Alan does get to this.
MEMBER SHACK: Okay. Maybe we'll just
wait then.
MR. KOLACZKOWSKI: But the short answer is
it's primarily LOCAs varying in sizes from very small
to what I would typically characterize as maybe a
medium break, not the double guillotine 34 or 30-inch
line break.
MEMBER SHACK: LOCAs that could actually happen?
MR. KOLACZKOWSKI: Well, I don't know.
That depends on your opinion of what could happen.
MR. KIRK: You've quantified the
probability of that.
MR. KOLACZKOWSKI: We have quantified the
probability of that. Small and medium LOCAs of
varying types, relief valves sticking open, that kind
of thing on the pressurizer.
MR. KOLACZKOWSKI: Were are going to
describe later in more detail what the scenarios are
that are dominated.
CHAIRMAN FORD: Okay. We are scheduled to
have a break at 10:15. I don't want to destroy your
crescendo here. I'll leave it up to you two to decide
when we are going to have a break.
MR. KOLACZKOWSKI: I'll tell you what. If
you'll let me just go through the next three slides
and then I'll be ready to address the uncertainty,
maybe that's a good place to do a break first. Is
that okay?
MR. KOLACZKOWSKI: This is just a
continuation of the process. I pointed out that we
went through an interim step of sort of getting some
preliminary results, going to Duke, getting some
comments, etc., incorporating those comments where
appropriate. Obviously there was a revision of the
model, that step to incorporate those and requantify.
I should also point out that the binning
that went on that I mentioned before, it wasn't like
we took sequences, binned them once and then forever they
were in those bins. As we learned more about what was
dominating, about what was important to the plant
response we may decide that we have to make the
binning a little bit more refined than what we have
currently done.
We would go back and redefine a new bin,
rebin the accident sequences, do a thermal hydraulic
run on that bin so we had a representation of what
that bin represented, etc. There was an iterative
process going on here as binning became more refined
such that the bins really were a reflection of what
the scenario really was as best as we could fit.
We weren't doing these gross bins like
back in the '80s work where they also had 180,000
sequences and they put them into 10 bins. Whether it
was a small steam line break or a large steam line
break, it thermal hydraulically looked like a large
steam line break.
We were able to make those distinctions
because we could run many more bins, computer power
being what it is today, etc., and so forth. I just
want to point out that the binning kept getting
refined, refined through this process.
Finally we get to a final quantification
and binning step and out comes eventually for each T-H
bin what the frequency is of the scenarios that fit
into that particular T-H bin. That is really the
primary product, a description of the scenario and
what the frequency of that scenario is.
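The final quantification and binning step described here, where the primary product is the frequency of the scenarios falling into each T-H bin, can be sketched as a simple summation; the sequence descriptions, bin names, and frequencies below are invented placeholders:

```python
from collections import defaultdict

# Illustrative sketch of the final binning/quantification step: each
# accident sequence carries a frequency, and the PRA stage's product is
# the total frequency of each thermal-hydraulic (T-H) bin.
# The sequences, bins, and numbers are invented for illustration.
sequences = [
    ("small LOCA, HPI on, no throttle",      "bin_A", 2.0e-4),
    ("small LOCA, HPI on, late throttle",    "bin_A", 5.0e-5),
    ("stuck-open SRV, recloses at 3000 s",   "bin_B", 1.0e-4),
    ("steam line break, SG isolated 10 min", "bin_C", 3.0e-5),
]

bin_frequency = defaultdict(float)
for _, th_bin, freq in sequences:
    bin_frequency[th_bin] += freq

for th_bin, freq in sorted(bin_frequency.items()):
    print(th_bin, freq)
```

Rebinning, as the iterative refinement in the testimony describes, amounts to reassigning sequences to new bin labels and re-running this summation.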
The only other thing I want to point out
on both this and the previous slide is that you see a
lot of arrows going to sequence definitions from
various steps and then T-H input being provided back
to the PRA process.
Again, that is just meant to be a
representation. It isn't like we took the sequences
and in one step provided them to the T-H folks and
they ran a bunch of T-H analyses and then we went off
and provided that stuff to PFM. This was highly
interactive, kept occurring over multiple steps in the
process to refine the interactions.
We talked about the fact that this project
had to interact a lot between the PRA, or
representation of this model, the T-H representation
of these scenarios, and then the PFM folks.
This largely is just meant to show you the
amount of interaction that was going on between the T-
H folks and the PRA folks who had to talk to each
other sometimes on a daily basis.
This is just meant to be a very quick
summary of some of the major features of the model.
On the left-hand side you see the initiating events
that we included in the model. Let me just point out
that they in large part are the same as what was in
the '80s work but, in fact, we did include, I think,
one or two things that perhaps the early work did not.
But for the large part there was not a
significant difference here in terms of the initiating
events that are modeled in this work as opposed to
what was done in the early '80s work.
Again, you notice these were looked at
both from a full-power and hot zero power condition.
Then over on the right-hand side you just see a quick
summary of, again, the four major functions that we
are concerned with and the equipment that is
represented in the PRA model.
What is the status of high pressure
injection charging? What is the status of emergency
feed? Is it overfeeding the steam generator? Is it
underfeeding? Is it being properly controlled by the
operator?, etc. You get a feel for the equipment that
is manifested in the PRA model somewhere in terms of
the status of that equipment.
Of course, another important part, as I
pointed out, is the operator actions. This is, again,
probably an over-simplification but, nevertheless,
does represent the types of operator actions that are
considered in the model some place.
I just want to point out again that if you
look at the list you will see both examples of errors
of omission but you will also see acts of commission
such as under secondary pressure control operator
creates an excess steam demand.
As I pointed out, in a loss of heat sink
accident where we've lost all feed, they are going to
purposely depressurize the secondary side of the plant
to try to get condensate feed into the steam
generator. Of course, they will try to do that in a
controlled manner.
Nevertheless, the operator is inducing to
some degree an excess steam demand event by procedure.
That act is included as a potential mechanism for how
we can get an overcooling situation in the plant.
Again, this is a list of the types of operator actions
that are considered in the model.
MEMBER BONACA: In the thermal hydraulic
analysis did you assume, for example, complete mixing
and just bulk temperatures, or did you also have an
assessment of azimuthal variation in the vessel?
MR. KOLACZKOWSKI: I'll let Dave Bessette
maybe answer that question.
MR. BESSETTE: Of course, the RELAP code
can only treat large volumes with a single fluid
temperature, so we had to address those questions
differently. We did that through a combination of the
experimental program at Oregon State University in the
APEX facility.
Also looking back on the various mixing
experiments we did back in the '80s in the aftermath
of the first PTS study. In addition, we did some CFD
analysis associated with the Oregon State work to
compare where you do get three-dimensional fluid flow.
So with the combination of the CFD
analysis and experiments we were able to conclude that
there are no substantial azimuthal or circumferential
temperature variations in the downcomer adjacent to the core region.
MEMBER BONACA: So you didn't apply any
multiplier factor or anything of that kind? I mean,
you didn't have to do that?
MR. BESSETTE: That's right. We found a
simple temperature boundary condition to pass on to
the PFM people to be sufficient and adequate.
MR. HACKETT: As we had suggested, this is
a good point to take a break.
MR. DUDLEY: And during the break I would
like to check the people who are on the bridge line so
if they would stay there until we chat. Thank you.
CHAIRMAN FORD: Okay. I hereby recess
until 10:30.
(Whereupon, at 10:14 a.m. off the record
until 10:34 a.m.)
CHAIRMAN FORD: Okay. I'd like to call us
back into session. Since Graham Wallis will not be
here until lunch time, I think Ed and I have decided
to swap the PFM and the thermal hydraulic sessions to
get a good review.
MR. KIRK: It works for me. I'm not sure
Dave would like it.
MR. KOLACZKOWSKI: Shall I go ahead?
MR. KOLACZKOWSKI: Okay. You've heard at
least the major aspects of the modeling process and,
if you will, at least some of the main features of
what is included in the model in the way of the
functions of the equipment that, therefore, was
relevant equipment that we need to worry about, and at
least a quick overview of the operator actions that we
tried to consider.
I would like to now change focus a little
bit and talk a little bit about the uncertainty
treatment in this aspect of the entire analysis. This
first slide here, No. 21, is meant again just to keep
in mind in terms of what is the PRA portion of the
analysis trying to produce. What is its product and
ultimately working towards estimating a through-wall
crack frequency on the vessel.
I have a statement of that. Hopefully a
somewhat succinct statement of what the PRA product is
trying to produce and, therefore, what uncertainties
we need to worry about.
Let me go ahead and -- I apologize. I'm
going to go ahead and read the statement and try to
focus on the key words. We are trying to come up with
the frequencies of a wide range of representative
plant responses to plant upsets. I'll call those
plant responses the plant upset scenarios.
We are trying to get the frequencies of a
wide range of scenarios that are each described by
some set of thermal hydraulic curves in terms of
pressures and temperatures and so on which occur as a
result of mitigating equipment successes and failures
as well as operator actions, that result in various
degrees of overcooling of the internal reactor vessel
downcomer wall.
Keep in mind here that we really have as
an output unlike the typical core damage PRA type
models where we sort of define a state of core damage
and then we say, "Okay, does this scenario lead to
core damage or not."
In this case, we have a much more complex
situation in which you cannot define a single state
that says this is overcooling or this is a PTS
challenge and this is not. We actually have degrees
of overcooling and, therefore, degrees of PTS
challenges. In reality we don't have this binary
output of either it is core damage or it's not.
What we really have is an output that says
this represents some amount of overcooling which may
or may not be a serious challenge from a PTS
standpoint. This scenario represents yet a different
degree of overcooling, maybe worse, that represents
perhaps a greater challenge.
That is why you have to go through all
this binning and separate those, etc. It isn't that
all the scenarios are being binned to a single output
that says core damage. These are degrees of PTS
challenge that makes the process much more complex as
a result.
Nevertheless, we are trying to come up
with frequencies of scenarios that represent potential
PTS challenges and they are various degrees of
overcooling in the plant that could occur.
Now, in trying to come up with that
product, of course, we build a model to do that. As
a result, the model represents, to some degree of
course, sources of uncertainty in that how accurate is
the model really representative of the real world.
We'll talk about that in just a moment.
Then even given the model which
represents, if you will, these scenarios that are
going to go on to the rest of the analysis and will be
analyzed both thermal hydraulically and then, where it
is a serious challenge, also modeled from the fracture
mechanics point of view.
Even given the model then, of course, the
other important product of this portion of the
analysis is to come up with a frequency of scenarios
each of which is an overcooling situation. Obviously
there is uncertainty in what those frequencies are and
we'll talk about that.
CHAIRMAN FORD: Alan, is it a given in all
those scenarios that the pressure remains constant?
CHAIRMAN FORD: So is that not also the
MR. HACKETT: I was going to say you could
probably have added to Alan's definition overcooling
and potential overpressurization.
CHAIRMAN FORD: Maintenance of pressure.
MR. KOLACZKOWSKI: That is probably true.
MR. KOLACZKOWSKI: I did not intend to
mean that we did some sort of screening or didn't
represent certain pressure situations or not. I did
not mean to imply that.
Next slide. From the modeling
perspective, again, what the PRA and the way PRA
models the world, if you will, is that each scenario
represents a collection, if you will, an interaction
of events.
The plant is sitting there at some stable
state, be it either at full power conditions or hot
zero power conditions, and then along comes some sort
of an upset, an initiating event, as we call it, which
causes a transient and a subsequent response to be
required in terms of the plant response.
The plant response depends on what the
status is of various equipment, whether it be the
status of emergency feed, whether it be overfeeding or
being controlled, what is the status of the turbine
bypass valves, have they properly throttled or is one
stuck open which causes an overcooling situation,
etc., depending on the nature and the status of the
various equipment that is relevant to a potential
overcooling scenario or repressurization scenario.
Depending on the status of that equipment
and the subsequent operator actions, you could view
each scenario, if you will, as an initiating event,
some status of equipment, and some status of operator
actions which combine together either does lead to a
overcooling type of scenario and it leads to a
potential PTS challenge, or a very controlled non PTS
kind of an event. That's the way the model
essentially is representing the real world.
I want to point out, and I mentioned
earlier, that the model is for the most part time
independent. It doesn't necessarily know when these
things are happening. It's just modeling the world in
a way that says this initiating event has occurred.
Either the valve has stuck open or has not, but we
don't necessarily, at least in the first cut of the
model, say anything about when the valve sticks open.
Now, it does turn out that where that is
important in describing the scenario and the potential
amount of overcooling, etc., we do go back later and
add timing into the situation. Again, I think the
example later on will demonstrate that hopefully very clearly.
I do want to point out that there was a
place where timing was introduced into the model right
from the start, and that is when the operator action
occurs. Recognize, as an example, we do have a
secondary depressurization event going on, some sort of
excess of steam demand.
By procedure one of the things that the
operator will be embarking on once they detect that
situation is to isolate that faulted steam generator.
How quickly the operator performs that in large part
will dictate how much overcooling we get. The more
delayed the actions, the more overcooling we will
have. Whereas if he takes those actions very
promptly, we'll hardly have any kind of overcooling
event at all.
What we did do and, again, the example
will demonstrate this, is that we did in the model
from the start take this time continuum and break it
into very discrete time points in which we said let's
address and quantify sequences such that we are asking:
What's the likelihood that we have this
initiating event, some status of equipment states, and
the likelihood that the operator takes whatever the
action is supposed to be within, let's say for the
sake of argument, 10 minutes into the event.
If the operator does successfully perform
those actions, maybe the event is for all practical
purposes over. Maybe not even a very serious
challenging event and would go into, if you will, a
very benign overcooling bin.
On the other hand, in the model we also
say what if he didn't take that action in 10 minutes?
What's the likelihood that he would have taken it by
20 minutes into the event and we reassess a
probability for that likelihood.
Now, obviously if the operator does not
perform the action within 10 minutes time but does
perform it within 20, he may have moderated the
overcooling situation ultimately, but obviously
conditions got much worse before that arrest took
place, so that would perhaps go into a different T-H bin.
We would actually perform a thermal
hydraulic analysis of the same scenario with the
operator taking the action in 10 minutes and then the
same scenario with the operator action occurring in 20
minutes and we put those in two different bins.
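[The discrete-time branching just described can be sketched in a few lines. The probabilities below are purely illustrative placeholders, not values from the analysis.]

```python
# Illustrative sketch of splitting the operator-action time continuum into
# discrete branches (action by 10 min, by 20 min, or never). The
# probabilities are hypothetical, not values from the study.
p_by_10 = 0.90             # operator isolates the faulted generator within 10 min
p_by_20_given_late = 0.70  # acts between 10 and 20 min, given not by 10 min

branches = {
    "benign (action by 10 min)": p_by_10,
    "moderate (action by 20 min)": (1 - p_by_10) * p_by_20_given_late,
    "severe (no action by 20 min)": (1 - p_by_10) * (1 - p_by_20_given_late),
}

# The three branches are mutually exclusive and exhaustive.
assert abs(sum(branches.values()) - 1.0) < 1e-9
```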
MEMBER BONACA: Just for clarification,
you're talking about time as an element, but if the
cooling occurs at a rate which is less than the
normal cool down rate, you will not consider it a
challenge?
MR. KOLACZKOWSKI: That is true.
Generally if a scenario was cooling down at less than
100 degrees per hour, that was screened from --
MEMBER BONACA: You just take it out.
MR. KOLACZKOWSKI: We still went ahead and
kept it in the model and calculated a frequency for
it. Ultimately none of that -- primarily because the
thermal hydraulic response, as you say, would be a
slow cooling situation -- was ever passed on to the
fracture mechanics analysts.
MEMBER BONACA: You are using the word "screened."
MR. KOLACZKOWSKI: That is correct.
MEMBER BONACA: To some degree, I mean,
you still -- the main concern is temperature gradient
across the wall or the vessel.
MR. KOLACZKOWSKI: That is correct.
MEMBER BONACA: That is a time dependent
function. Even for those overcooling situations,
which are benign, you are saying, at what point do you
introduce the dependency so that you can eliminate
some of those, or do you introduce the dependence at --
MR. KOLACZKOWSKI: Well, in this -- I'm
not sure I follow your question. In this portion of
the analysis we did not, for instance, eliminate the
sequence from being quantified.
MR. KOLACZKOWSKI: Even if we saw that it
would be a fairly benign or slow cooling ramp, we
didn't at that point say we're not even going to
figure out what its frequency is. We kept it in the
model and so many of those 180,000 sequences are, in
fact, I'll call them relatively benign cooling
scenarios but we went ahead and calculated their
frequencies anyway so they are in the model.
MEMBER BONACA: So this benign definition
you're using is more qualitative in the sense that it
is a slower cool down rate. There is still a cooling
with respect to the cool down rate, but it's not such
a challenge. I mean, you haven't attached any
quantitative definition of what it means.
MR. KIRK: This is Mark Kirk. If I could
just interject something that came up in our
discussions yesterday that is perhaps relevant to
point out here. Yeah, we made some -- well, I was
going to say a priori assumptions, but assumptions
based on our previous understanding of PTS, and we've
already talked about them. Sequences where the mean
temperature didn't fall below 400 degrees and that
didn't cool at a rate in excess of 100 degrees
Fahrenheit per hour were not passed on to PFM.
Having said that, there were some
sequences that were just over that line that were
passed on, and we have quantitative information on
them, and those invariably came up with zero
conditional probability of failure. We haven't
directly tested our screening, but we know that those
that just made it over the bar didn't matter anyway.
I think that makes me feel good about it.
MR. KOLACZKOWSKI: Yes, it somewhat
validates the screening that we did do.
Okay. So I talked about the fact that
there is one very important aspect of timing that is
introduced in the model from the start, and that is
the timing of the operator actions. At some point we
had to say if they don't do it by X minutes, we will
make the assumption that the crew never does it. That
would be the worst of conditions, if you will, that
the operator does not arrest the situation at all.
Quite frankly, we use judgement as to when
to pick those times depending on how fast the ramps
were going down, when did we think it would matter.
Again, a lot of that came from our prior knowledge of
PTS conditions as to what were some reasonable times
to pick such that we would probably put the sequence
into different bins. We then picked those accordingly,
came up with operator failure probabilities for the
various times, and had to account for dependencies, of
course.
But at some point we said we are going to
assume if they don't do it by this time that they will
never do it and, therefore, that would be the worst of
conditions and came up with a probability of the
scenario actually going that far.
The other point I wanted to make -- the
last two points is, again, for the most part modeling
uncertainties were really not quantified per se. In
other words, we built a model of the plant response in
terms of the various statuses of plant equipment, what
initiators could occur, whether operator actions
occurred or not, etc., and we said that's our
representation of the world.
It's a binary model for the most part as
PRA normally is. However, having said that, what this
third bullet is meant to imply here is that sometimes
we did have to worry about something additional that
is not normally treated in a typical PRA model of the
world, such as when the SRV recloses and, therefore,
could potentially repressurize the entire system.
We did go and address that as, if you
will, a final step in the process. The model was
initially built without worrying about the timing of
the SRV reclosure. We just calculated what is the
probability it will reclose. Then when we recognized
that was a high enough frequency to worry about, we
went back and addressed what would be the timing of
that reclosure.
What is the probability it would reclose
early versus what is the probability it will reclose
late because that represents a very different thermal
hydraulic response and we have to put those two events
into two different bins.
We did make modeling changes to treat
uncertainty of timing, for instance, and other factors
but we only did it where it looked like it was going
to be important, which leads to the final bullet.
Therefore, all other modeling
uncertainties were not explicitly quantified. When we
talk about the uncertainties ultimately of the
frequencies that are passed on to the PFM portion of
the analysis, much of the modeling uncertainties are
not really quantified unless we deemed that it was
important to do so and then we adjusted the model and
came up with probabilities to try to address that
aspect of the modeling uncertainty.
MEMBER BONACA: So you really leave it to
the PFM analysis to make a decision whether or not a
sequence should be eliminated? What I mean is that
the same input that -- here you are using the
frequency for the sequence and then the sequence is
input to the thermal hydraulics that comes up with a
certain profile of pressure and temperature versus time.
MEMBER BONACA: And that may say for this
transient the probability is zero or very low so,
therefore, you eliminate the sequence. That is really
how you go through the logic process.
MR. KOLACZKOWSKI: That is true.
Okay. Now, the other aspect I mentioned
was even given the model we often now calculate the
sequence frequencies and there are uncertainties
associated with that. I want to address a little bit
about how we look at that from an uncertainty standpoint.
Again, just a reminder that each scenario
is an interaction of essentially an initiating event
along with the status of various equipment and along
with the status of various operator actions which may
or may not occur, or may occur at some time.
Therefore, you can think of each scenario
-- oh, by the way, the model, therefore assumes that
those events are, in fact, random events. We don't
know what initiating event is going to happen, when
it's going to happen, although we do try to calculate
a rate of its occurrence on a per year basis, for example.
Again, we model the various states of the
mitigating equipment. But in terms of when that
initiating event occurs, how the mitigating equipment
is going to act, whether it's going to fail that
particular time or whether it's going to succeed, that
is treated as a random event with some failure
probability, some failure rate associated with the
occurrence of that failure.
The same thing with the operator actions.
Think of it as another component of the system with
some failure rate, and the operator may or may not
perform
the action quickly or in a delayed fashion or
whatever. That is all treated as random events.
Therefore, the occurrence of each scenario
is a random event. There are many ways to challenge
the vessel from a PTS viewpoint. We talked about
180,000 sequences from benign to various serious
challenges. Which challenge is actually going to be
the next one we don't know.
The occurrence of each scenario is really
a random event and think of it as nothing more than
the probability -- excuse me, than the multiplication
of the frequency of the initiating event which is a
random event times the probability of the equipment
response times the probability of the operator actions
all of which are random events so the scenario ends up
being a random event.
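[The multiplication just described reduces to a one-line point estimate. The numbers below are hypothetical placeholders, not values from the analysis.]

```python
# Point-estimate sketch of a scenario frequency as the product described:
# the initiating event frequency times the equipment and operator response
# probabilities. All values are illustrative.
f_initiator = 0.5      # initiating events per reactor-year
p_equipment = 1.0e-2   # probability the mitigating equipment fails
p_operator = 5.0e-2    # probability the operator action is not performed

f_scenario = f_initiator * p_equipment * p_operator  # per reactor-year
```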
I want to jump to the next bullet first.
Therefore, the various scenarios and their frequencies
are really characterizing, if you will, the aleatory
uncertainties, the randomness of how we may get a PTS challenge.
You will see later that what Terry and the
Oak Ridge folks do is that they sample from each one
of those sequences to ultimately come up with this
combination of sequence frequencies combined with the
conditional vessel failure probability to come up with
a distribution for what is the thru-wall crack
frequency. They are sampling from each of those
possible scenarios in terms of the way a PTS challenge can occur.
When they are doing that, what they are
really doing is they are quantifying the aleatory
aspects, the randomness of how PTS challenge can
occur. Those scenario definitions and each one of
those frequencies really represents the aleatory
aspect of the uncertainty.
Now, along with the fact that each
scenario is this multiplication of the frequency of
initiating event and so on as you see in the equation
there, obviously what we're trying to do is predict
what we think is the true failure probability of the
equipment, what is the true failure probability of the
operator, etc.
Therefore, what is the uncertainty on that
"failure rate" or that failure probability. There we
are really addressing the epistemic uncertainties. We
don't really have the knowledge to know what the true
failure probability is for a turbine by-pass valve to
stick open.
Nevertheless, we make an estimate as to
what that is and we try to assess an uncertainty bound
as to what that probability is. When we are doing
that, we are addressing the epistemic uncertainties.
Those epistemic uncertainties, coming to
the last bullet, are indeed propagated through the
analysis using a Latin hypercube sampling approach.
Essentially when we go to calculate the
frequency of a scenario, we are putting in
distributions for each one of those inputs in that
equation where that distribution represents the
epistemic uncertainty on that probability or on that
initiating event frequency.
Then we sample from those distributions to
get a distribution out as to what the frequency of the
scenario is. At that point what we're doing is
propagating the epistemic uncertainties through.
Unless there are questions on this, I'll
move on to the next slide.
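[The Latin hypercube propagation just described can be sketched as follows. The input distributions, ranges, and function names here are illustrative assumptions, not the actual analysis inputs.]

```python
import random

def latin_hypercube(n, inverse_cdfs, seed=0):
    """Draw n Latin hypercube samples: for each input, take one value from
    each of n equal-probability strata of its distribution, then shuffle
    so the inputs pair up randomly."""
    rng = random.Random(seed)
    columns = []
    for inv_cdf in inverse_cdfs:
        strata = [(k + rng.random()) / n for k in range(n)]
        rng.shuffle(strata)
        columns.append([inv_cdf(u) for u in strata])
    return list(zip(*columns))  # n tuples, one value per input

# Hypothetical epistemic distributions, expressed as inverse CDFs.
inv_f_init = lambda u: 0.1 + 0.9 * u        # initiator frequency (/yr)
inv_p_eq = lambda u: 10 ** (-4 + 2 * u)     # equipment failure probability
inv_p_op = lambda u: 10 ** (-3 + 2 * u)     # operator failure probability

# Propagate: each sampled input set gives one sampled scenario frequency,
# so the output is itself a distribution rather than a point value.
draws = latin_hypercube(100, [inv_f_init, inv_p_eq, inv_p_op])
freqs = sorted(f * pe * po for f, pe, po in draws)
median_freq = freqs[len(freqs) // 2]
```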
Okay. That's a quick overview of how the
uncertainty is being handled. Now, I do want to
address just briefly what is different between this
work and the previous 1980s work. What are sort of
the dominant things that are different.
What you see, anything here in the red --
and I apologize for those that have a black and white
copy of this, but what you see in red are those things
that tend to -- would tend in general to increase the
PTS risk from what was done in the earlier work.
What you see in green are those things
that we have changed from the earlier work that would
tend to decrease the risk from the early assessment.
You can see here, and I mentioned before that pretty
much the initiating events and, for the most part, a
lot of the scenarios that we have modeled were in
large part covered by the earlier work.
However, we did do what is characterized
here as a slight extension of some of the possible
scenarios. In fact, you will find out that the
example one we're going to go through, one of the more
dominant scenarios, is a scenario that was not
included in the original analysis.
There has been a slight expansion of the
scenarios that we analyzed relative to the '80s work
and also some of the treatment of the support systems
like instrument air and component cooling and that
kind of stuff.
Those in general tend to increase the
risk. The more scenarios you add, the risk is going
to go up. If you fail to analyze a particular type of
scenario before and we're analyzing it now, that's
going to generally add to the risk.
On the other hand, by using the latest
equipment failure probabilities, initiating event
frequencies, I know the staff is well aware of the
fact that if we look at the number of plant trips that
we're having these days on a per year basis versus
what we were having back in the '70s, we used to have
six A transients a year that could potentially lead to
overcooling events.
Now most plants are operating at half a
trip per year kind of rate. Obviously that kind of
information being reflected in the current analyses is
tending to decrease the overall PTS risk.
The detailed HRA. I pointed out the fact
that in the early work for the most part there was
either very simplified treatment of the human or, in
fact, in many scenarios, little to no treatment of, or
credit given to, operator actions. We are doing
much more now to credit the operator actions where we
think it is appropriate to do so.
I think we went through a very detailed
expert elicitation and licensee involvement process to try
to come up with best estimates and error bounds on
those human failure probabilities for the scenarios.
You can see some features of the HRA process here
which I won't go through unless you have questions.
Again, the last sub-bullet there,
consideration of acts of commission that could
exacerbate cooling, again, those things would tend to
increase the risk where you include that.
Very important point, much more binning
than we ever did before. I think that is something
that should not be overlooked. Never mind the values
and the numbers that went in and so on and so forth.
Just the fact that rather than having to
put everything into 10 bins, we could put things into
100 bins has removed a lot of the conservatism in the
original analysis because we could do a much
finer scenario definition to bin process such that the
bin really represents a few scenarios that are really
quite representative of that bin and didn't put in,
for instance, small main steam line breaks into a bin
that has large steam line breaks and then treat them as
a large steam line break. We are able to avoid those
kinds of issues.
Obviously the uncertainty analysis itself,
just doing it probably in general does tend to
increase the risk -- I've got to be a little careful
here -- in that from a best estimate point of view
when you do the uncertainty analysis you are
accounting for those tails out there that do tend to
bring the mean up above where it would be if you were
just doing a best estimate point estimate analysis.
You do have to keep in mind that the
original '80s work, however, was conservatively done
so from that aspect, I guess you would make the
argument that the uncertainty analysis probably isn't
increasing the risk, if they had added too much
conservatism into the original analysis. But I'm just
saying from
a best estimate point of view if you were just doing
a point estimate analysis, you do have to keep in mind
if you do a full-blown uncertainty analysis, you are
going to get some tails out there which are going to
tend to increase your best estimate of the mean a
little bit compared with if you were just doing a
point estimate mean kind of evaluation. I've got to be
a little bit careful about how I characterize that
last point.
are some of the major differences, I think, between
the work that we've done on the PRA aspect of this
versus what was done in the early work.
Is that my last slide or is there --
MR. KIRK: You have one more.
MR. KOLACZKOWSKI: This is just meant
again to be a cartoon as to what the major products of
the PRA process are that then goes into the rest of
the analysis. There are really two things that are
coming out, although one of them is really contributed
by the T-H folks, and that is what is coming out of
this whole process is a scenario that is being
described by a set of thermal hydraulic curves.
Then there is a frequency associated with
that bin or that scenario, if you will. That
frequency is being described by a distribution, by a
histogram, which is representative of the epistemic
uncertainty in the frequency of each one of those
You see there we basically describe that
distribution using 19 quantile levels. There was also
a 95 percent confidence interval put on each quantile
value. That information along with the T-H curves
that represent that scenario is what was passed on to
the fracture mechanics people for them to do their
Last slide.
CHAIRMAN FORD: Sorry. I'm being a bit
slow here.
MR. KOLACZKOWSKI: That's okay.
CHAIRMAN FORD: You see the density, that --
MR. KOLACZKOWSKI: That's a probability
distribution function -- that's our belief on
what the true frequency of the scenario is.
CHAIRMAN FORD: So that distribution curve
is for a specific --
MR. KOLACZKOWSKI: T-H bin. Remember, it
may have multiple scenarios in it.
Nevertheless, we think that they are all so similar
that they can be represented by one set of T-H curves.
Then essentially what you have to do, if you will, is
add up the frequencies of all the sequences that are
in that bin to get the frequency of that bin.
The frequency of that bin is ultimately
represented by a distribution which we then broke into
19 quantiles, etc., for descriptive purposes so that
the fracture mechanics people could then go and sample
from this distribution at the same time that they're
sampling from the output of the PFM code to put
together to get a thru-wall crack frequency in its
distribution. I don't know if I cleared that up or not.
That is the uncertainty on what the true
frequency of the bin is. You can describe that
histogram by a mean, a 95th percentile, a 5th, and so
forth, and that's our description of the frequency of
the bin.
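[The 19-quantile description of a bin's frequency distribution can be sketched as below. This is a generic empirical-quantile routine for illustration, not the project's actual code.]

```python
def quantiles_19(samples):
    """Summarize a sampled frequency distribution by its 5%, 10%, ..., 95%
    empirical quantiles (19 levels), the kind of histogram description
    passed on to the fracture mechanics analysts."""
    s = sorted(samples)
    n = len(s)
    # Map each quantile level q/20 to the corresponding order statistic.
    return {q / 20: s[min(n - 1, int(q / 20 * n))] for q in range(1, 20)}

# With 100 evenly spaced samples, the median level maps to the 50th value.
summary = quantiles_19(range(100))
```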
MR. KOLACZKOWSKI: I guess a couple of
things I would like the committee to walk away with
from the PRA aspect of this in terms of the modeling we did
and the uncertainty treatment, I guess these are the
major points.
We feel that we have modeled all the
relevant initiators, functions, and equipment of
concern that, through their status, could represent
some sort of a potential PTS challenge. I keep saying
this, I know, but operator actions play a very key
role in arresting many overcooling events.
To model the situation without operator
action would obviously not do any justice at all and
we would be back to the '80s work that gave very
little credit to operator actions. I think we have
tried to reflect that credit appropriately based on
current day standards of procedures, training, and so
on and so forth.
CHAIRMAN FORD: Just to make sure I
understand, if you go back to VG23, I can understand
what you have done in VG23.
CHAIRMAN FORD: I can understand the
epistemic aspects. I can understand the physical
reality of what is happening there. The aleatory
uncertainties you don't mention at all. How much is
that going to affect your conclusions? I'm going to
assume these are things that are completely random so
how do you take that into account? Was it just --
MR. KOLACZKOWSKI: If I can go back to the
overall diagram which would be like No. 7 and 13.
When the [CPF] * [fr] takes place, remember we are
providing to that circle sequences and their frequencies.
MR. KOLACZKOWSKI: Each frequency has a
distribution which is representative mostly of the
epistemic uncertainty and what the true frequency is.
Although, quite frankly there is some aleatory
probably buried in that that is not separated but is
largely represented in the epistemic uncertainties.
Each sequence, and there's a lot of sequences, we
don't know which sequence is going to happen that will
represent a PTS challenge.
A collective set of sequences is
representative of the randomness of how the next PTS
challenge is going to occur. The PFM folks when they
do this combination in the circle, they are picking a
frequency value from every one of the sequences that
could represent a PTS challenge when they do this
combination of [CPF] * [fr].
That is the place where the aleatory
aspect of this, that is, the randomness of how the
sequence is going to proceed, gets factored into an
estimate of the yearly thru-wall crack frequency.
Does that answer your question?
So that's where the aleatory is really --
the aleatory really doesn't get treated until
essentially the last step in the process but we are
propagating the epistemic all the way through all the
time. Does that help?
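[The final [CPF] * [fr] combination described here can be sketched as below. The bin definitions, distribution parameters, and CPF values are hypothetical placeholders.]

```python
import random

rng = random.Random(1)

# Hypothetical T-H bins: each carries a sampler for the epistemic
# distribution of its frequency (per year) and a conditional probability
# of vessel failure (CPF) from the fracture mechanics side.
bins = [
    {"freq": lambda: rng.lognormvariate(-14, 1.0), "cpf": 1e-2},
    {"freq": lambda: rng.lognormvariate(-11, 1.5), "cpf": 1e-4},
]

# Each pass samples a frequency from every bin and sums CPF * fr over all
# bins; repeating this builds up a distribution for the thru-wall crack
# frequency rather than a single point value.
twcf_samples = sorted(
    sum(b["cpf"] * b["freq"]() for b in bins) for _ in range(1000)
)
twcf_mean = sum(twcf_samples) / len(twcf_samples)
```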
MR. KOLACZKOWSKI: I think I was on that
last slide if there are no other questions. Again, I
talked about we think the model is a pretty good
representation. Again, modeling uncertainties for the
large part, though, are not quantified except for
where it's important.
We did at the end of the process have to
go in and adjust the model to treat those, but the
frequency uncertainty, the epistemic uncertainties are
propagated all the way through. We determine or put
a distribution on each one of the inputs, get a
distribution on the output, propagate that all the way
through to get a distribution on the frequency output,
thus described ultimately as a histogram.
That is sort of a summary, if you will, of
how the uncertainty is treated. The aleatory
ultimately gets captured in that very last step. I
think that hopefully will also become clearer as we
get to that portion of the presentation. Does that
help? I think that ends my --
MR. KIRK: That's your last slide.
MR. HACKETT: Any other questions for Mr. Kolaczkowski?
CHAIRMAN FORD: When we go through Oconee
we'll go through this again step by step.
MR. HACKETT: Right. Exactly.
MR. KOLACZKOWSKI: This is sort of an
overview of the generalization. As we get through the
Oconee and the example, hopefully some of this will
even crystalize some more.
MR. HACKETT: As Peter mentioned earlier,
we are going to look at another departure from our
plan here. To accommodate Dr. Wallis' arrival we are
going to now go into the probabilistic fracture
mechanics aspects of the program in more detail. Mark
and I talked and hopefully we can try to get at least
most of that in before the lunch break. That will be
our goal anyway.
MR. KIRK: It depends on how many
questions Dr. Shack has.
If you go to view graph 60 in your
handout, that is the beginning of the PFM part of the
talk. So again, you see the most frequently used
slide in the pack. We are now going to focus on
expanding the probabilistic fracture mechanics box.
As I indicated, behind each of these blue
boxes is sometimes a frightening level of complexity.
Now what I am showing you is what is inside the PFM
box. This is something like those Russian dolls. You
keep taking them apart and you get more and more and
more. You can either lose yourself or make a career
in it, however you want to think about it.
This slide illustrates the first doll
inside the box marked PFM. It's got both a material
resistance side which is shown down at the bottom of
the slide, and applied driving force side which is
shown off to the left. We just worked through some of
the steps in this. I'll be discussing the irradiation
shift model, index temperature model, and fracture
mechanics model in more detail.
We start off with data concerning
chemistry, fluence, and, of course, temperature
estimates from thermal hydraulics. That feeds into
our irradiation shift model. From that we estimate what
the effect of irradiation at a particular temperature
for a given length of time is on a material having a
particular chemistry.
That tells us how much irradiation damaged
our material. We add that to an un-irradiated index
temperature, which tells us where the toughness data
was before irradiation started, and that gives us an
irradiated index temperature which we combine with a
reference fracture toughness curve to get our fracture
toughness resistance. Now we know in very general
terms what the material can take.
To define the degree of challenge, we go
over to the stress intensity factor model which
combines thermal hydraulic inputs, design variables,
physical properties like modulus, Poisson's ratio,
things like that.
And, of course, flaw data to get the
applied stress intensity factor. A comparison of
those two which is done in the code -- we'll show you
the mathematics of that probably tomorrow -- allows us
to estimate our probability of crack initiation and
probability of thru-wall cracking.
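[The comparison just outlined can be sketched deterministically as follows. The exponential toughness curve uses the familiar ASME-style KIc reference curve form in ksi-root-inch and degrees F, but the coefficients and the 150-degree shift are used purely for illustration; this is not the FAVOR implementation.]

```python
import math

def irradiated_index_temp(rt_ndt_unirr, shift_f):
    """Irradiated index temperature: the un-irradiated RTNDT plus the
    irradiation-induced shift (both in degrees F)."""
    return rt_ndt_unirr + shift_f

def crack_initiates(k_applied, temp_f, rt_ndt_f):
    """Crude deterministic check: does the applied stress intensity factor
    exceed the reference fracture toughness at this temperature? The
    coefficients follow the ASME KIc reference curve form (ksi*in^0.5)."""
    k_ic = 33.2 + 20.734 * math.exp(0.02 * (temp_f - rt_ndt_f))
    return k_applied > k_ic

# A severe thermal shock (high applied K, low downcomer temperature) on an
# embrittled vessel initiates; a mild challenge at high temperature does not.
rt = irradiated_index_temp(0.0, 150.0)   # hypothetical 150 F shift
severe = crack_initiates(120.0, 100.0, rt)
mild = crack_initiates(30.0, 400.0, rt)
```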
Going through this process, we focused our
model development and uncertainty efforts in the areas
that I've highlighted now, for those of you looking on
the screen, in yellow and in orange. It is relevant
to point out here, because I don't have another
opportunity to point it out, that certain things in
this model have been taken as deterministic.
For example, design variables and physical
properties. This is not to say that the elastic
modulus of the material is not variable. It is to say
that when compared with other variabilities like
fracture toughness and embrittlement, it's about as
constant as the elevation in Champaign, Illinois, if
you've ever been there.
Design variables like the vessel diameter
are also treated as deterministic so we have made some
judgements at the beginning of this model and decided
to treat some things deterministically.
What I would like to do now -- there are two
parts to this presentation. There's a discussion of
the model development and uncertainty in the fracture
toughness and embrittlement model that is highlighted
in yellow. I'll go through that first. Then I'll
discuss the uncertainty treatment in the flaw data
which is highlighted in orange.
Just to benchmark ourselves for the
fracture toughness and embrittlement models in terms
of where we are and what we need to do, we start off
with our existing toughness model, our existing
embrittlement model, what we use today in 10 CFR 50.61
to estimate the material state of the plant.
In all of this, nontoughness data, meaning
data from Charpy specimens and NDT specimens, is assumed
to represent toughness data. When we get into
uncertainty space, that presents us with some very
interesting challenges. That assumption has led to
uncertainties being treated implicitly and uncertainty
types between aleatory and epistemic being mixed into
a spaghetti bowl.
We needed to take that all apart in order
to get this into a PRA framework because of the
constraint that we stated at the beginning, or the
intended constraint that we would like to come out of
this with a revised PTS screening criteria that still
relies on NDT and Charpy data. That was the hope. We
believe we have achieved that but in order to deal
with uncertainties, we need to take this apart yet again.
This slide discusses the constraints and
fundamental assumptions that we have gone into the
fracture toughness model with. One, which I stated
before, is that we would like to come out of this
still allowing the licensees to assess the state of
their plant based on the nontoughness data that they
now have.
We need to somehow express that
nontoughness based model in the PRA context. Also we
retained the linear elastic fracture mechanics basis
of FAVOR. It would, of course, be possible to
construct a version of FAVOR based on elastic plastic
fracture mechanics. However, the nature of the PTS
challenge rather says that a linear elastic model is
appropriate so we've stuck with that.
Looking now at the process we've used in
the toughness embrittlement models for building the
models and characterizing uncertainty, I would like to
just mention at this point, or I should say
acknowledge the significant impact that the industry
provided here through the EPRI MRP and their funding
of their contractor, Marjory Detician, who was largely
responsible for developing this whole process that
enabled us to do a very good job of characterizing the
uncertainties in a way consistent with PRA. Without
that help from EPRI and Dr. Detician this would look
much different.
So what we started with were two things.
One was what we've been calling a root cause diagram
which is really nothing more than just a graphical
description of a mathematical process. The benefit
here is it allows us to depict very clearly what our
current process is.
For those of us that thought we understood
the 10 CFR 50.61 process and said, "Oh, that's easy,"
and then a month later said, "Boy, that was
educational," it really allowed us to identify very
clearly where the uncertainties were, or are I should
say, where the judgements are.
If you can't identify them, you can't
quantify them. That was our starting point. That
told us where we needed to work. But we've also
acknowledged at the start that because we are trying
to stick with nontoughness based data to predict
toughness data, we've got an estimate.
We've got a model and to assess
uncertainty relative to that we need something else.
We need something to compare to. What we focused on
developing is what we referred to as physically
motivated best estimate models.
If you only had data and you've got
nothing else, then you've got no way to tell if your
data is right or wrong. What we tried to do was to
use our physical understanding of deformation and
fracture to tell us how the data should behave and
then calibrate those models based on available data.
This also allowed us considerable insight
into classifying uncertainty type as being either
aleatory or epistemic. It really made quite clear
those uncertainties that were just inherent properties
of the material -- no matter how much funding you had
and how well you did your experiments, you would never
get any better -- and those situations where the reverse
was true.
When we combined these two things, the
identification of uncertainties, our ability to
classify them relative to a physical model and,
indeed, our ability to quantify them relative to that
model, that provided us with a means to account for them.
Down there in the red print on the screen
you see sort of our mantra, "Fracture toughness data
is truth. Fracture toughness data is truth." We kept
getting back to that, that we certainly use the
physical understanding to get insight into how the
data should behave and that in many ways told us how
we should be looking at the data but we kept
referencing back to the idea that we are trying to
predict KIc data or KIa data.
Putting all this together enabled us
to recommend a FAVOR procedure for both a
model and an uncertainty treatment for all of these
key parameters in the toughness equation, the RTNDT
index temperature, the T30 shift, the RT arrest shift
which gives us the distance between the initiation and
the arrest curves and, of course, the KIc and KIa curves.
What I'm going to do now, and if my memory
serves me, we briefed the committee in detail about
this before, is to just go through and hit the high
points on each of these models. We have some backup
slides that we can refer to if necessary. Certainly
there is the draft NUREG that, I believe, was passed
on to you.
There are three sequences of a few view
graphs here that I'm going to step through. The first
step along the way is the model of initiation fracture
toughness. As the schematic on your screen shows,
there are two parameters in this initiation model.
There's, of course, the toughness value
KIc, and then there's the index temperature RTNDT. We
need to have a model and quantify uncertainty in each
of these. As I pointed out, to do that we need some
independent arbiter of truth because we recognize that
RTNDT is most certainly not truth.
So this slide depicts the end result of
our best estimate model. Our understanding of the
physics of deformation and fracture suggests that we
expect, and indeed observe, a common temperature
dependence in the variation of toughness with
temperature that is common to all ferritic steels.
That temperature dependence depends only
on the lattice structure and so we don't expect it to
change with chemistry. We don't expect it to change
with the radiation in the range of exposures that we
get in reactor vessels.
CHAIRMAN FORD: Sorry. Did you say you
wouldn't expect it to change with chemistry?
MR. KIRK: No, not unless the chemistry
makes the steel not ferritic. As long as -- I'm
sorry. As long as the lattice structure is body
centered cubic.
CHAIRMAN FORD: There's a master curve of
KIc versus temperature for all --
MR. KIRK: For all ferritic materials
without question. If you change it from being
ferritic, it won't follow that master curve.
MR. HACKETT: I guess we don't want to go
too far with this. Nothing, I guess, in science is
without question.
I think what Mark is going
to go through is a lot of empirical data backing up
those statements and a lot of analyses.
MEMBER SHACK: It does shift with
transition temperature.
MR. KIRK: Yes. The effects of
irradiation, and I get to that here, when you look at
what irradiation does to the material it, of course,
hardens the material which leads to a shift in the
fracture toughness.
MEMBER SHACK: But even without
irradiation you have a shift.
MR. KIRK: You have that index temperature --
MEMBER SHACK: So the master curve is
indexed to a temperature.
MR. KIRK: That's right.
MEMBER SHACK: So all ferritic steels are
not the same.
MR. KIRK: No. Absolutely not, but once
indexed to a temperature, they all follow a common
variation of toughness with temperature.
MEMBER SHACK: Yes. That's correct.
MR. HACKETT: And maybe to pursue it
further, because I can see Peter is still musing over this,
and it's been something that has been worked on for a
long time. I think Mark answered it correctly. The
chemical composition influence would be through its
impact on crystal structure.
Obviously there are influences of the
chemical composition on the end state crystal
structure, but for most ferritic steels under these
conditions, that variability is not there. It could
be in other circumstances but not in the particular
problem --
MR. KIRK: Not in these conditions. So
what we see is certainly for the range of conditions
that we're interested in here, the steels, the
exposure conditions, the ranges of chemical release,
our physical understanding suggest that we should
expect to see, as we said, once indexed a common
temperature dependence and, indeed, a common scatter
to all ferritic materials of interest.
We find that the effects of irradiation,
since they don't affect the crystal structure, and
since they don't affect the micro-defects that are
leading to the scatter in the KIc or KIa data, we
expect irradiation to simply produce a shift in that
transition temperature as is illustrated by the two
graphs which have probably been shrunk too small.
These physical expectations are backed up
by, indeed, mountains of data. It's been said that
the master curve hasn't yet met a ferritic steel that
it doesn't like and I personally haven't found one.
I should also note that a couple of years
ago I made it a quest to find one believing, as any
good experimentalist would, that life just can't be that
simple. I haven't found one yet. We've adopted the
master curve method, with its T0, as the best estimate
against which we will compare RTNDT.
I'm sorry, Peter. Go ahead.
CHAIRMAN FORD: I must admit I'm still
pondering over this master curve business and its
invariance. If you go back to VG 64, you've got on
the left-hand side two examples of two steels, one
from Midland and one from HSST program where they are
markedly different from this master curve.
MR. KIRK: Well, actually, that's the KIc
curve, but go on. That's the KIc.
CHAIRMAN FORD: My point is if you just
look at it as a reference curve, those gold lines you
see there, the same curve.
MR. KIRK: Yes.
CHAIRMAN FORD: Big difference. Two sets
of steel. Different T0 values, which is fine. You
would expect a difference between the half-T and the
3T or 4T specimens. Big difference.
MR. KIRK: In fact --
CHAIRMAN FORD: What physically is the
reason for that?
MR. KIRK: I think -- maybe I'm not
understanding what you're saying but, in fact, if you
pass the master curve through each of those data sets,
which would mean you would have -- if you assume that
was the master curve, which it isn't, but if you
assume that it was and you indexed it to T0 and you
then did a statistical test to see how many were above
the line and how many were below the line, and your
statistical expectation would be 50 percent above and
50 percent below, you would find no reason to reject
the hypothesis.
CHAIRMAN FORD: So T0 on your slide VG 69
is not the same as RTNDT.
MR. KIRK: Absolutely not. No.
MR. HACKETT: I was going to add the
Midland case was particularly an extreme example
illustrating the problems you get into with RTNDT
indexing which is one of the things we had attempted
to remove from the analysis.
MR. KIRK: Yes. Certainly T0 is not RTNDT.
If that helps, I should have said it a lot sooner.
CHAIRMAN FORD: I thought I was a
metallurgist. I'm really showing my ignorance here.
Physically what is T0?
MR. KIRK: A committee decision. It's
physically nothing. It's simply the temperature at
which the median toughness is 100 Mega Pascal root
meter. That sounded flip but it's the God's honest
truth. The intention was you wanted to
pick a temperature that is sufficiently above the
lower shelf. Well, first off, that you're not getting
into twinning behavior. Secondly, that you've got
enough slope that you can make a reliable experimental
measurement, and you want to get it far enough off the
upper shelf that you're not into tearing. It could
have been 80 or it could have been 120. Why we picked
a toughness of 100 Mega Pascal root meter to combine
with a reference thickness of one inch I'll never know.
MR. HACKETT: I think in short we could
devote an entire meeting to the master curve easily.
CHAIRMAN FORD: I'm rapidly coming to that
conclusion.
MR. KIRK: Okay, but it's -- while we are
on this point, the benefit of the master curve method,
relative to RTNDT, of course, is that here the index
temperature is defined consistently based on toughness
data for all steels. The T0 is always the temperature
at which the median toughness is 100 Mega Pascal root
meter.
Forgetting all the physical arguments, if
you wish to, it's no huge surprise that T0 indexes
toughness data better than RTNDT which is clearly not
toughness. The benefit of using T0, in addition to
all the physical underpinnings, is that it's
rigorously consistent for each and every material.
You know the T0 and that's the point
that's made at the bottom bullet is consistently
defined for all steels and so it corresponds to the
position of the toughness data each and every time,
not to some representation of the data that is
independently derived.
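The indexing idea being discussed can be made concrete in a short sketch. It assumes the published ASTM E1921 median master-curve form, median KJc = 30 + 70·exp[0.019(T − T0)] in MPa√m with temperatures in degrees C: T0 is, by definition, the temperature at which the median toughness equals 100 MPa√m, and two steels with different T0 values trace out the same shape once indexed.

```python
import math

def median_kjc(T, T0):
    """Median fracture toughness (MPa*sqrt(m)) from the ASTM E1921
    master-curve median form, for temperatures T, T0 in deg C."""
    return 30.0 + 70.0 * math.exp(0.019 * (T - T0))

# By construction, the median toughness at T = T0 is exactly 100 MPa*sqrt(m):
assert abs(median_kjc(-50.0, -50.0) - 100.0) < 1e-9

# Two steels with different T0 values follow the same curve once indexed:
for dT in (-40.0, 0.0, 40.0):
    assert abs(median_kjc(-70.0 + dT, -70.0) - median_kjc(20.0 + dT, 20.0)) < 1e-9
```

This is only the median-curve shape; the scatter about it, and its treatment in FAVOR, is discussed separately below.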
CHAIRMAN FORD: And this is true only for
LEFM conditions?
MR. KIRK: No. Actually it's true for
small scale yielding conditions which can go
considerably beyond LEFM because the LEFM limit in E399
was constructed for mathematical reasons so that
linear elastic fracture mechanics would apply
irrespective of material.
Actually, the material that set the limit
was titanium. If the limit had been -- if the
coefficient in the KIc validity requirement, which is
2.5, had been set on ferritic steels, it would have been
set at about 1. There are a whole host of conditions
where all the data points satisfy small scale yielding
but you need EPFM to describe them, and the master
curve works under those conditions.
MR. KIRK: Next slide. Looking at RTNDT,
and this is where my eyes got wide open by the process
that Marjory was able to bring to the table because,
like I said, I thought I understood this. Here is
where I apologize for the font being too small. The schematic
on the right-hand side of your screen shows the 10 CFR
50.61 process for determining RTNDT.
I don't wish to go into the details here
but suffice it to say there's a preferred procedure,
the ASME NB-2331 procedure, where you test Charpys, test
NDTs, and compare them. Then the NRC has adopted and has used
for quite sometime two alternative procedures, one
involving the use of Charpy data only which is
necessary to get RTNDT values for plates.
And a second alternative involving the use
of generic data which is just to say if for some
reason you didn't happen to measure the RTNDT of your
limiting weld, oh gosh, darn, here is a generic
distribution that you can go to and use a value.
So there are three different ways to get
RTNDT and none of these ways is terribly prescriptive
at all. Even if you go through the preferred
procedure of NB-2331, it says you test Charpys, you
test NDTs, and you compare them, but it's noticeably
silent on how many Charpys. There's no statistical
analysis of data and so on and so on and so on.
There are many, many engineering
judgements in this process. I think I said a myriad
of methods and transition temperatures have been used
to define RTNDT. If you go up to the MTEB 5.2
procedure, which the NRC adopted in order to have RTNDT
values for plates, you see a whole host of index
temperatures used, T30, T45, T100, 30 degrees off the
upper shelf. All of these things go into the morass
that we call RTNDT.
Now, having said that -- and it's been
noted to me by my colleagues that I'm hypercritical of
RTNDT, maybe because I know there is something better.
Having said that, RTNDT when combined with the ASME KIc
curve, because of the procedures you have to go
through that's reflected in all these boxes to get an
RTNDT at every point along the way, you are constrained
to make only a conservative judgement. Again, in my
quest to find data that violated the master curve,
I've also been on a quest to find data that violates
the KIc curve when indexed to RTNDT. I can personally tell
you that that data doesn't exist.
The methods were set up in the early '70s
with the intention of always producing a bounding KIc
curve, and it does. It just simply works. All of
these things taken together strongly suggest that the
bulk of the uncertainty in RTNDT is epistemic.
If we had more prescriptive procedures, if
we had only one procedure, if NDT and Charpy actually
meant anything relative to toughness data -- I want to
get Bill laughing out loud if I can -- all of these
things would serve to reduce the uncertainty in RTNDT
and those are all clearly lack of knowledge
uncertainties. For the purposes of this analysis,
it's pretty clear that the RTNDT uncertainty should be
modeled as being epistemic.
Actually, what I would like to do is if
you would skip ahead to view graph 73 since we just
talked about RTNDT. If we go to view graph 73 I can
show you what we've done in terms of treating the
uncertainty in RTNDT data. Up at the top again is the
mantra, "Toughness data is truth."
What we did was simply to take those
datasets for which we had a sufficiently well
developed transition curve to have a transition curve
and plotted those data as measured relative to an RTNDT
indexed KIc curve. We then slid the curve which is
represented here. It was a little more elaborate than
this because we employed a statistical treatment, but
we basically slid the curve until it bumped into the
first data point.
We called that value delta RT and we
adopted that delta RT value as a measure of the
epistemic uncertainty in the RTNDT data. That's how
far off the KIc curve, which was intended to be a
bounding curve, was from the data it was intended to
bound.
What you see at the bottom of the graph is
a statistical representation of all of that data. We
had enough data for 18 different RPV steels, both
plates and welds. I think there was a forging or two
in here, to define delta RT values for those different
materials. What this distribution tells you is that
RTNDT at its worst is accurate -- I'm sorry, at its
best is accurate.
At its worst it places the KIc transition
curve perhaps 175 degrees too high which is sort of
getting to the Midland situation. And, of course,
there is a range of situations in between.
What we've done in the FAVOR code, and
this is by far the most significant change in terms of
having an effect upon the results, perhaps not the
most significant scientific change but the most
significant change in terms of changing the numbers is
the realization that, oh, RTNDT isn't the transition
temperature, the toughness data is and here's how far
off it is. In our sampling procedure, which I'll show
you in a minute, what we do is we simulate an RTNDT value.
We then simulate a random number from zero
to one and that then tells us for that run how far off
RTNDT is. Like I said, it could be right, it could be
175 degrees too high. On average this works out to
about a 65 degree downward correction in RTNDT.
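The sampling procedure just described can be sketched as follows. The tabulated CDF points here are invented for illustration (the actual delta-RT distribution is the one fit to the 18-material data set in the draft NUREG); the mechanics, drawing a uniform random number and inverting the empirical CDF to get a downward RTNDT correction, are what the discussion describes.

```python
import bisect
import random

# Hypothetical empirical CDF of the epistemic bias delta-RT (deg F);
# the real distribution in FAVOR is fit to the measured data.
DRT_VALUES = [0.0, 20.0, 45.0, 65.0, 90.0, 120.0, 175.0]
DRT_CDF    = [0.0, 0.10, 0.30, 0.50, 0.75, 0.95, 1.00]

def sample_delta_rt(u):
    """Invert the empirical CDF: map a uniform draw u in [0, 1] to a
    delta-RT value by linear interpolation between tabulated points."""
    i = bisect.bisect_left(DRT_CDF, u)
    if i == 0:
        return DRT_VALUES[0]
    f = (u - DRT_CDF[i - 1]) / (DRT_CDF[i] - DRT_CDF[i - 1])
    return DRT_VALUES[i - 1] + f * (DRT_VALUES[i] - DRT_VALUES[i - 1])

def corrected_rtndt(rtndt, rng):
    """One realization: the simulated RTNDT minus a sampled epistemic bias."""
    return rtndt - sample_delta_rt(rng.random())

rng = random.Random(0)
draws = [corrected_rtndt(150.0, rng) for _ in range(20000)]
mean_correction = 150.0 - sum(draws) / len(draws)
```

With these (made-up) table values the average downward correction comes out in the neighborhood of the 65 degrees mentioned in the discussion, but that agreement is a property of the illustrative numbers, not a reproduction of the FAVOR result.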
MR. DUDLEY: Just a minor point. How much
better did you get with the three parameter fit than
a two?
MR. KIRK: I don't know.
MEMBER SHACK: Nobody uses three parameters.
MR. KIRK: It was a convenient fitting
function. One other thing to point out because I know
we've gotten this question in the past. The rules of
engagement for this analysis were to stick with LEFM
data because the basis of FAVOR is LEFM.
Having said that, doing that very heavily
restricted the data set that we could use, as you might
expect. Having said that, we have also constructed
this cumulative distribution function which is shown
on the bottom using EPFM data, using data that would
be valid by the master curve method.
What we find out is that that then gives
you something more like 70 materials to put on the
lower graph, of a much wider variety, and you get the
same function. I know some concern had been expressed
about that before. Now the inherent assumption is that
distribution represents everything. Admittedly it's
a leap to say that 70 represents everything but it's
a shorter leap than 18.
So in terms of a practical effect, that's
the big change in the initiation toughness model. The
other change, which is -- I'm sorry?
CHAIRMAN FORD: I'm struggling, I must
admit. There's a whale of a lot of information here
and I want to make sure that I'm not stamping something
as okay when I honestly don't know what I'm looking
at. What other peer reviews has this gone through?
MR. KIRK: Okay. I would be happy to
comment on that. The work that you're seeing
summarized here was developed by a team that involved
myself, NRC contractors, and indeed EPRI
contractors, as I pointed out.
We developed a draft recommendation that
has been passed around to members of the industry and,
indeed, members of our own contract staff, people like
John Merkle have reviewed this, Randy Nanstad.
MR. HACKETT: And Richard Bass.
MR. KIRK: Richard Bass at Oak Ridge
National Laboratory. This has been through -- in that
group I lost track of my revision count. We went back
and forth.
CHAIRMAN FORD: And how about the ASTM standard?
MR. HACKETT: The ASTM standard has been
-- I don't know, again, how many versions that went
through. Several of us were involved in that. Mark
has been more involved than most. It started in the
late '80s and refined into a standard in -- when was
it, '97?
MR. KIRK: Yes, ASTM has adopted the
master curve testing standard in 1997 so they provided
us with a method to measure T0. In that committee, as
you might expect, the notion of the universal curve
shape -- everybody is going, "What?" -- and universal
scatter -- "What?" -- was vetted through that committee.
Then following that in 1998 ASME adopted
a parameter called RT T0 which is simply T0 plus 35
degrees Fahrenheit as an alternative to RTNDT to index
the KIc curve. On the outside -- well, external to
this project the master curve itself has gone through
considerable peer review in both ASTM and ASME.
Here, to be honest, and this is where the
initial constraint came in, I said we didn't want --
well, we didn't want to, if we didn't have to, require
the licensees to make new measurements.
As we were discussing yesterday, when you
are dealing with uncertainty and, indeed, this is a
huge uncertainty, two things you can do. One is to
try to mathematically account for it which is what
we've done here. The benefit is that enables you to
use your old measurements. The detriment is you are
introducing that level of uncertainty.
The alternative would be to say, well,
let's go make toughness measurements and do this
better but that would send everybody scurrying back to
reconstitute Charpys and preprac them and test them
and hopefully find them in the hot cells.
That is certainly a further improvement
that can be made here. Indeed, I know a number of the
licensees are interested in that level of improvement.
That's not one that we've taken here.
CHAIRMAN FORD: Okay. All of these, as
you said at the very beginning, use a 95th percentile
to come up with the end result?
MR. KIRK: No. You're going back to the
graph that Ed showed. In this we have quantified --
well, first off -- you told me I was going to get in
trouble for that and you were right. We used the 95th
percentile for convenience. I think your comment
about a one sigma representation is one that I've
taken note of.
Throughout here we are defining
distributions left and right. In this presentation
you're probably not going to be able to keep track of
them. But what you'll find out, if you refer to our
draft NUREG, which is the basis for this, or to the
FAVOR theory manual, which shows you what actually got
in there, is that we only truncate the
distributions when it results in a physically
unrealistic prediction like a negative transition
temperature shift.
I mean, that's not relevant here, but for
example. No, we've put the full probability
distributions into the FAVOR code.
MR. HACKETT: I guess I would just add a
comment just sort of almost a philosophical basis
here. The buy-in you need for this project on the
master curve is really fairly limited, as Mark tried
to get to. I mean --
MEMBER SHACK: No. You need to believe
it's the truth.
MR. HACKETT: That's what I was coming to.
What you are really buying into in this project, and
that's why I just wanted to try and make sure the
record is straight on this thing, is that the master
curve and T0 are giving you a better representation of
the real material behavior than RTNDT for all the
variety of reasons we've been through.
Given that, there are whole other levels
of things you can do with a master curve concept. I
think it's also fair to say it's been an area that's
been guilty of some zealotry over time in selling it.
But it had to go through a vetting process which I
think it largely has been through in terms of ASTM and ASME.
But for application to this project what
you are really saying is that has given you better
representation of what is the truth for these ferritic
materials. Then we are making adjustments to take
that conservative bias out, as Mark mentioned, in the
RTNDT methodologies. I think that is really all it
boils down to for this go-round.
MEMBER SHACK: Well, that and that your 18
material curve augmented by your 70 is universal.
MR. KIRK: Yes. And really what that curve
represents is nothing more than quantifying what all
-- I've got to remember I can't point at the screen --
what all of this gave you. What all of that
combination of judgements and guidance or lack of
guidance produces.
Indeed, at the beginning of the project it
had originally been the hope that we could use, and we
have used some of these diagrams directly as
mathematical models. In this case it was much easier,
much more pragmatic just to rely on the data to tell
us what the end result was because propagating all
that through just became impossible.
Okay. So we've now discussed in detail
one of the parameters which was the index temperature,
RTNDT. We can now go on to just briefly discuss KIc.
In terms of significance, this -- well, let me get to
the end. The physical understanding of the cleavage
fracture process shows us that the noncoherent
particles and other barriers that present themselves
to dislocation motion are alone responsible for the
scatter in KIc.
That idea coupled with a number of other
ideas, mainly that KIc does not exist as a point
property. That is to say, all KIc values have
associated with them a length scale -- the crack front
length, as it is commonly known -- which suggests that
there is only so well you can measure KIc.
In other words, I suppose as an
experimentalist I like this analogy: if we had
perfectly machined specimens precracked to exactly the
same depth, all tested exactly the same way, you would
still get the scatter that alarms most non-
metallurgists when they look at these plots.
So that understanding tells us that in a
PRA context this uncertainty that you see here is
indeed irreducible and, therefore, should be modeled
as aleatory, which is to say this created -- I think
this consumed probably about two months of Terry
Dickson's life.
In the old version of FAVOR we did an
epistemic model of KIc where we went through our
toughness model and we got an applied K at a
particular time, temperature, pressure. Then we drew
randomly from the KIc distribution at that
temperature. We compared K applied to KIc: passed or
failed, a zero/one binary.
Whereas now, and Terry will get into this
in much more detail tomorrow, we recognize that any
applied K is, in fact, presented to a material which
has a distribution of KIc values. So for that given
applied K, instead of the material having failed or
not, what we calculate and indeed carry through the
model is some probability of failure.
Instead of having zeros and ones we've got
.005s or whatever that then get added up. That was a
significant programming change which is indeed a much
better -- is consistent with the PRA process, is a
much better representation of the physics. What
effect that has had on the numbers I honestly couldn't
tell you because we haven't done, nor would we intend
to do, that comparison.
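The change described above, carrying a fractional conditional probability rather than a sampled zero/one outcome, can be sketched with a three-parameter Weibull distribution for KIc. The slope of four is the physically motivated value noted elsewhere in the discussion; the lower-bound and scale values here are placeholder numbers, not FAVOR's fitted parameters.

```python
import math

def p_initiation(k_applied, k_min, k0, slope=4.0):
    """Conditional probability that a randomly drawn KIc lies at or
    below the applied K, from a three-parameter Weibull distribution.
    k_min (lower-bound toughness) and k0 (scale) are placeholders."""
    if k_applied <= k_min:
        return 0.0
    return 1.0 - math.exp(-(((k_applied - k_min) / (k0 - k_min)) ** slope))

# Old scheme: sample a KIc at this temperature and score pass/fail as 0 or 1.
# New scheme: carry the fractional probability for each applied-K state.
applied_ks = [25.0, 40.0, 60.0]  # hypothetical K-applied values (MPa*sqrt(m))
fractions = [p_initiation(k, k_min=20.0, k0=110.0) for k in applied_ks]
```

Instead of a binary outcome per simulation, each state contributes a small fraction (here ranging from roughly 1e-6 up to a few percent) that the PRA machinery can weight and combine.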
MEMBER SHACK: One thing is when you come
up with the aleatory model for KIc and you do it this
way, you get, to three significant figures, the same
numbers that Oak Ridge got from their statistical
analysis which I thought was based on RTNDT.
MR. KIRK: I'm sorry?
MEMBER SHACK: You have a Weibull model
for KIc, an aleatory distribution. That describes the
aleatory behavior of KIc.
MR. KIRK: Right.
MEMBER SHACK: That's a Weibull model. So
to, say, three significant figures I get the same
coefficients in there that I got from the Oak Ridge
model when I did it with RTNDT that you're doing with
RT lower bound.
MR. KIRK: Okay. Let me --
MEMBER SHACK: There is something magical
I'm missing here.
MR. KIRK: Let me clarify. First
off, the aleatory model, the other mantra that we kept
having in this project was physically motivated and
empirically derived, which is to say we got from the
underlying physics the notion that there should be an
exponential temperature dependence in KIc. It should
have a lower bound value.
Indeed, we took out of the physics that
the Weibull slope describing the scatter should be
four. But having said that, we then fit the temperature
dependency. We fit the lower bound values because we
didn't have independent estimates from the -- or
direct estimates, I should say, from the physics of
fracture.
The reason why the numbers are the same is
that we didn't have independent numbers from the
physical model. They, in fact, came from the data.
The other thing that perhaps isn't clear is Oak Ridge
wasn't fitting RTNDT data. They were fitting RT lower
bound data. The index temperature they used was RT
lower bound.
MEMBER SHACK: Well, that confused me
because when I looked in the Oak Ridge manual, the
first time it comes up, of course, it's fit to RTNDT.
Then they tell me, okay, that's really just, you know,
an approximation. I thought in the original fit --
MR. KIRK: In the original fit, yes.
MEMBER SHACK: -- it was RTNDT. Did the
numbers change?
MR. KIRK: The numbers changed. And you
may have found an error in our work in terms of
labeling. Right now the --
MEMBER SHACK: They sort of warn me as I
go along here that that is only approximate up front.
I figured that was just to trap the reader.
MR. KIRK: No, the fit --
MEMBER SHACK: Okay. The original fit
really was different, as I would have expected it to be.
MR. KIRK: The original fit really was
different. Indeed, this gets into some of the, well,
to me the interesting nuances of why it was good to
have the physical model and the empirical models.
When we got to -- when you did just the
empirical fit of the data, you found it doing
bizarre and unexpected things like the scatter on the
lower shelf was bigger than the scatter on the upper
shelf. That was simply an artifact, when you fit it
versus RTNDT, of the fact that you didn't have enough
data on the upper shelf and you didn't have enough data
to independently estimate the scatter parameter.
So bringing in the elements of the
physical knowledge that said, well, first off, RTNDT
isn't the right X-axis normalization. We need to have
something tied to the data. That is physical
observation No. 1.
Combine that with the notion that there
should be a scatter parameter or Weibull slope that is
not only universal to the various materials but
constant with temperature. Put those two physically
motivated constraints on the model and you get out --
I don't know if I can blow this up so that you can
actually see it.
You get a KIc versus temperature model
that is in agreement with the physics, is in agreement
with what you would "expect it to do" which is to say
smaller scatter on the lower shelf, expanding scatter
as we go up.
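The benefit of the physically motivated constraint can be seen in a small sketch: once the Weibull slope is fixed at four, only the scale parameter remains to be estimated, and the closed-form fixed-shape estimate is stable even where a free three-parameter fit would wander (the lower-shelf artifact described above). The data here are synthetic, not the real KIc data set.

```python
import math
import random

def fit_scale_fixed_slope(samples, k_min, slope=4.0):
    """Maximum-likelihood estimate of the Weibull scale when the shape
    (slope) is fixed: (k0 - k_min) = [mean((k - k_min)^slope)]^(1/slope)."""
    m = sum((k - k_min) ** slope for k in samples) / len(samples)
    return k_min + m ** (1.0 / slope)

rng = random.Random(1)
k_min, k0_true = 20.0, 110.0
# Synthetic toughness data drawn from the assumed Weibull with slope 4,
# by inverting its CDF with uniform random draws:
data = [k_min + (k0_true - k_min) * (-math.log(1.0 - rng.random())) ** 0.25
        for _ in range(5000)]
k0_hat = fit_scale_fixed_slope(data, k_min)
```

With the slope constrained, the remaining scale parameter is pinned down well by the sample; freeing the shape as well would require far more data per temperature to estimate the scatter independently, which is the difficulty the discussion describes.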
MR. HACKETT: It's an interesting
question. I'll just add one more comment. This goes
to the credit of someone: Mike Mayfield started us down
this path many years ago before there was a master
curve. If you ask yourself the question, "What if I
had never heard of a master curve or there wasn't such
a thing, what would I have done in this area?"
I think his vision at the time was as
simple as we're just going to get all the appropriate
data and analyze it the best way we know how
statistically and at least try to have a statistically
motivated version of this that wouldn't necessarily
have the physical underpinnings that Mark is talking
about. But would that have been an improvement?
Yeah, absolutely it would have been an improvement.
You didn't need the master curve to do this.
It's a refinement that came along in
parallel that I think made some things better from a
physical understanding basis in addition to some of
what was going on in the project anyway. There would
have been improvements here I think regardless of the
different form.
MR. KIRK: Did that answer your question?
MR. KIRK: Okay.
MEMBER SHACK: I'm just trying to --
coming back to Ed's point, what difference would it
make in the final? I mean, if I lump all the aleatory
and epistemic things together, would it make a
difference?
MR. HACKETT: We have not done that in any
kind of systematic way. That's not to say we couldn't,
although we weren't in the mode of searching for --
MEMBER SHACK: It certainly makes it
easier to understand the Oak Ridge calculation when
you draw an epistemic loop and an aleatory loop
inside. And that alone is almost
worth the price of admission.
MR. KIRK: I think -- I've gone through
this debate many times with people who were all
trained as engineers so we are all trained to say,
"Does it make a difference?" For my money, certainly
you want to focus on the things that are most
important unquestionably.
But there is also considerable value in
doing things right, especially from a regulatory
perspective where you get yourself caught in this trap
of, "Oh, well. I don't know. Let's make it
conservative." That becomes, in today's world, an
increasingly hard judgement to defend. If
conversely you can say this is physically the right
thing to do, it makes it much more defensible.
Indeed, it's then something -- the best
part about that is I think it's something that you
can change later as your state of knowledge improves
because you know why you made the decision and you
know where it came in.
MEMBER SHACK: If you lump them together,
I'm not sure that you would have been able to do the
conditional probability of fracture. That is, that
would have included both aleatory and epistemic
uncertainties and that really would have been wrong.
MR. KIRK: Well, yeah. And certainly if
you lump them together you truly haven't gained that much.
I'll have to take exception with Ed because the
original ASME KIc/KIa analysis was not a statistical
analysis. That clearly needed to be done. But if you
just do a statistical analysis of RTNDT, you're stuck.
Your hands are tied and you're saying RTNDT is truth.
Now you're stuck with something that
clearly is not truth most of the time and you wouldn't
have gotten the 65 degree benefit and the conditional
probability failure numbers would have been at least
one order of magnitude higher. I'm saying that with
a fair degree of certainty because it was only
recently that Terry implemented this adjustment for
epistemic uncertainty in the code.
Indeed, the original Oconee scoping
analysis was done without that adjustment in because
we didn't have it at the time. When Terry put it in,
fortunately or unfortunately, I don't know, this isn't
something we have taken the time to document. I
recall phone conversations where the comment was made
that that change alone drove the failure probabilities
down by at least an order of magnitude.
Certainly any licensee that is watching
their RTPTS number knows that if they can get, heck, 5
degrees Fahrenheit, that is great. With 65 degrees
Fahrenheit you just forget that you have a problem anymore.
There is no question in my mind that a purely
statistical analysis would not have netted that
benefit because you would be stuck with the notion
that RTNDT is truth. You just can't get there from here.
MR. HACKETT: You can see what happens
when you get materials and mechanics people going on
this stuff. We're going to run out of our time.
CHAIRMAN FORD: I'm looking at the time.
I don't want to stop the flow of concentration here.
We can go until half past 12:00 before we stop. I
don't believe there is any reason why we can't do
that. Ed, I'll leave it up to you to decide when we
stop for lunch.
MR. HACKETT: I guess I don't see this yet
as a good point to stop but I might ask Mark to look
at an intermediate juncture.
MR. KIRK: We can -- well, I think we have
essentially finished the initiation toughness model.
We've got a couple more logical break points. We need
to work through the irradiation shift model and the
arrest toughness model. If we can do those two in the
next half hour, that will probably be pretty good.
MR. HACKETT: I'd be thinking you could
probably do one.
MR. KIRK: Hopefully.
CHAIRMAN FORD: I was hoping for lunchtime
to try to catch up.
MR. HACKETT: This could be a good break
point since we've now debated the master curve a lot
and talked about the initiation toughness model.
We're at the next jump-off point, which is the
irradiation model.
CHAIRMAN FORD: If you don't mind.
MR. KIRK: That's fine.
CHAIRMAN FORD: We'll break now. That
gives me an hour to try and catch up here.
MR. HACKETT: Drinking from the fire hose.
CHAIRMAN FORD: This is one of those
situations I say now I'm a corrosion engineer rather
than a metallurgist. We'll recess for an hour and
come back at 1:00.
(Whereupon, at 11:59 a.m. the meeting was
adjourned for lunch to reconvene at 1:00 p.m.)
A-F-T-E-R-N-O-O-N S-E-S-S-I-O-N
(1:04 p.m.)
CHAIRMAN FORD: Let us reconvene. You're
on 74.
MR. KIRK: Okay. Where we left off when
last you tuned in. We are now going to talk about the
irradiation shift model which starts on view graph 74
in your packet. This is the way that we estimate how
far upward in temperature we shift the un-irradiated
KIc curve.
We've already talked -- the parameters of
this model are, of course, the shift value which we
currently estimate using the Charpy 30-foot pound
transition temperature, the delta T30 value, and the
distribution of irradiated KIc.
However, in the previous discussion that
we had before lunch, we already discussed the
temperature dependence and the scatter in irradiated
KIc we expect to find, and indeed do find, in our
empirical data to be the same as for the un-irradiated
KIc. All we really have left to talk about is the
shift value.
Now I'm going to violate a fundamental
rule of presentations and show an un-Godly complex
equation and then say a little bit about it.
This is the model that the staff has
decided to use to represent, or I should say to relate
the shift in the Charpy 30-foot pound value with other
variables that we know about, the reactors like the
irradiation they run at -- I'm sorry, the temperature
they run at, phosphorus, fluence, nickel, copper, and
so on.
This is an equation that has been
developed by two of our contractors, Ernie Eason and
Bob Odette. It's currently a fit -- I should say,
again, it's a physically motivated empirically
calibrated fit to all of the data that was available
from the domestic nuclear reactor surveillance
programs as of, I think, about two years ago.
I did not intend to go into this in any
level of detail. Just suffice it to say that this
relates variables of chemistry and irradiation
exposure to the Charpy shift value and that it's a fit
to the available evidence that we have for operating
domestic commercial reactors, and that the functional
forms have been selected consistent with a physical
understanding of irradiation damage.
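The structure Mr. Kirk describes, a square-root-of-fluence matrix-damage term plus a saturating copper-rich-precipitate term, can be sketched as below. All coefficients are hypothetical placeholders for illustration only, not the Eason/Odette calibration.

```python
import math

def delta_t30(cu, ni, phos, fluence, temp_f):
    """Illustrative two-term embrittlement trend curve.

    HYPOTHETICAL coefficients -- not the Eason/Odette fit.
    Term 1 (matrix damage): grows like sqrt(fluence), mildly
    sensitive to phosphorus and coolant temperature.
    Term 2 (copper-rich precipitate): saturates in fluence via
    tanh; only copper above a solubility limit participates.
    """
    smd = 2.0e-9 * (1.0 + 50.0 * phos) \
        * math.exp((550.0 - temp_f) / 100.0) * math.sqrt(fluence)
    cu_eff = max(cu - 0.07, 0.0)  # wt% copper free to precipitate
    crp = 150.0 * cu_eff * (1.0 + 2.0 * ni) \
        * 0.5 * (1.0 + math.tanh((math.log10(fluence) - 18.5) / 0.6))
    return smd + crp  # Charpy 30 ft-lb shift, degrees F

lo = delta_t30(0.25, 0.6, 0.012, 1e19, 550.0)  # mid fluence
hi = delta_t30(0.25, 0.6, 0.012, 6e19, 550.0)  # high fluence
```

The tanh factor is what keeps the copper term from growing without bound: between these two fluences the copper contribution barely moves while the matrix term keeps climbing as the square root of fluence.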
There are two points I would like to carry
away from this slide. One is in the upper right-hand
corner. If you go to any meeting in this country or,
indeed, anywhere in the world and discuss
embrittlement -- this would be referred to as an
embrittlement trend curve -- you can get engaged in
some very spirited debates about whether this is the
right form or somebody else's equation is the right
form or does phosphorus belong in there at all or
should there be a flux effect and so on and so on and
so on.
Needless to say there is considerable
disagreement among the experts in this field -- perhaps
it is just a sign that we have too many experts -- as
to what the right form of this equation is.
Indeed, just from a statistical perspective it's the
devil's own because of all the scatter in the data.
Having said that, the uncertainty here is
clearly epistemic. We are suffering from a lack of
complete knowledge about what some of the underlying
phenomena are and under what regimes of chemical
composition and irradiation exposure they're active.
Obviously we need to make that characterization in
order to use this equation properly in the
reevaluation effort.
The other thing that we need to know that
I would like to focus on a little bit here is down in
your lower left-hand corner, and that is the question
of does delta T30, which is a Charpy shift, have any
relationship to delta T0 or the toughness shift. In
all previous calculations that have been done of this
type, the implicit assumption has been made that, in
fact, the Charpy shift is equal to the shift in the
toughness curve because no adjustments or corrections,
or what have you, have ever been made.
Indeed, if you look at the physics that
guides toughness shift and Charpy shift, leaving aside
some of the finer details, you decide that, yeah,
increases in strength should cause increases in
toughness shift and, indeed, increases in Charpy shift
so they should be related. Exactly what the
relationship is is perhaps in question in terms of
what the coefficient is.
To quantify what that relationship is for
this project, we went into our empirical database of
shift data for reactor vessel steels which is
illustrated here and simply developed a linear
regression fit between the Charpy shift value on the
x-axis which is what we'll know from the embrittlement
trend curve relative to the fracture toughness shift
on the vertical axis.
Indeed, more by luck than anything else
the 30-foot pound Charpy shift is roughly equivalent,
.99 versus 1.0, to the toughness shift for welds. For
plates the Charpy shift actually under-predicts
slightly the shift in toughness.
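A minimal sketch of the product-form conversion being described: only the 0.99 weld coefficient is quoted in the discussion; the plate and forging values below are illustrative stand-ins for the actual fitted coefficients.

```python
# HYPOTHETICAL coefficients except the weld value (0.99), which is
# quoted in the discussion.
CHARPY_TO_TOUGHNESS = {
    "weld": 0.99,     # essentially one to one
    "plate": 1.10,    # illustrative: Charpy under-predicts the shift
    "forging": 1.00,  # sparse data, so no adjustment is adopted
}

def toughness_shift(delta_t30_f, product_form):
    """Convert a Charpy 30 ft-lb shift (delta T30, degrees F) into a
    fracture toughness shift via a product-form coefficient."""
    return CHARPY_TO_TOUGHNESS[product_form] * delta_t30_f
```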
In forgings there's a coefficient here
but we don't adopt that in the FAVOR code because
there was so little forging data to fit.
Moreover, it's not going to make a big difference
anyway because there is only one forging in the whole
evaluation and it's such a low embrittlement I don't
think it matters. So what you can take away from this
slide, which indeed this is a big version now --
CHAIRMAN FORD: Before you go too far --
MR. KIRK: I'm sorry. Go ahead.
CHAIRMAN FORD: -- would you mind going
back to the one with all the equations on it?
MR. KIRK: Yes.
CHAIRMAN FORD: I have not seen the raw data.
MR. KIRK: Yes.
CHAIRMAN FORD: In many of the
relationship correlations, the correlation factor is
very low.
MR. KIRK: Yes.
CHAIRMAN FORD: What you've done is when
you said that Ernie Eason was working with Bob Odette,
that he was taking the functional form of what you
might expect from theory and force fitting it, if you
like, to what Ernie Eason was seeing from his
correlation between observations.
MR. KIRK: As an example, perhaps an easy
one to look at is the age hardening, or perhaps
more commonly called the copper-rich precipitate, term.
You've got the fluence function here that is fit to a
tanh.
Not that there's anything in the theory
that says we should have a tanh, but the theory does
say that we should expect the effects of age hardening
to saturate: once you've precipitated all the copper
out of the matrix, the mechanism should stop.
That is what the tanh does. Yes, you're
right. You come away from the physical understanding
with some very basic observations and then try to find
those functional forms or, indeed, lack of functional
forms in the equation.
Another one to perhaps point out, and this
reflects a decision that was made and equally a
different decision could have been made, I think it's
fairly well accepted, perhaps the only thing that is
commonly well accepted in this community, is that the
coefficient on the matrix hardening term in fluence
should be a square root.
I'm really waiting for somebody to get
after me about significant figures, but a statistical
estimate of that value is .4601, very close to one-
half. We elected to stick with the empirical fit,
but clearly a different decision could have been made.
Equally you could have said, "Well, I've
got sufficient weight of physical understanding to say
that this should be a square root term, force that
coefficient to be .5," and then the effect would be
taken up in the other terms.
CHAIRMAN FORD: Okay. So what you did was
you took a functional form from Bob Odette and you
fitted it. It wasn't forced, therefore, to .5 but came up --
MR. KIRK: Yes.
CHAIRMAN FORD: Because that came out of
minimizing your --
MR. KIRK: Minimizing the residuals, yeah.
CHAIRMAN FORD: Okay. I understand.
MR. HACKETT: Some of this, Peter, is also
worked in reverse where you start with the empirical
data and, in the case of Ernie's analyses, he would
have applied some statistical evaluation procedures
that show a trend. Then the challenge would be to
Odette and others in the mechanistic community to
explain that trend physically.
In some cases we could. In at least one
or two cases there weren't real good physical
explanations. Then we are stuck with a difficult
decision of whether to include or not include that
effect on the model.
CHAIRMAN FORD: Like in all such model
derivations where you are basing it on existing data,
obviously you are going to get a good correlation, or
within a certain error, between the model and the
MR. HACKETT: The data.
CHAIRMAN FORD: You're doing a circular
argument. So how can you validate such a model until
you get new data presumably?
MR. HACKETT: Very difficult. For
instance, the long-term bias term Mark has on here,
you can see that this data is accumulated on materials
that have been aged over 97,000 hours. Obviously, by
definition, you're talking a pretty small population.
I think in some cases you are really stuck
with expert judgement whether it's statistical
judgement or physical judgement, or hopefully some
combination of the two.
I think part of what we get into,
especially more so in this area than any other area
like Mark was alluding to, is the statistical
judgements are inherently offensive to scientists.
You have this data that indicates this trend. No one
can explain it but, then again, no one is going to
argue statistically it's not there.
Then you are backed up to the sort of
thing Jack Strosnider would say if he were here. The
regulatory approach is: we're regulators, that's our
job, and we see a trend in this data. We need to
account for it somehow.
Then you're stuck with not having pure
science of some sort to back that up so you're in a
weaker spot than you would like to be but it's
incumbent on you to do something. I think these are
the things that make this area especially difficult.
CHAIRMAN FORD: Thanks very much.
MR. KIRK: Okay. So the end result of all
of this is the model that was coded in FAVOR where we
start with generic distribution. Well, we start with
mean composition values of copper, nickel, and
phosphorus that have been recorded in the docket by
the plants.
In most cases we don't have -- I should
say in most cases we don't have enough data on the
particular heats of steel to construct a reliable
statistical distribution of copper, nickel, or
phosphorus. I have this here if you wish to go into
it in detail but I wasn't planning on showing it.
We, therefore, have done some work to
construct generic distributions of copper, nickel, and
phosphorus that are then sampled on an epistemic
basis. Those go into the embrittlement trend curve.
Peter, if you're interested in the data,
if you refer to this tiny little postage stamp figure,
the cloud disturbs people and perhaps it should but
equally perhaps it should provide a challenge to
future work. That is the embrittlement trend curve
with all the data plotted over it.
Also, another thing to point out, as I
suggested earlier, the intention of this is to be
informative and that sometimes means it shows some
things that you haven't done a darn thing on. In
order to figure out what the embrittlement shift is
for a particular location in the vessel wall, you
need to attenuate the fluence through the wall.
Unfortunately, well, just the way it is,
not sufficient work has been done in that area in the
past decade, or decade and a half, to really provide
a good basis for any revision of the fluence attenuation
function. I think, again, if you go into the
community you would find lots of arguments waged
against this attenuation function and, indeed, I could
make a few myself.
Having said that, if you say what new do
we know, the answer would be we don't know a lot more
so we're just sticking with that one for right now.
Having said that, here in PTS where the
flaws that get you are invariably pretty close to the
ID vessel surface anyway, what particular attenuation
function you pick isn't going to be that important.
Stepping outside of this framework here,
it is going to be critically important when we go to
implementing this embrittlement trend curve for
regulation of heat-up and cool-down limits because
there right now you need to calculate embrittlement
shift at the quarter T and three-quarter T locations.
Once you start attenuating the three-
quarter T, you pick up some pretty substantial
effects. There I think we certainly should pay much
more attention to it but right now we're sticking with
the -.24 exponent in the exponential.
We plug in our copper, nickel, phosphorus, and all the
other variables into the embrittlement trend curve,
get out a predicted delta T30 at the cracked tip,
convert that to a toughness shift and, indeed, sample
epistemically on this uncertainty and what that shift
value is.
In terms of what's new relative to what
we've done, we've got a new embrittlement trend curve
here but in total on average relative to the old
embrittlement trend curve, it's different by six
percent. Will that have a big influence on the
results? My guess is no.
The new thing here is that we recognize
that delta T30 isn't a toughness shift and must be
converted. Having said that, the conversion is very close
to one to one. Again, it's the appropriate thing to do from the
viewpoint of doing this all right. I personally don't
think it's going to have a big effect.
CHAIRMAN FORD: Just for interest, how
many data points is that attenuation curve based upon?
MR. KIRK: It's a judgement. That might
be a little bit unfair but the number is certainly not
bigger than about this.
CHAIRMAN FORD: That formula has got some
sort of theoretical basis?
MR. KIRK: Yeah. It's making fluence
attenuate like DPA which is believed to be a better
physical metric of radiation damage. By the way, it's
conservative. This dates back to the Neil Randall
technical basis for Reg. Guide 1.99 Rev. 2 circa mid --
MEMBER SHACK: But didn't Eason and Odette
come up with a different attenuation?
MEMBER SHACK: A different thru-wall
variation of toughness?
MR. KIRK: No. We have not looked at --
CHAIRMAN FORD: I've certainly seen
attenuation curves with data but for stainless steel.
MR. KIRK: Yeah.
CHAIRMAN FORD: Is there any reason for
saying --
MR. KIRK: There is some very limited data
for the Gundremmingen vessel and there was --
MR. HACKETT: Also, this is an interesting
but, again, whole other topic we could get into in
great detail. One of the goals of the research
program in this area for a long time has been to try
to do this type of validation on a retired vessel.
CHAIRMAN FORD: Like Yankee Rowe.
MR. HACKETT: Well, Yankee Rowe may have
been more of an anomalous example but we have made
attempts with San Onofre, with Zion. Unfortunately,
we haven't met with success in any of these based on
the schedules for decommissioning and the economics.
The other one Mark was going to refer to
probably was the Japan power demonstration reactor.
We had a collaborative program with the Japanese to do
that and attenuation has been mapped through the JPDR
wall. Unfortunately, that was, again, a demonstration
reactor. Its fluence levels were not typical of what
we would see in our commercial operation. Also the
wall was thin. There were a lot of things that made
it atypical.
This is an area where work remains to be
done and we have been trying to cooperate with Bob
Hardy and others in the MRP to try and somewhere in
the future get samples. Our last attempt was San
Onofre and my understanding is that was a budget
breaker. We're trying to do that.
CHAIRMAN FORD: Could you mention -- it
just struck me about something else. You said it
didn't really matter. Is that in relation to the
MR. KIRK: In relation to the attenuation
in that once the flaw starts to bury itself in the
vessel thickness, the applied Ks aren't high enough to
cause crack initiation. Our benchmark here is the
Charpy shift on the inner diameter because that is
where all the surveillance data is.
Obviously the further you go into the
vessel, the more any differences in attenuation
function are going to show up. If you attenuate only
over, say, the first 10th of the vessel wall
thickness, the difference between this and other
proposals, I think, the other candidate proposal was
a -.36 coefficient.
Over the first 10th of the vessel
thickness it doesn't really make a huge difference at
all. If you go out to three-quarter T where you need
to be to assess your heat-up and cool-down curves, it
makes a whole big lot of difference. I mean, that's
just the mathematics of it and the fact that you are
extrapolating data further.
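The point about the two candidate exponents can be checked numerically. This sketch assumes the Reg. Guide 1.99 Rev. 2 style form, fluence proportional to exp(coeff * x) with x in inches, and a hypothetical 8.5-inch wall.

```python
import math

def attenuated_fluence(surface_fluence, depth_in, coeff=-0.24):
    """Exponential through-wall attenuation in the style of
    Reg. Guide 1.99 Rev. 2: fluence(x) = fluence(0) * exp(coeff * x),
    with x the depth into the wall in inches."""
    return surface_fluence * math.exp(coeff * depth_in)

WALL = 8.5  # assumed wall thickness in inches (hypothetical vessel)

# Ratio of the two candidate exponents near the ID versus at 3/4 T.
near = attenuated_fluence(1.0, 0.10 * WALL) \
    / attenuated_fluence(1.0, 0.10 * WALL, -0.36)
deep = attenuated_fluence(1.0, 0.75 * WALL) \
    / attenuated_fluence(1.0, 0.75 * WALL, -0.36)
print(f"first tenth of wall: {near:.2f}x   three-quarter T: {deep:.2f}x")
```

Under these assumptions the two candidates differ by roughly ten percent over the first tenth of the wall but by about a factor of two at three-quarter T, which is why the choice matters for heat-up and cool-down limits but not much for near-ID PTS flaws.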
One other thing to point out with respect
to trying to validate this experimentally using
mechanical property data is that it's a very difficult
experiment to do from the viewpoint that, with any of the
candidate coefficients here, it's not a precipitous change.
Remember that you are going to try to
resolve this with either Charpy or perhaps T0
transition data and then you are -- so you are going
to have an uncertainty in that of plus or minus 20 degrees
C and you've got a huge signal to noise problem.
Then you couple that with the fact that
you're talking about material that's been irradiated
in a power reactor so you're not going to get a whole
huge big chunk of it. You're going to get a little
bitty something. Certainly the industry, the NRC, has
been interested for a long time in getting a chunk of
one of these things because everybody likes to cut up
real structures.
Having said that, I think before somebody
makes that investment, it's incumbent on us to think
about what the effect is we're trying to resolve
relative to our ability to measure that. What you
ideally would want to get is something that has been
-- what you would really want is something if you go
way far out on the embrittlement curve it would all be
embrittled to about the same. You want to get
something around the knee of the curve.
But designing the right experiment here,
having the money to do it, and having enough material
to make -- I was going to use the word unequivocal but
that is perhaps delusional -- less than uncertain
measurements is very, very difficult.
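The signal-to-noise problem can be made concrete with a textbook sample-size estimate: with plus-or-minus 20 degree scatter on a measured transition temperature, resolving a small attenuation effect takes many specimens. The numbers below are illustrative, not a designed experiment.

```python
import math

def n_required(effect, sigma=20.0, z=1.96):
    """Rough specimens-per-condition estimate: shrink the 95%
    confidence half-width on a mean transition temperature
    (z * sigma / sqrt(n)) below the effect to be resolved.
    sigma=20 mirrors the +/-20 degree C scatter quoted above."""
    return math.ceil((z * sigma / effect) ** 2)

few = n_required(20.0)   # an effect as big as the scatter itself
many = n_required(5.0)   # a subtle attenuation effect
```

Even an effect equal to the scatter takes a handful of specimens per condition; a five-degree effect takes dozens, and irradiated vessel material comes in little bitty pieces, not dozens of specimens.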
CHAIRMAN FORD: Looking at this -- just
getting off that subject and going on to the database
for the effect of embrittlement, fluence on KIc, do we
have any data going out to fluence levels appropriate
for end of life for extended license?
MR. KIRK: For extended license that's a
good question. Without remembering the specific
numbers, I don't think the fluence is so much the
problem. The area of question, and it depends on who
you talk to as to how big a question this is,
certainly we've got fluences out to EOL and EOLE
because the capsules all have a lead factor.
The difficulty is that now some folks
believe that there is some either independent or
synergistic temperature effect that is occurring at
long times that could be giving us an uptick in the
embrittlement that you simply wouldn't observe in the
accelerated capsule positions, or that it would take
-- well, it would take a long time to observe. I
think more data is always better but I think we have
covered that parameter space adequately well.
Time, again, it depends on who you ask as
to how important you believe that to be. Certainly if
you look at it from a purely thermal embrittlement
viewpoint, these reactors operate at 550 and you have
to be there for a hell of a long time in ferritic
steel to get anything really going on. Like I said,
some people believe in a synergistic effect and some --
MEMBER SHACK: Just coming back to your
attenuation again, suppose you rethought the quarter
T flaw. If I look at your statistics, my chances of
a quarter T flaw don't look real good.
MR. KIRK: As a member of the ACRS, I
think you should make that a recommendation. Indeed,
just to digress a little bit, one of the points that
Mike Mayfield had when we prebriefed him on this is he
said, "If things keep going the way they look like
they're going and we are able to raise the PTS
screening criteria by the amount that these analyses
seem to suggest, then the conservatisms that are
buried in your Appendix G calcs for heat up and cool
down are going to make routine operations more
limiting than PTS so somebody better be thinking about
that."
So, yeah, I think -- and that is a very
good point and one that I think you people have
recognized is that with -- and we are soon getting
into talking about the flaw data. With the flaw data
that we have, which is, indeed, not nearly as
extensive a database as this but far more extensive
than we ever had before, I personally hope we are
developing the basis to sway some of the minds on ASME
to say that, yeah, quarter T flaw is just a tad bit
out there. We clearly make up some problems for
ourselves here.
That concludes the irradiation shift
section. Now the last part is the arrest fracture
toughness model. We need to deal with the arrest
model because our criterion for vessel failure is that
you punch a crack through the outside. Again, that is
an engineering decision.
Certainly in other countries, I know
France and I believe other countries in Europe, have
adopted an initiation-based failure criterion. This
has been what we have historically done in the U.S.,
which is not to say that it's right.
We have revisited that a number of times
both within the NRC and outside with the industry and
the consensus has -- well, there has never been
sufficient consensus to move us towards an initiation
based criteria basically because I don't think anybody
wanted to give up whatever extra bit we got out of arrest.
The parameters of the arrest model are as
illustrated here. We need a distribution of KIa, and
we need to know how much the crack arrest curve shifts
outward from the KIc curve due to the higher strain
rate characteristic of arrest than initiation.
The current arrest fracture toughness
model, if you look at KIa, has a fit temperature
dependence and scatter that, since the fit was made
independently of the KIc data, is completely independent
of the KIc data. It's not related at all. Whereas when
you look at the underlying physics, you say maybe they
should be.
RTNDT we've already beat on enough so I
won't go there. And in the current ASME code and,
indeed, in the model that we've used in our former PTS
calculations, the shift between KIc and KIa has been
fixed for all material conditions irrespective of embrittlement.
Those are merely the qualitative
characteristics of the current model. To try to put
it all on one slide, where we got to working with the
industry, with Marjorie, and ourselves, this is the
summary. I'll try to step you through this to point
out the high spots.
We start off on the left with the physical
observation and in the middle tell you what that would
suggest we should see in the empirical data. Then on
the right we can show you the data.
Starting in the green we start with the
observation that all ferritic steels have the same
lattice structure. That suggests, looking at a
dislocation-based constitutive model, that we should
have nearly identical temperature dependence for
initiation and arrest. I say nearly instead of
exactly because, of course, there's a coupled rate
temperature effect term.
Clearly KIa occurs at a different rate
than KIc but the rate effect is small. We also expect
that the temperature dependency again since it's
controlled by the lattice structure should not be
influenced by irradiation, composition, or heat treatment.
Over on the right you see a plot of mean
curves that were fitted through the KIc and KIa
databases just as fit curves unconstrained by any
physical observation. Indeed, we find out quite
happily that the data are doing what we thought they
should do and that the KIa and KIc curves do have
essentially a similar shape.
As you can see, the physical understanding
gets rolled into the empirical model with different --
I shouldn't say this in this crowd maybe but Dr.
Apostolakis is not here -- to different degrees of
certainty based on how good we feel our physical
insights are.
The middle box I have a graphic on that I
can show you if you're interested, but it concerns the
micro-features of the steel that control initiation
and arrest. If you look at crack initiation, crack
initiation starts with things like carbides and they
have a fairly wide spacing in the ferrite matrix
relative to the factors that control crack arrest
which is more characteristic of the matrix itself.
We should expect more scatter in the KIc
data than in the KIa data: because the crack initiators
are spaced more widely, they allow more stress variation
and consequently more scatter. Indeed, the empirical
evidence bears that out again. We've got a graphic on
that in the report. It's not featured here because
it's a fairly minor effect.
The third point which is, I suppose, the
big change in this model is that we come with a
physical observation that there is a hardening curve
that is universal to all steels, or maybe I should say
all ferritic steels.
The best 25 word description I can come up
with for that is if you take a tensile specimen in the
laboratory, whatever steel, I don't care, load it up
past the yield point, unload it, then reload it, it
doesn't start flowing again until it passes the point
where you started to unload.
You can unload all you want and it just
keeps marching up the same hardening curve. This idea
gets to the point that, okay, you are mechanically
cold working that material. This concept gets to the
idea that hardening is hardening is hardening is
dislocation piling up in the material so that you
would expect to see all materials tracking up one
universal stress strain curve.
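The unload-reload behavior described above amounts to one-dimensional isotropic hardening; a toy model with hypothetical numbers (60 ksi virgin yield, 500 ksi linear hardening modulus) is:

```python
def flow_history(stress_path, yield_0=60.0, hardening=500.0):
    """Toy 1-D isotropic-hardening model (HYPOTHETICAL values).

    Unloading never lowers the flow stress; on reload the material
    stays elastic until the prior peak stress is exceeded, then it
    resumes marching up the same hardening curve."""
    flow = yield_0
    eps_p = 0.0  # accumulated plastic strain
    out = []
    for s in stress_path:
        if s > flow:                         # yielding resumes
            eps_p += (s - flow) / hardening  # march up hardening curve
            flow = s                         # new flow stress = peak
        out.append(flow)                     # below flow: elastic
    return out

# Load to 80 ksi, unload to 0, reload to 70 (still elastic), then 90.
history = flow_history([80.0, 0.0, 70.0, 90.0])
```

Flow does not resume on the reload to 70 because the material remembers its 80 ksi peak, which is exactly the "keeps marching up the same hardening curve" behavior described above.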
What that leads us to is the idea that we
should expect to see the initiation curve approach the
arrest curve as the material damage increases for the
following reason. If you have, say, un-irradiated
material that hasn't been irradiation hardened, say
it's a 60 ksi yield material, you apply a rate
characteristic of crack initiation and you get one
transition temperature.
You apply a crack arrest type loading rate
which is very, very fast. You get an elevation in the
yield strength. You get a hardening due to the rate
effect because the dislocations can't move fast enough
so you get a big uptick in the yield strength and
consequently a big increase in the crack arrest
transition temperature.
Contrast that with the situation that you
find if you take the material that has been
irradiation hardened to a considerable degree you've
already exhausted in that material a considerable
amount of your hardening capacity so the same increase
in loading rate will produce a much, much lower
increase, elevation in yield strength and,
consequently, a much, much smaller increase in the
transition temperature from initiation to arrest.
When we examine the experimental database
which is shown down here in the blue graph, delta RT
arrest is just the difference in transition
temperature between initiation and arrest and T0 would
be the crack initiation transition temperature.
Down here you have materials that haven't
been significantly worked. Up here you have materials
that have been. Again, you see what you qualitatively
expect from the physical understanding which is that
the curves should be getting closer together.
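The trend in the blue graph, an initiation-to-arrest separation that shrinks as T0 rises, can be caricatured with a decaying exponential for the median. The coefficients here are invented for illustration and are not the fitted log-normal model.

```python
import math

def delta_rt_arrest_median(t0_c, a=45.0, b=0.006):
    """Illustrative median initiation-to-arrest temperature
    separation as a function of T0 (degrees C). HYPOTHETICAL
    coefficients; only the decaying trend mimics the data."""
    return a * math.exp(-b * t0_c)

soft = delta_rt_arrest_median(-100.0)  # unirradiated-ish material
hard = delta_rt_arrest_median(50.0)    # heavily embrittled material
```

With these invented numbers the embrittled material lands near a 30 C separation, comparable to the old fixed 55 degrees F, while the soft material gets a much larger shift, which is the nonconservatism in the old fixed-separation model.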
CHAIRMAN FORD: There is statistical
justification for drawing those curves?
MR. KIRK: Yes. Those are statistically
drawn curves.
CHAIRMAN FORD: Two points to the right
and quite a lot to the left.
MR. KIRK: It's a log normal fit which I
think is causing that to happen, but yes. This now
becomes -- I'm going to keep this -- well, we can go
to this. This shows again the overall sort of high-
level flow chart of how we model initiation and arrest.
In terms of looking at what's different,
the point I would like to again focus on is this
graph. As I pointed out before, in the -- look down
here now. In the existing model, the model we've used
before of crack initiation and arrest, the KIc and KIa
curve were fixed at a separation of 55 degrees
Fahrenheit, about 30 degrees C, and they always stayed
that way irrespective of how irradiated you were.
Previously the curves were fixed with this
kind of separation down here. What we've done here,
we've implemented this in the model so we go through
and start off by figuring out the crack initiation
index temperature for the irradiated material.
Then we come into this curve and then
randomly pick what the shift is based on these data.
What this is saying is that now for materials that
have been heavily irradiated we'll probably get about
the same numbers we've always used before. For
materials that haven't been heavily irradiated we'll
get a much bigger number.
What that means is for materials that
haven't been heavily irradiated, the crack arrest
curve will be shifted out further than it ever was
before which makes that material less arrest tolerant.
That was actually a nonconservatism in the old model.
However, having said that, this is again one of those
things that it's
appropriate to include to get the model right but I
don't think has a big practical influence because what
you're saying here is I have a material, let's say, at
an un-irradiated transition temperature of -100 degrees
C, I'm going to shift its arrest curve out further but
I don't care because it's got so much initiation
toughness the cracks never started anyway.
This has been implemented in the model and
is, indeed, a substantial change relative to the
earlier model and it's a change that we believe to be
physically appropriate but I don't think it's going to
make a big difference in the result of the analysis.
CHAIRMAN FORD: I'm going back to reading
this again. On 82?
MR. KIRK: 82.
CHAIRMAN FORD: Just a small detail.
Where does the 14.4 come from?
MR. KIRK: The 14.4 --
CHAIRMAN FORD: Down on the left-hand side.
MR. KIRK: I know. I need to refresh my
memory. The 14.4 -- here is one of those things that
we haven't talked about in detail here. T0 which is
-- okay. This is a relationship indexed to T0 which
is measured by the ASTM standard. That has a size
associated with it of one-inch thick.
The 14.4 reflects the fact that in
general, of course, the vessels we're assessing are
thicker so you need to shift that out a bit. That is
the short answer. The more detailed answer we'll talk
about later.
MR. KIRK: Going through the flaws, and
this now -- so that completes the toughness and
embrittlement model. We now get on to the flaw model.
The graphic in the upper left-hand corner shows you
where the flaw data comes in. This actually goes into
the stress intensity factors so this is now on the
driving force side.
We've developed distributions of flaws in
fabrication welds, repair welds, cladding welds, and
plate materials. Each of those distributions includes
a description of flaw density, flaw size, flaw
location, and orientation.
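One way to picture how a code might consume such distributions is a simple Monte Carlo sampler. Everything here, the Poisson flaw counts, exponential depths, and 50/50 orientation split, is a hypothetical sketch, not the actual FAVOR flaw model or its fitted parameters.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler; fine for the small means used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def sample_flaws(volume, density, mean_depth, rng):
    """Sample one region's flaw population (HYPOTHETICAL model):
    count ~ Poisson(density * volume), through-wall depth (inches)
    ~ exponential, orientation axial or circumferential 50/50."""
    flaws = []
    for _ in range(poisson(density * volume, rng)):
        depth = rng.expovariate(1.0 / mean_depth)
        flaws.append((depth, rng.choice(["axial", "circumferential"])))
    return flaws

rng = random.Random(1)
flaws = sample_flaws(volume=100.0, density=0.05, mean_depth=0.2, rng=rng)
```

The exponential depth choice encodes the qualitative point made below: small flaws are common and sampled statistically, while the rare large flaws are the ones that drive failures.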
The information that we've used to
construct these statistical models include
experimental data and data from two expert sources
including a model called PRODIGAL which is a computer
code that actually arose out of an expert elicitation
and, indeed, from an expert elicitation that Debbie
Jackson and Lee Abramson completed probably about two
years ago now.
This table describes the sources of
experimental data. This has been a major program with
the NRC Office of Research where we have actually
gone out and gotten either ex-vessel material or
material that was supposed to go in a vessel that
never made it, cut it up, and destructively and
nondestructively examined it.
To follow the outline, I would like to
start by talking about the assumptions and the process
for model building. The assumptions fall into three
categories; basic assumptions, assumptions
necessitated by procedure, and assumptions based on
observation and physical understanding.
I'll start with the basic first. The
basic assumption is that except in limited cases we
will employ no theoretical or physical model of flaw
formation. That is not to say that some don't exist
out there. It is to say that we didn't find them
sufficiently well developed or have the ability to
develop them sufficiently for this purpose.
Therefore, we are treating our
experimental data as the highest truth so,
consequently, since we've only got data, we don't have
an independent model. These uncertainties will have
to be treated epistemically. We employ expert
opinions and models only when necessary and they have
crept in to a small extent basically where we didn't
have enough data.
The third point is that the inspected
material, therefore, must be assumed to represent
adequately all the material conditions of interest.
As a consequence, when we look back we said we had
data looking at welds which is the most important
thing in this analysis. We have weld data from two
vessels, PVRUF and Shoreham.
When it came to deciding on the flaw
distributions that we would use, you could take those
two and put them together and essentially take an
average out of them, or you could treat them
individually and do what regulators are prone to do
and use the most conservative one.
In this case we've used our most
conservative data and assumed it to represent the
whole of the population. The data were not averaged,
the reason being that -- certainly not to be
overly critical because, as I mentioned to Bill before
lunch, we certainly have much more data than we had
before.
Having said that, we've got extensive data
on two samples of material. We are expanding that to
all the vessels in service and we don't have a
physical model to say if that is appropriate or not.
You certainly would hope that it would be
appropriate because all these materials were
procured to the same standard. They were all
inspected nominally the same way and so on and so on.
The judgement that's been made, you can
characterize that as being conservative and you can
say that it's conservative for the available data. As
I mentioned before, we don't really have a good way to
project to other data sets.
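The bounding-versus-averaging choice described above can be sketched in a few lines. The flaw counts and examined volumes below are hypothetical placeholders for illustration, not the PVRUF or Shoreham measurements.

```python
# Sketch of the choice described above: given flaw counts from two inspected
# vessels, either pool the data into an average density or take the bounding
# (most conservative) per-vessel density. Numbers are hypothetical.

def pooled_density(counts, volumes):
    """Average density: total flaws divided by total examined volume."""
    return sum(counts) / sum(volumes)

def bounding_density(counts, volumes):
    """Conservative choice: the highest single-vessel density."""
    return max(c / v for c, v in zip(counts, volumes))

counts = [6, 18]      # flaws found in vessel A, vessel B (hypothetical)
volumes = [0.5, 0.5]  # cubic meters of weld examined in each

avg = pooled_density(counts, volumes)      # 24.0 flaws/m^3
bound = bounding_density(counts, volumes)  # 36.0 flaws/m^3
print(avg, bound)
```

Treating the bounding data set as representative of the whole population, as described in the testimony, corresponds to using `bounding_density` rather than `pooled_density`.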
CHAIRMAN FORD: So when you look at this
data, you are saying that, in fact, it's that?
MR. KIRK: No. I'm saying -- Dr. Ford is
showing a graph of flaw size for the Shoreham and
PVRUF vessels and I think Shoreham was -- I can't remember
which was higher.
CHAIRMAN FORD: Shoreham is higher.
MR. KIRK: Shoreham is higher. We use the
Shoreham curve. We don't assume -- we use the
curve. We use the
distribution, yes. Looking at the procedural
assumption, I'll try to go through these very quickly
but feel free to stop me when you want. The largest
flaws have been the subject of our destructive
inspection recognizing that the large flaws will be
driving the failures.
The small flaws have only been
destructively inspected on a sampling basis and in
constructing our distributions we've combined the
larger flaw destructive evaluation data with the small
flaw NDE data in construction of those distributions.
All reported defects in FAVOR have been
modeled as sharp cracks. I know the folks at PNNL are
on the line. They designed their procedures to
minimize the number of volumetric defects like
porosity that were actually reported. However, having
said that, they also wanted to be careful that no
nonvolumetric defects were missed.
Consequently, certainly some volumetric
defects were included in the characterization. In
FAVOR we model those as sharp cracks and that is
clearly a conservative assumption.
We've idealized complex clusters of flaws,
porosity, and so on, into single, simple, planar
elliptical or semi-elliptical cracks. That's
certainly a customary procedure within ASME code.
It's become a customary procedure because it's
conservative in virtually every case.
However, I'm not going to sit here and
swear to you that I couldn't come up with some complex
cluster of flaws for which the K at some point at some
location of the flaw front isn't bigger than that of
the circumscribed ellipse, but we have adopted that
procedure anyway.
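The cluster-to-ellipse idealization just described can be illustrated with a small geometric sketch. The coordinates, and the convention of sizing the ellipse axes to the cluster's overall extent, are assumptions for illustration, not the ASME procedure itself.

```python
# Illustrative sketch (not the ASME procedure): replace a cluster of small
# defects, given as (axial, through-wall) coordinates, with one simple
# planar elliptical crack whose axes span the cluster's overall extent.

def idealize_cluster(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    center = ((max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2)
    half_length = (max(xs) - min(xs)) / 2  # semi-axis along the weld
    half_depth = (max(ys) - min(ys)) / 2   # semi-axis through the wall
    return center, half_length, half_depth

cluster = [(0.0, 1.0), (3.0, 2.0), (1.0, 3.0)]  # hypothetical defect spots
center, c, a = idealize_cluster(cluster)
print(center, c, a)  # (1.5, 2.0) 1.5 1.0
```

The single crack's extent covers the whole cluster, which is why the idealization is conservative in virtually every case, as the testimony notes.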
Flaws are measured in their true size and
shape in PNNL's inspection, but in FAVOR they are
assumed to lie only in the axial or circumferential
direction simply because with FAVOR we had no
intention of it ever doing mixed mode fracture
calculations. Again, that is a customary procedure in
engineering assessment, broadly conservative but I'm
sure there are one or two exceptions to the rule.
We'll get to this on the next slide but we
found that virtually all the flaws that we found in
the weld metal were, in fact, on the fusion line.
FAVOR assumes a bi-material -- well, it doesn't
characterize the heat affected zone. We have weld
metal properties. We have plate or forging
properties. There's no properties in between.
However, all the flaws, or almost all of the
flaws, are physically found to lie on the fusion line,
so a judgement had to be made regarding what properties,
what chemical composition, to assign to those flaws in
assessing our embrittlement level.
Lacking any better knowledge and not being
prepared to model anything other than a bi-material,
the decision was made to assume that the fusion line
flaws were controlled by the more embrittled of the
surrounding materials and that is obviously a
conservative assumption.
The weld flaw distribution that is
actually used in any particular vessel analysis is
based on a rule of mixtures from the different weld
process constituents. In other words, if you go in
detail into the PNNL report, you will find that they
reported to us different flaw distributions for SMAW,
SAW, and repair welds.
Within FAVOR we don't map where the
different weld types are. We just know that there's
a weld there. So what we've done, for example: most
of the vessel is SAW, so the resulting flaw
distribution is, I think, 96 percent SAW, one percent
SMAW, three percent repair. That is assessed on a
vessel-by-vessel basis, but it's just a simple rule of
mixtures based on a random draw from the experimental
data.
Obviously all of those things -- well,
some of those things could be done better. Some of
them perhaps could not be done better but all of them
are outside of the scope of the current project.
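The rule of mixtures Mr. Kirk describes amounts to a weighted sum over weld process types. The 96/1/3 split is the one quoted above for the example vessel; the per-process flaw densities below are hypothetical placeholders, not the PNNL values.

```python
# Sketch of the rule-of-mixtures combination described above. Weights are
# the 96% SAW / 1% SMAW / 3% repair split quoted in the testimony; the
# per-process flaw densities are hypothetical.

def mixed_density(weights, densities):
    """Vessel flaw density as a weighted sum over weld process types."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * densities[k] for k in weights)

weights = {"SAW": 0.96, "SMAW": 0.01, "repair": 0.03}
densities = {"SAW": 10.0, "SMAW": 50.0, "repair": 200.0}  # flaws/m^3

print(mixed_density(weights, densities))  # 0.96*10 + 0.01*50 + 0.03*200
```

Because SAW dominates the weights, it dominates the mixed distribution, which matches the later remark that the vessels are made virtually all from SAW.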
Moving on to assumptions that have been
based either on observation or physical understanding,
we establish -- of course, we get these statistical
distributions of flaws with tails that go on forever.
As Lee Abramson, our statistician, points out, just
because you've never seen a quarter T flaw doesn't
mean one might not exist somewhere, which is unlikely
but perhaps true.
In any event, one needs to decide if you
are just going to not truncate the distribution and
have ever diminishing probabilities for very large
flaws, or if you are going to truncate it somewhere.
We took the decision to truncate it and we established
limits based on physical arguments.
For example, the largest flaw that is
allowed to exist in a weld is the maximum extent of a
weld repair cavity. We picked those limits based on
physical arguments like that that we believe to be
realistic.
We have since done sensitivity studies in
FAVOR and found out that picking the limit half as
much or 10 times more didn't make a difference at
all. We believe it to be a realistically conservative
characterization that really has no effect on the
results. I've already discussed this before.
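The truncation described, capping the size distribution at a physically argued limit such as the maximum weld-repair cavity, can be sketched as rejection sampling. The exponential size distribution and all numbers below are assumptions for illustration, not the FAVOR model.

```python
import random

# Illustrative sketch (not the FAVOR algorithm): draw flaw depths from an
# exponential size distribution and reject any draw beyond a physical upper
# limit, e.g. the maximum extent of a weld-repair cavity. All numbers are
# hypothetical; depths are in fractions of the vessel wall.

def sample_depths(n, mean_depth, limit, seed=0):
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        d = rng.expovariate(1.0 / mean_depth)
        if d <= limit:  # truncation: discard flaws beyond the limit
            out.append(d)
    return out

depths = sample_depths(10000, mean_depth=0.1, limit=0.23)
print(max(depths) <= 0.23)           # no sampled flaw exceeds the cap
print(sum(d > 0.2 for d in depths))  # very few draws land near the cap
```

Because so little probability mass sits near the cap, halving or multiplying the limit by ten barely changes the sampled population, which is consistent with the sensitivity-study result reported above.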
CHAIRMAN FORD: That rather assumes that
your NDE inspection capability and the periodicity
with which you do it is capable of finding all those. Is
that, in fact, the case?
MR. KIRK: Actually, that's a whole other
interesting point, Peter, because we don't right now
-- this has always been a subject of debate between us
and the industry and the ASME code. There really is
no explicit credit given for the inspections other
than that it gives everybody a warm feeling that
things are the way they expected them to be.
But it isn't like Mark referred to heat-up
and cool-down curves earlier which were all based on
the quarter T flaw or flaws at quarter T, three-
quarter T, a good portion of the way through the wall.
Somebody could right now inspect their vessel with the
best available UT technology and show to themselves at
least convincingly that that sort of thing just isn't
there.
The regulation still requires them to do
that calculation as if the crack were there. In that
sense, there's no credit given, nor is there credit
given for vessel specific inspections as part of this
project. That would be something, again, if we were
talking about somebody coming in on a Regulatory Guide
1.154 plant-specific analysis; then the regulator
would be stuck with trying to
decide what level of effectiveness and probabilities
of detection and sizing to apply for that specific
case on a vessel specific basis.
MEMBER SHACK: They only assume
fabrication flaws, too.
CHAIRMAN FORD: So it's not going to be
a stress corrosion crack. That's not even --
MEMBER SHACK: Not contemplated.
CHAIRMAN FORD: Not contemplated.
MR. HACKETT: That's correct.
MR. KIRK: I've already touched on the
second row. Since virtually all of the weld flaws
were found on the fusion line, and that's where you
would expect them to be anyway, that's where they've
all been placed.
All flaws in the cladding have been
assumed to exist parallel to the welding direction
because that's where we found them all and because
that's where you expect them to be. They are normally
lack of inter-run fusion. This is a change from the
previous code where cladding flaws were all stuck in
the axial direction because, well, heck, it was
conservative.
Here we've got a very easy physical
observation to make. They just took all those surface
flaws and essentially made them benign. The
distribution of cladding flaws we've got very limited
data on.
We have a few observations, so we base the
overall trend on the model coming out of the
elicitation code PRODIGAL, which relative to our
experimental data is providing what would appear to be
a fit. It basically goes through the data.
Same thing with plate flaws. We have some
data on plate flaws and we're getting more. However,
the dataset wasn't large enough for our statistical
people to feel comfortable basing it on the data
alone, so it is based on the expert judgement process
that Debbie and Lee conducted.
The experts told us that they expected
small flaws, which in plates were defined as flaws
less than a quarter inch, to have one-tenth of the
weld flaw density.
If you had 10 flaws in a weld, you would
expect to find one in a plate for small flaws, and
one-fortieth of the weld density for large flaws. So
we've used the weld flaw distribution and adjusted it
down by those factors.
It turns out -- and this is perhaps just
luck; it had to be, because the data wasn't available
-- that that also goes through the data. Then the last
part is that when we have flaws in axial welds,
circumferential welds, or cladding, we know based on
the weld which way the flaw is going to go. Axial
welds have axial flaws; cladding and circ welds have
circ flaws.
In plate we don't have that preferred
orientation and, indeed, in the inspections of the
plate, other than laminar flaws, which we have
discarded because they don't present a fracture
concern, there's no preferential orientation.
We found no preferential orientation in
the experimental data so the FAVOR code when it
generates a plate flaw has a subroutine that just does
a coin toss and 50 percent of the time it's an axial
flaw and 50 percent of the time it's a circ flaw.
Again, for the results you're looking at
now for Oconee we could put them all axially and it
wouldn't matter. For Beaver Valley it's obviously
going to be a very relevant judgement to make because
they are plate limited.
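The orientation rule just described, weld direction for weld and cladding flaws, a coin toss for plate flaws, can be sketched directly. The region names are labels invented for this illustration, not FAVOR identifiers.

```python
import random

# Minimal sketch of the orientation rule described: weld and cladding flaw
# orientation follows the weld direction, while plate flaws get a 50/50
# axial/circumferential coin toss, like the subroutine described above.
# Region labels are hypothetical, not FAVOR identifiers.

def flaw_orientation(region, rng):
    if region == "axial weld":
        return "axial"
    if region in ("circ weld", "cladding"):
        return "circumferential"
    if region == "plate":
        return "axial" if rng.random() < 0.5 else "circumferential"
    raise ValueError(region)

rng = random.Random(42)
tosses = [flaw_orientation("plate", rng) for _ in range(10000)]
print(tosses.count("axial"))  # roughly half of 10,000
```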
In the interest of time, I'll just show
one result from the data. This shows the flaw
densities that are being used. It's kind of a busy
graph. I apologize for that. On the horizontal axis
we have normalized the flaw density to the median flaw
density and the median flaw density is shown in the
bar chart.
We are seeing out of the data what you
would think you would see, that the small flaw, small
being less than one bead size, less than one bead
depth, I suppose, the small flaws occur with much
greater frequency than the large flaws.
However, if you refer to the cumulative
distribution function, the small flaws are much more
tightly grouped. There is much less uncertainty about
the size of those flaws than there is the large flaws.
This is just a diagrammatic representation
of some of the information out of the experimental
data. There are whole books on this so I won't go
into that in detail but just some of the information
that's going into the FAVOR code.
In terms of how we do the flaw model in
FAVOR, as I pointed out before, the distribution used
is either based on a rule of mixtures for the
different weld types or on bounding cases in terms of
what data we decided to use, as noted previously.
In terms of how we treat uncertainty,
since we have no independent physical model, the
statistical uncertainty in the data is the only
uncertainty that we explicitly account for in the model.
We quantify that uncertainty by generating 1,000
different input files and randomly drawing from those
input files when we go through a FAVOR run.
Certainly the algorithm to generate the
flaw distribution could have been coded into FAVOR.
This was simply an expediency to say, well, it's
easier to write a preprocessor and treat this as input
data. As I mentioned before, the uncertainty has been
modeled as epistemic.
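The preprocessor idea described, generating many candidate flaw-distribution input files and drawing one at random per run, can be sketched with a simple bootstrap. The data values and resampling scheme below are assumptions for illustration, not the actual PNNL/FAVOR preprocessor.

```python
import random

# Sketch of the preprocessor idea described above: generate many candidate
# flaw-distribution "input files" by resampling the observed data, then
# draw one at random for each simulation run, treating the file-to-file
# spread as epistemic uncertainty. All data values are hypothetical.

def make_input_files(observed_densities, n_files=1000, seed=1):
    rng = random.Random(seed)
    files = []
    for _ in range(n_files):
        # Bootstrap resample: one plausible flaw density table per file
        files.append([rng.choice(observed_densities)
                      for _ in range(len(observed_densities))])
    return files

def pick_file(files, rng):
    # Each run draws one file at random, as described in the testimony
    return rng.choice(files)

observed = [8.0, 11.0, 9.5, 14.0, 10.0]  # hypothetical flaw densities
files = make_input_files(observed)
run_input = pick_file(files, random.Random(7))
print(len(files), len(run_input))  # 1000 5
```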
MEMBER SHACK: Can you come back to that
flaw density curve?
MR. KIRK: Yeah.
MEMBER SHACK: How did you -- you take
different pieces of the weld and you find the flaw
density in that and then you measure them to the
median flaw density for a given weld? What does this --
MR. KIRK: I'm sorry. Your question is
how do you measure the flaw density?
MEMBER SHACK: Yeah. How is this
determined from the experiment? I mean, is it saying
that if I have three feet of weld, there's some small
cubic location that actually has a much higher density
of flaws and that is where I get the two?
MR. KIRK: I'm hoping PNNL is on the line
because I'm having trouble with this one.
Fred or Steve, are you there?
MR. SIMONEN: Can you hear us?
MR. KIRK: Yes, we can, Fred.
MR. SIMONEN: For the flaw density we
might have examined, let's say, .05 cubic meters of
repair metal. We found, say, six flaws, large repair
flaws within that repair metal. Nominally it would
be, you know, six flaws and divided by that amount of
metal you get the average density.
Statistical uncertainty is that if you
only find a few flaws in there, you don't know if you
would have examined 50 times more metal that you might
have found a different value. If you only find one or
two flaws in a volume of metal, you can't say too much
about the average density. There are statistical
equations to characterize the uncertainty due to the
small sample size.
MEMBER SHACK: Oh. So this is basically
a sample size correction then?
MR. SIMONEN: Exactly.
MEMBER SHACK: Okay. Okay. Thank you.
MR. KIRK: Thanks, Fred.
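Dr. Simonen's point, that a handful of flaws in a small examined volume leaves large statistical uncertainty in the density estimate, can be illustrated with a small Monte Carlo sketch. The Poisson model and all numbers below are assumptions for illustration, not necessarily PNNL's exact treatment.

```python
import math
import random

# Illustration of small-sample uncertainty in a flaw density estimate:
# model the flaw count in a fixed examined volume as a Poisson draw (an
# illustrative assumption), and repeat the "inspection" many times to see
# how widely the resulting density estimates scatter.

def poisson(rng, lam):
    # Knuth's method; adequate for small expected counts
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(3)
volume = 0.05          # m^3 of weld examined (hypothetical)
true_density = 120.0   # flaws/m^3, hypothetical "truth"
estimates = [poisson(rng, true_density * volume) / volume
             for _ in range(2000)]
print(min(estimates), max(estimates))  # wide scatter around 120
```

With an expected count of only six flaws, individual inspections routinely estimate densities far above or below the true value, which is the sample-size effect discussed above.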
We covered that. I'm sorry. Bill, did
you have another question?
MR. KIRK: In terms of looking at the flaw
model and significant changes from the earlier
analysis, on this graph the mean flaw distribution
curves that were used in the Oconee analysis are
indicated for the weld metal, the base metal, and
surface flaws in the cladding. You can compare that to the
Marshall distribution labelled M which was used in the
earlier 1980s analysis.
Two relevant things to point out in
comparing them with now are that the Marshall
distribution had in some cases bigger flaws than we have now but
much more significantly all the flaws in the Marshall
distribution are surface breaking flaws. Of course,
that is a much more severe flaw than an embedded flaw.
Whereas now virtually all the flaws in the
weld metal, virtually all the flaws in the base metal
are embedded so there's a substantial change that has
a substantial effect on the calculated probabilities
of initiation and failure.
Finally, my wrap-up slide. This is the
message or things we would like you to remember coming
away from the PFM summary. In the area of toughness
models, we have referenced all of our models and our
uncertainty analysis to both toughness data and
physical understanding.
We have removed a significant conservative
bias in the un-irradiated index temperature. We have
removed a small non-conservatism in the arrest model.
And we have explicitly treated the aleatory nature of
toughness uncertainty where we've quantified it and
it's being treated in FAVOR.
In the embrittlement model, again we have
referenced to, in this case, Charpy data and, I should
say, physical understanding. We have a correlation
with an improved empirical and physical basis and we
have corrected for slight biases in the Charpy-based
shift estimates.
In fluence, which we didn't get into in
detail but it certainly bears mentioning, in the
earlier analyses the peak fluence in the vessel was
assumed to exist everywhere on the vessel.
Now we use a fluence map that's calculated
by Brookhaven National Lab which shows that the peak
fluence, of course, only occurs at very limited
locations in the vessel. By allowing the fluence to
vary over the vessel surface in a realistic way, in
huge portions of the vessel the radiation levels drop
substantially, and those regions effectively drop out
of the picture. That is a substantial change.
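The fluence-map point can be put in numbers with a toy comparison. The ten-region map and the threshold below are invented for illustration, not the Brookhaven calculation.

```python
# Back-of-the-envelope sketch of the fluence-map point: evaluating
# embrittlement with a spatially varying fluence, rather than the peak
# value everywhere, leaves most of the vessel wall below any damage level
# of concern. The map and threshold are made-up illustrative numbers.

# Hypothetical fluence (10^19 n/cm^2) over ten equal vessel subregions
fluence_map = [5.0, 4.2, 2.0, 1.0, 0.6, 0.4, 0.3, 0.2, 0.2, 0.1]
threshold = 2.0  # hypothetical fluence above which a region matters

# Old approach: assume the peak fluence applies everywhere
peak_everywhere = [max(fluence_map)] * len(fluence_map)

at_risk_old = sum(f >= threshold for f in peak_everywhere)
at_risk_new = sum(f >= threshold for f in fluence_map)
print(at_risk_old, at_risk_new)  # 10 3
```

In this toy map, only three of ten subregions remain above the threshold once the realistic spatial variation is used, mirroring the substantial change described.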
Flaw distributions while based on limited
data are, indeed, based on significantly more data
than before. Most flaws are embedded rather than
being on the surface. However, I should indicate that,
relative to the previous analyses, the circa-1980s
analyses would put one considerably larger flaw in
each of the subregions. Now we've got
many, many more flaws than before.
The vessels are now seeded with, depending
upon the analysis, somewhere between 5,000 and 10,000
flaws, which is indeed appropriate based on our
experimental data, a significant departure from before,
but it appears that the effect of making those flaws
buried and making them smaller has more than overcome
the effect of having many, many more of them.
That's the summary for PFM. I appreciate
your endurance.
MEMBER SHACK: Can we go back to the model
for just a second?
MR. KIRK: Sure. Which one?
MEMBER SHACK: Just the size distribution.
MR. KIRK: This one?
MEMBER SHACK: No, the next one. There's
an expert judgement that the distribution is
truncated, for example, at 23 percent for the welds?
MR. KIRK: Yes, 23 percent. I believe the
number was based on the extent of a weld repair cavity
that is allowed by ASME and I believe that was two
inches. It happens to be 23 percent of the Oconee
vessel wall. I'm trying to remember the logic we had
on the base metal cut. I think we all rationalized
that it should be less than weld metal and I think
that was about it.
MR. HACKETT: I think the base metal was
a judgement amongst sort of the working group that was
involved with that.
MEMBER SHACK: What we're seeing here is
some sort of a multiplication, right? There's a size
factor times the density distribution to get the --
MR. KIRK: Yes. That's right.
MEMBER SHACK: The size distributions,
those are going to be universal and the density will
be -- oh, the density is universal, too, because we
don't have an individual vessel type.
MR. KIRK: That's right. The density that
we measured on the Shoreham vessel will be assumed to
apply to all the vessels. The size distribution will
vary slightly, and I emphasize slightly, relative to
the ratio of SMAW to SAW to repair. That changes the
rule-of-mixtures weighting that gets in here. Since
the vessels are made virtually all from SAW, that's
what's dominating the distribution. Again,
essentially, yeah, a universal size distribution as well.
MEMBER SHACK: Now, until we get to the
cut-off, are those slopes basically coming out of the
analysis of the data from P.B. Rough and Shoreham?
MR. KIRK: Yes.
MEMBER SHACK: So those are empirically based?
MR. KIRK: Those are empirically based.
That's right.
MEMBER SHACK: And then an expert
elicitation says it sort of really tails off pretty
steeply at some point?
MR. KIRK: Well, the expert elicitation
was do you expect to find a flaw of infinite size?
Well, no. Okay. Well, how much smaller? Well,
that's why we're experts. Pay us more.
MR. HACKETT: I guess I would make the
suggestion this is another good break point.
CHAIRMAN FORD: I was about to say exactly
the same thing.
MEMBER SHACK: Can I ask one more
question? Is 1,000 samples enough? I mean, does that
quantify your -- you know, what uncertainty does that
give you if I sort of do my order statistics? Is that
what I do to get confidence bounds on these things?
MR. KIRK: Yeah, I think the -- well, I'll
take a cut at it and maybe Terry or Fred would like to
chime in. I mean, I think of it as like two certainly
isn't enough, 30 certainly isn't enough. You want to
take enough draws from the mathematical model that we
have constructed to reproduce the scatter that we've
seen in the flaw data. Certainly 1,000 covers that to
me. A more mathematical explanation you're going to
have to ask somebody else.
I don't know. Terry, do you have a
thought on that?
MR. DICKSON: I agree with you.
MEMBER SHACK: Okay. So you weren't
aiming for some percentile confidence bound by some
order statistics argument?
MR. DICKSON: Terry Dickson, Oak Ridge
National Laboratory. Actually, I'll defer to Fred
Simonen for the definitive answer, but I believe he's
going to tell you that he did a Monte Carlo analysis
basically with the intent to reproduce the scatter in
the data that was seen.
MR. HACKETT: Is that right, Fred?
MR. SIMONEN: Yes. We thought 1,000 was
more than enough samples. Just given the number of
actual observations we had, I don't think we could say
if we get 10,000 we could be doing anything more
realistic than the amount of data we had collected on
the flaws.
MEMBER SHACK: That's true. I guess there
is no point --
MR. SIMONEN: I guess we didn't want to go
beyond that. It just generated a lot more data for
Terry to handle internally within his computer code.
That is kind of a judgement as to what would be a
reasonable number of files to give Terry to manipulate
within his FAVOR code.
MR. BISHOP: Bruce Bishop at Westinghouse.
What that is is 1,000 different flaw distributions.
The flaw distribution is sort of the measurement of
the uncertainty in the density and the flaw size.
Actually what you're saying is we don't know that
distribution exactly.
MEMBER SHACK: There's only one flaw
distribution, right?
MR. BISHOP: No, there's a 1,000.
MEMBER SHACK: This is all epistemic uncertainty?
MR. BISHOP: What we're saying is we are
representing that uncertainty as we're not sure that
we know that one distribution exactly so we're going
to simulate using Monte Carlo simulation the
uncertainties around that distribution and that's
where the 1,000 distributions come from.
Is that right, Fred?
MR. SIMONEN: That's correct.
MEMBER SHACK: Yes. I'll think about it
some more.
MR. HACKETT: And next we'll come back and
pick up the thermal hydraulics portion.
CHAIRMAN FORD: I'd like to talk with you
and the colleagues here about that. Let's recess until 25
past. Then we'll find out the best way of doing the
rest of the time we have.
(Whereupon, at 2:13 p.m. off the record
until 2:33 p.m.)
CHAIRMAN FORD: Okay. We have decided to
keep to the schedule, albeit a revised schedule. We
are going to revise the revised schedule.
MEMBER SHACK: You even said that with a
straight face.
CHAIRMAN FORD: We'll talk about --
MR. HACKETT: I think this means we'll be
picking up on slide 27 in everyone's package.
Professor Mosleh from University of
Maryland will be making a presentation.
MR. MOSLEH: Yes. I am actually
representing a large group of people who work on this
part of the problem. As Alan mentioned in the
morning, this has been truly an interdisciplinary,
interactive effort.
I would like to acknowledge the
contributions by other team members: Professor Kazys
Almenas of the University of Maryland, an
expert in thermal hydraulics, Bill Arcieri from ISL,
an expert in thermal hydraulics, Dave Bessette from
NRC, expert in thermal hydraulics, Dr. James Chang, an
expert, among other things, in thermal hydraulics at
the University of Maryland.
If I were to kind of divide the areas of
expertise among all these thermal hydraulic experts
I'm the least experienced and that's probably why they
asked me to be the presenter. If you divide the areas
of expertise into normative and substantive expertise,
I have the normative part and the rest of the team has
the substantive expertise, so I will defer the
questions on that subject to my colleagues and team
members.
By way of introduction, we are in
the middle block, thermal hydraulic analysis. As a
major block among all the other pieces or aspects of
this problem, one of the key requirements from day one
was to develop an internally and externally consistent
set of methods and techniques with respect to the
treatment of the modeling issues as well as
uncertainty: consistent within disciplines and
compatible across disciplines.
Then you see examples where compatibility
became actually a constraint or a driving source or
mechanism for our analysis as a boundary condition.
Back to what Dr. Ford brought up this
morning which was an important aspect to address, and
that is managing such a large effort with respect to
interactions among the team members. This was not
easy and not obvious from the beginning.
As a result, a number of activities went
on in parallel. At the end, or in the middle when we
compared notes, we recognized that there needed to be
a certain type of interaction. One lesson learned from
the work is that, indeed, toward the end it became
almost one team, effectively, frequently interacting,
exchanging notes, and providing feedback and
feed-forward to different pieces of the methodology
and the process. That in itself deserves, I think,
much attention as a model for future activities.
One aspect of this was recognition, of
course, of the key issue of uncertainty assessment: we
did an assessment of the uncertainties that we could
indeed address explicitly.
But also part of the uncertainty
assessment became uncertainty management in the way
that Alan mentioned this morning where we would
actually refine the models to remove the sources of
uncertainty and, therefore, improve the modeling
process. That was also, I think, an important aspect
of our experience.
With this introduction, I would like to
give you an overview of the presentation. First, the
constraints and assumptions of the model and the
entire process. Then some notes about the RELAP model,
which is the core of our analysis, and the method that
we use. We call it top-down; probably not the right
terminology for that.
Method of defining plant states that are
PTS significant to enable us to identify what areas to
focus on, what accidents and areas made sense, and
what things we need to hand to the PFM folks for
further analysis. In addition, obviously, we also had
to devise a method of identifying which RELAP runs,
which thermal hydraulic runs, we needed.
With that, our job being primarily on the
uncertainty side, which is my part, method of
identification of dominant sources of uncertainties
and the characterization of such uncertainties.
Finally, applying these tools and
techniques that we developed throughout we ended up
with a set of thermal hydraulic runs that represented
not only the best estimate but also the uncertainties
with the corresponding frequency distribution.
Constraints and assumptions, kind of a
high-level view of the issues that we have to wrestle
with. I'm sure some of you are familiar with the work
in Europe and also in the past in the U.S. for
characterizing and addressing uncertainty in thermal
hydraulics.
You know that much of the effort has been
focused on single scenarios, and we can do quite a
reasonable job by focusing narrowly on a single
scenario. Here we were faced with a task of about
10,000 or so, 10 to the fourth, scenarios,
each of which in principle would be assigned to a
specific thermal hydraulic run.
We had to reduce that to about 100 or less
and that was the concern at the beginning imposed by
obviously resources as well as input requirements or
formats for the FAVOR code.
In addition, due to the complexity of
thermal hydraulic models, as you know, and their
inherent nonlinearities, we had to simplify the
process using screening criteria and screening tools
to help us focus on those aspects of uncertainty that
merit consideration and explicit modeling, avoiding
letting the nonlinearity put almost an impenetrable
barrier between us and the solution or the answer we
were interested in.
One of the other assumptions that we made
was that RELAP was essentially a proper model: that,
subject to the variabilities and uncertainties that we
recognized and addressed to the extent possible, that
tool provided a reasonably good answer for the
characterization of the pressure and temperature
traces that we were interested in.
A few notes quickly on the RELAP model
description. The experts among the team who ran RELAP
started with the Oconee model developed by INEL in the
early '80s in the original PTS evaluation study,
introduced some setpoint changes to update it to the
current plant values.
One change is that the nominal value for
the temperature was set at 70 rather than the 90 used
in the '80s study. We used that, by the way, as a
variable in our uncertainty assessment later.
Some control models were added to enable
simulating operator response and action times which
Alan mentioned in the morning also. A few other model
corrections were introduced. For instance, to address
the problem with the overfilling of the intact steam
generator, they introduced a control on EFW flow so
that it could be controlled independently.
As kind of a step toward refining and
making the code more accurate, they added a two-
dimensional downcomer model, as opposed to the 1D
treatment which is applied to the rest of the code.
Do you have a comment, Dave, on this? No?
What is different in this particular
study? Well, the fact that we had a faster running
code, something with which we could manage a few
hundred runs, was a tremendous help compared to the 10
or so runs that were the basis of the 1980 study. This
allowed us to run many, many cases and do sensitivity
and parametric studies to help us screen out factors
that did not matter and zoom in on things that
contributed significantly to the uncertainty and the
basic characterization of the plant thermal hydraulic
response.
In the area of input preparation there is not
much change. It is still time consuming. However,
the graphical output capabilities have improved
significantly. We use that a lot during our
assessment of the scenarios and thermal hydraulic runs.
What's different, again, and probably the
most important part of this presentation, is that we
were able to do uncertainty evaluations and
uncertainty assessments as an integral part of this
round, as compared to 1980.
And some new insights with respect to the
validity of the 1D model, based on experimental
results and some computational fluid dynamics results,
were basically things that enabled us to convince ourselves
that certain assumptions we were making were valid or
acceptable at this point.
Another thing is that the new code has the
capability of addressing two-phase flow, versus the
old five-equation models that we had been using
routinely in the past. That is an expanded or enhanced
capability in that sense.
This is an overview and I would like to
spend a little bit of time on that because it is going
to be helpful in understanding how the whole process
actually generated the results. Even though this has
been kind of shown before, I would like to point out
specific things that relate to the thermal hydraulic
analysis.
On the left, the PRA event tree side, the
yellow box on the screen, you see scenarios coming
from PRA, and each scenario has an associated
uncertainty on the frequency or probability of
occurrence. The little curves that you see beside
each scenario are the uncertainty distributions on
the frequency of occurrence of the event.
That is about 20,000, 30,000 or more. I
don't have the most recent figure but there are many
such scenarios obviously. One step in reduction
happened in the interface between the yellow box and
the green box, namely we have to map tens of thousands
of scenarios to a limited number of thermal hydraulic
runs. That's one reduction process that we had to go
What you see as a distribution on that
side of run 1, run 2, etc., is an aggregation or the
sum of all the frequencies of all scenarios that have
been assigned to that particular bin. You see the same thing done for the remaining thermal hydraulic runs.
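The aggregation just described can be sketched in a few lines of Python. This is a toy illustration with made-up scenario identifiers and frequencies, not the project's actual PRA tooling: each scenario's frequency is summed into the thermal hydraulic run (bin) it was assigned to.

```python
# Toy sketch of scenario-to-bin frequency aggregation (illustrative values only).
from collections import defaultdict

# (scenario id, assigned T-H run/bin, mean frequency per reactor-year) -- hypothetical
scenarios = [
    ("seq-001", "run-1", 3.0e-4),
    ("seq-002", "run-1", 1.2e-5),
    ("seq-003", "run-2", 8.0e-6),
]

bin_frequency = defaultdict(float)
for _, th_bin, freq in scenarios:
    # the bin's frequency is the sum over all scenarios mapped into it
    bin_frequency[th_bin] += freq

print(dict(bin_frequency))
```

In the real analysis each scenario carries a whole uncertainty distribution rather than a single number, so the aggregation is over distributions, but the bookkeeping is the same.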
MEMBER SHACK: Now, run here means bin?
MR. MOSLEH: It's a bin that we initially assigned the most representative thermal hydraulic run to, and then we go through an expansion asking whether that is kind of a sufficient
representation of that particular bin. In some cases
we added more runs and in some cases the bin remained
one run bin.
Initially we had about 300 cases to play
with in terms of runs, although we were shooting or
focusing on reducing it to less than 100.
Now, at that point, addressing the
uncertainties added -- you know, the requirement for
adding runs or expanding the existing pool. Please
note that while we were looking at mapping the PRA
event tree sequences to the thermal hydraulic runs, we
identified additional runs we needed to do and then identified sources of uncertainty that we could actually put back into the event tree and, therefore, expand the event trees and reduce the number of uncertainty parameters we had to consider.
Back and forth between the green and the
yellow box we were iterating to optimize what we would
then consider in the thermal hydraulic uncertainty analysis.
The reason is that for each scenario, as
you will see, we identified multiple sources of
uncertainty and variability that needed to be
considered in addition to the characteristics of the scenarios going into those bins, and the fewer the better, because you are talking about a combinatorial
explosion if you have many, many factors. Each factor
sometimes is a continuous variable so we have many
values to consider and so on and so forth.
Moving from the RELAP characterization of
bins and assigning them to multiple scenarios, we have
one uncertainty analysis task starting with these bins
and adding flavors of various parameters producing
multiple curves, multiple versions or variations of
the initial set of thermal hydraulic results;
temperature, pressure, and heat transfer coefficient.
What you see in the middle box, the brown
box, for the first one of the group, you see three curves
resulting from one such uncertainty consideration and
the corresponding fraction of times, P1, P2, P3, that
each of those situations will emerge. I'll go through
these in more detail but this is an overview.
At the end, obviously, you naturally need
to modify the initial frequencies coming from the PRA
by the probabilities that reflect the uncertainties
that now we have added. You have gone from one
thermal hydraulic run to three thermal hydraulic runs
each of which has a certain probability of occurrence,
P1, P2, P3.
You modify the frequencies coming from the
PRA by these probabilities, P1, P2, P3, and you
generate a new frequency curve that is assigned now to
each of those sets of thermal hydraulic traces and
those are passed on to the PFM analysis where the
frequency distributions and the conditional
probability of failure of the vessel will then
generate when summed over all scenarios the total
frequency of vessel rupture as a result of PTS type
event and that is the very last distribution that you
see in the blue box.
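The arithmetic of that last step can be illustrated with a small sketch, under the assumption (stated above) that each bin's frequency is split across T-H variants by probabilities P1, P2, P3 and each variant gets a conditional probability of failure (CPF) from the PFM step. The bin names, probabilities, and CPF values below are made up for illustration; this is not the FAVOR calculation itself.

```python
# Illustrative sketch: total vessel failure frequency as
# sum over bins and T-H variants of freq * p_i * CPF_i.
bins = {
    "bin-A": {"freq": 2.0e-4,  # per reactor-year (hypothetical)
              "variants": [(0.5, 1e-6), (0.3, 5e-6), (0.2, 2e-5)]},  # (p_i, CPF_i)
    "bin-B": {"freq": 7.0e-6,
              "variants": [(1.0, 3e-4)]},
}

total = 0.0
for b in bins.values():
    for p_i, cpf_i in b["variants"]:
        # modify the PRA frequency by the variant probability,
        # then weight by the conditional probability of failure
        total += b["freq"] * p_i * cpf_i

print(f"{total:.3e} per reactor-year")
```

The split probabilities for each bin must sum to one, so the modification redistributes, rather than changes, the total frequency coming from the PRA.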
You have an expansion and contraction process
taking place addressing different types and flavors of
uncertainties, variability, spectrum of scenarios, and
so on and so forth. That's an overview.
The process is outlined in this set of
blocks. I just wanted to comment on a couple of
things. The yellow boxes, the lighter shade,
essentially refer to the process of identifying
what mattered in PTS from a top-down approach.
You have a disturbance in a plant, and in what kind of situations can you get to a potential PTS
scenario. This is the type of thing that is typically
done in a PRA space.
In this particular case, to ensure that we
have a comprehensive and complete coverage of PTS
scenarios we have this top-down approach utilizing
basic principles, heat balance in the primary system
and applying certain characteristics of the plant
itself, in this case Oconee, to identify the classes
and categories of PTS potential scenarios.
That is kind of the yellow boxes. That
parallels and shares many steps, many phases with what
Alan presented this morning in his functional event
tree perspective.
Then the brown boxes cover the uncertainty
layer on this. Initially even though we went through
a significant reduction in the number of scenarios,
still there was no point or no net gain from
addressing uncertainty on everything so we went
through further reduction to zoom in on those cases where uncertainty assessment would actually be cost-effective.
Through that process we have identified
important sources of uncertainty, reduced the number
of parameters that we had identified initially to
fewer so that the problem would be manageable in size.
Size management in this case was a complex process,
not very obvious as to what one would actually use and
rely on to reduce the number of runs and cases.
We went through the typical screening that
we do in PRA, namely frequency screening. Whenever
the frequency dropped below a certain level, we just
did not consider those. We also applied qualitative criteria, engineering judgement about the impact of various scenarios.
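The frequency-screening step just mentioned amounts to a simple cutoff, sketched below with hypothetical scenario names and frequencies (the 10^-8 per reactor-year level is the screening criterion cited later in this presentation):

```python
# Minimal sketch of frequency screening: drop scenarios whose
# frequency falls below the cutoff; keep the rest for analysis.
CUTOFF = 1e-8  # per reactor-year, per the screening level discussed

scenarios = {"S1": 4.0e-5, "S2": 3.0e-9, "S3": 2.0e-8}  # hypothetical
retained = {s: f for s, f in scenarios.items() if f >= CUTOFF}
print(sorted(retained))
```

In the actual process this quantitative cut was supplemented by the qualitative engineering judgement described above, so a low-frequency scenario could still be carried forward if its PTS impact warranted it.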
The starting point that was put together
by Professor Almanas initially was this very simple
but very informative model of a reactor where we have
the core and the reactor vessel, hot leg and steam
generator, the pressurizer, the steam generator
secondary side, and the part that we're interested in,
namely the reactor vessel downcomer in the yellow
highlighted area and various links and lines of the
cold leg, the hot leg, and the various sinks and
sources of heat and mass and energy.
As a guideline you could look at this and
say what could happen that would get us to a situation
that has a PTS potential? What are the heat sources?
What are the heat sinks? Then from this we identified
a number of parameters or characteristics or events in
PTS scenarios that would impact the temperature and pressure.
Remember, we are not interested in every possible set of thermal hydraulic characteristics and what happens to them in these scenarios. We are interested in the impact on the temperature and pressure in the downcomer.
Things that affect the temperature, I
don't want to go through this. These are kind of
fairly obvious in a way, some of them at least.
Things that would affect the temperature are the heat
capacity and heat sources and heat sinks, such as the core flood tank, high pressure injection, and the steam generators.
The coolant flow rate affects the
temperature, namely the RCP state, the reactor coolant
pump. A couple of things that are Oconee specific,
the possibility of mixing of core water into the downcomer due to the vent valves that they have in Oconee, and a number of phenomena that people have identified in the past as possible sources of impact on the
temperature, the flow resumption and interruption and
the boiling condensation.
On the pressure side you have some of the
same phenomena and same contributors, the mass, the
energy change in the RCS, and the mixing of the core water in the downcomer, and so on and so forth.
Through this process, starting from a simple picture and basic principles, which is the beginning of any serious work in thermal hydraulics, we identified these sets of variables. We know that you are dealing with a large RCS heat capacity, and that significant heat capacity requires a large heat loss to decrease the downcomer temperature fast enough to be of PTS concern.
This is an example to show graphically and
numerically that the heat sources are smaller than the
dominant heat sinks that we have identified, namely a
two-inch surge line break which can easily overcome
the decay heat decreasing the temperature.
On the left-hand side you see the primary
side break case and the impact of a two-inch or eight-
inch break, compared to the black line going up that
shows the net effect of the decay heat. The heat sinks,
the dominant ones, basically can actually put us in a
PTS situation by themselves.
On the secondary side we have a similar
situation, main steam line break and the SRV stuck-
open case. These are cases that could get you to also
a cooling effect that is not compensated by the decay
heat. These are obviously the dominant heat sinks. There are other minor sources of heat, such as the energy that the pumps put in, but they are not dominant.
This was mentioned earlier this morning.
To characterize scenarios on a screening level, we
looked at things that would give you a cool-down ramp
of more than 100 degrees and temperatures falling --
downcomer temperature falling below 400. As was
mentioned several times, we did not use these as the sole screening criterion.
In every case we had our eyes on the
pressure behavior making sure that we are not missing
anything that would be the result of rapid change or
significant change in the RCS pressure that combined
with the temperature behavior would get us into a PTS situation.
In general we realized that the thing that PTS was most sensitive to was the temperature. If you look at the three variables,
temperature, pressure, and the heat transfer
coefficient, if you were to rank those, I would rank
them as -- we would rank them as temperature first,
pressure, and then the heat transfer coefficient. It
turned out actually the heat transfer coefficient is
not a factor at all. We can vary it by some factors
and not get any significant impact or effect on the results.
These things we are using as guidelines to
help us kind of put our arms around the problem, the
multi-dimensional problem of how many variables do you
change at a time. Well, you know, first we look at
temperature and then we look at pressure. In that
order we can reduce the number of factors to consider
at a given time to a smaller number.
The earlier presentation so far has focused
on kind of general characteristics, general
considerations. At this point I'm just showing that
there are certain peculiar features of, in this case,
Oconee that needed to be considered. In this
particular case the existence of the vent valve that
could create a flow path for the hot water or steam
getting to the downcomer. That was a factor in our analysis.
As a result of these considerations
looking at the various factors, the team came up with
a matrix representation of the -- this is an event
space where you look at the primary and the secondary
states and you then look at various combinations of
those. On the primary side we have the primary side
intact or a break or breach in the integrity of the
primary, and there is a dividing line at the 1.5 inch break size.
That dividing line is based on the capacity of HPI to compensate and on the RCP trip set points. The dividing line also gives us a zone on the left-hand side, below 1.5 inch versus above 1.5 inch, which actually decouples the primary and the secondary.
On the secondary side you have things such
as whether you have breach of one steam generator or
two, or you have an overfeed case for one of the steam
generators and the combination of these so there are
a number of possible states.
And the combinations of the secondary side and the primary side states give us the matrix.
Some cases obviously are not of PTS concern, not a
concern at all such as intact on the primary side and
nominal situation for -- a normal situation for the secondary side.
As we go through the matrix we can
identify certain characteristics that are unique to
the cells of this matrix. Each cell is further divided into subcells, depending on the state of the high-pressure injection system, which plays an important role in some of the same areas.
We used this as a guideline and, at the same time, a more event tree driven -- not quite bottom-up but event driven -- picture on the left-hand side that was displayed in the morning by Alan.
The insights and characteristics that were identified on the right-hand side through this top-down approach to the PTS event matrix were consistent with the results of the bottom-up approach. Collectively we decided that we had a reasonable coverage of the scenarios, as classes of scenarios, at this point.
Each cell in the matrix, or each branch in the functional event tree on the left, represents and in principle contains a number of T-H classes. We populated the matrix with T-H, thermal hydraulic, runs.
Each thermal hydraulic run represents many, many scenarios. Sometimes nine, or 100, or 1,000 scenarios were grouped into one thermal hydraulic bin
because of their common characteristics.
Once you have this matrix, through an iterative process we combined cells based on similarity in PTS characteristics.
certain things that tell us, well, you know, cell X
and cell Y could be combined and represented by the
same classes or groups of thermal hydraulic runs, we
did so.
In this process we reduced the number of
thermal hydraulic runs we needed and also identified
new runs that we did not have because we saw an
obvious gap or hole. Then we mapped the PRA event
tree sequences, those tens of thousands of sequences,
individual sequences or groups of sequences, into the
T-H runs that we identified at that point.
We went through a series of frequency and
PTS impact engineering judgement and engineering
assessment to screen the number of scenarios that we
needed to consider basically. It's a typical
screening that, again, was mentioned this morning
based on frequency and impact without having the
benefit of the PFM runs at that point because the
FAVOR code was not ready for production use.
However, at the end we have this pleasant
feedback that the criteria we applied had actually
worked and none of the ones that we labeled as
insignificant, even though we passed a subset of those through for tests, turned out to be important, about 100,
I think, scenarios. I don't know the number. Is it
50, Terry, of sensitivity cases?
MR. DICKSON: Fifty sensitivity plus 47.
MR. MOSLEH: Right. Fifty sensitivity
plus 47 actual base scenarios that were used in FAVOR
code to generate PFM results indicated that the ones that we labeled as insignificant indeed turned out to be insignificant.
previous view graph, just to make sure I am still
following you, this is still in the binning process?
MEMBER BONACA: Okay. So some of these
branches, in fact, will end up in some bins that do not necessarily start with the same sequence.
MR. MOSLEH: Right.
MEMBER BONACA: The same initiator.
MR. MOSLEH: That's true.
MEMBER BONACA: The other question I had
was you presented before a simplified model to
highlight what the controlling parameters would be.
Again, it was only used for that purpose.
MR. MOSLEH: Yes, only.
MEMBER BONACA: Okay. At this stage you
are back now into the more sophisticated RELAP5 model.
MR. MOSLEH: Absolutely. These are just
guidelines for us to think through the problem.
MEMBER BONACA: Okay. Good. I just
wanted to make sure I am not losing the process.
MR. MOSLEH: So at the end of this process
we had a number of scenarios, groups of scenarios
mapped into the cells of this matrix. The numbers
that you see in this matrix represent the sum of the
frequencies of events going into various categories in
these cells.
You can note very quickly that the one
that I've highlighted yellow actually contains about
94 percent of the total frequency of all the scenarios that were identified and analyzed as being of PTS significance, applying the screening criterion of 10 to the -8 or less being screened out. When you add all those scenarios, 95 percent of those fall in that particular cell, which contains a number of dominant scenarios, one of which will be the subject of an example later.
The reason we were interested in identifying such a cell -- and you are looking for something like that -- was that instead of doing uncertainty analysis on all the cells, with or without a number, we could focus on that particular box.
Covering 95 percent is well within the
range of things that we do routinely in PRA. We do
uncertainty on the dominant scenarios rather than
everything because that's where you see the impact on
the results.
Taking that as a guideline, this
particular cell includes a number of classes of
scenarios including stuck-open SRVs, whether they
remain open or reclose, two different flavors of the
scenario. LOCAs between 1.5 inch and 4 inch in
diameter and LOCAs between 4 inch and 8 inch in diameter are among the things within that particular
cell, the yellow cell.
Then we start looking more seriously and
all these things in a way happened in parallel, more or less, because while we were doing the mapping and classification and characterization we were looking at the various sources of uncertainty and variability. We identified the sources of uncertainty and grouped them into two classes, two groups: those that deal with the structure of the model and the modeling process.
Those which were not in the first group were easily characterized as parameters of the
models. The line is somewhat arbitrary in a
theoretical discussion between model and parameter
but, you know, what we have listed there as model
uncertainty is what we mean by model uncertainty.
Namely, the process of event sequence
modeling and mapping scenarios to T-H runs. That
process is a source of uncertainty, the process of
constructing the PRA model and the mapping and
assigning thermal hydraulic runs.
Then, for each T-H run that you're running and assigning to different scenarios, you have sources of uncertainty associated with the RELAP code itself, and we grouped them into the so-called internal uncertainties and the RELAP input deck preparation and nodalization.
What you see here, if I go back from the
top, on the first sub-bullet, event sequence modeling and mapping to T-H runs, the first is the fact that you don't have all the possible details in the event tree.
That was mentioned in the morning
extensively and discussed by Alan that there are
certain inherent assumptions that you are making in
constructing event sequences.
What we did here, and I think it's an
improvement over the conventional way of doing PRAs
and, for that matter, actually the first PTS study, is
that whenever we identified sources of uncertainty, model uncertainty, sources of ambiguity and variability that were best addressed by adding to and refining the event trees, we did so.
We would negotiate back and forth, and Alan and the team would go back and add things to the event tree, or they would identify things that would help reduce the uncertainty and variability, so now you are removing sources of uncertainty. The most important ones, the most relevant ones I would say, are already addressed by improving and adding details to the event trees.
Second sub-bullet, assignment of event
tree scenarios to T-H bins. In the first round of
assignment of T-H bins to the scenarios, we were
looking at explicit signature, very specific signature
of an event for which we would run a thermal hydraulic
run. That initial assignment carries very little
uncertainty because the thermal hydraulic run is actually tailor-made for the scenario.
In the second step you now expand, and you have a number of representative T-H runs for a spectrum of scenarios that are going into the bin. This
is the part that required judgement and discussion and
assessment. There are a number of explicitly treated
parameters there that we considered such as operator
timing or equipment operation timing, parameters
external to the core and to the reactor affecting the
thermal hydraulic behavior of the reactor.
These were the types of things that helped us to
have a more explicit treatment and representation of
this source of uncertainty, namely the assignment of
representative runs. That's where we actually will
show you examples of how we expanded the thermal
hydraulic runs to address that source of uncertainty.
RELAP code. I have a list of relevant
modeling uncertainties in the following view graph.
It is a computational model. We identified a number
of sources of model uncertainty in there. We think we
treated the most important ones. That's our judgement.
The second bullet, nodalization, is not treated explicitly, but ISL and NRC used a fairly detailed nodal model. I think it's a realistic level of detail. I'm told that if they had gone to 500 nodes, the results would not have significantly improved.
Certainly we are not staying at the 10
node level so there is an optimum level of
nodalization that has been used and there is a
reasonably high degree of confidence about the quality
of such model.
Nevertheless, we did not treat that source explicitly.
Parameter uncertainty, all of the things
that I don't cover in the first group, such as boundary conditions, are included in the parameter uncertainty, such as, say, the temperature of the RWST. The important
ones were treated explicitly.
Now, once you go from the top to the
bottom progressively, most of the time with a few
exceptions, you are going from primary sources of uncertainty or variability to secondary sources -- from primary to secondary. In what sense? In the following sense.
We are trying to address and quantify the most dominant sources of uncertainty. There are many other sources of uncertainty whose impact is not significant. We were not planning and did not intend to cover all of them.
In terms of net effect from the top to
bottom, you see cases where you have the scenarios
that define a spectrum of scenarios that are mapped
into a particular cell in that matrix that I showed
earlier. The variation from scenario to scenario is
a bigger source of uncertainty than the variation in,
say, some coefficient within RELAP. Okay?
As such, I call one primary and the other
secondary. Not in terms of complexity because the
second one is a much more complex problem to handle.
Fortunately, the impact was smaller than we initially
thought and that way we were able to address the
uncertainties without solving all the problems
associated with modeling in thermal hydraulics.
A few comments about what I mentioned
earlier, and I think it was in response to a question
Dave addressed in the morning. We were also concerned about the 1D volume-averaging nature of the code, so there is a modification to the downcomer model that has taken place.
We used experimental results from the Oregon State APEX facility and computational fluid dynamics runs and results to get a sense of the validity of the 1D computation for the scenarios that we're
interested in. The judgement of the experts within
our team is such that that issue is not a significant concern.
Empirical correlations, a number of which are listed in the next view graph, were a source of concern, and we thought that they do have an impact on the results, so we treated the most important ones explicitly. That, in a nutshell, is the list of things that at the end of this process we zoomed in on.
On the parametric side we also sometimes
call them boundary conditions because they are not
inside the code or the RELAP run. They are the primary side break size, the primary side break location, decay heat, season, like winter or fall, and the outside temperature.
HPI state, up or down or partially functioning, HPI flow rate, and the core flood tank pressure were the ones that we characterized as boundary condition or parametric uncertainty, and they all turn out to be aleatory in nature because they are externally controlled or variable in nature, so to speak.
There is more of an inherent variability in their character than in the things on the left-hand side, which are internal to the RELAP code and are modeling in nature. The amount of circulation that we get in
the primary is addressed by a modeling trick using the
vent valve as a way of simulating circulation.
In fact, what I list here as sources of
uncertainty are also surrogate ways of addressing the
uncertainty. For a number of things, such as the component heat transfer, which is a calculated number based on a model within RELAP, we address that using surrogate modeling methods.
Flow resistance is the same. The break flow rate calculation in RELAP is a source of
uncertainty. We came up with a way of addressing that
through surrogate models. The computational mixing of fluid, the reverse flow in the cold leg, which is the result of a numerical artifact inside the code, was
also a source of concern that we tried to address.
So we have a classification of aleatory versus epistemic in terms of the character of this uncertainty. All the modeling ones are more epistemic because we claim that with better models -- by developing and utilizing more refined models -- you remove that source of uncertainty, so it's knowledge driven. The other ones on the right-hand side are nature driven, so we don't have much control over them.
So as you can see in this view graph, you
have, I don't know, maybe 10, 12, 15 parameters to
vary and each parameter has a certain range. How do
we then identify and actually get the impact of these
factors combined when we vary them at the same time?
That's a very difficult problem to address -- not fundamentally or in principle, because you can run the code billions of times and get an answer for each possible combination, or come up with somewhat more efficient sampling techniques and reduce the number of runs that you make. Nevertheless, it was much more than we could afford and much more than we could actually afford to give to the PFM folks for analysis.
So we decided to use ways of simplifying
the problem, reducing the number of variations that we
needed to consider, and find indirect ways of
identifying representative thermal hydraulic scenarios
-- few representative thermal hydraulic scenarios for
this massive number of combinations that you could possibly get.
The first step was to go through one-
factor-at-a-time kind of sensitivity analysis. About
200 such cases were run. Actually 200 plus because
sometimes we had a base case run and then one of these 200 cases changing one parameter. We kind of had a sense of how RELAP responded to changes in input values, so we developed additional sensitivity cases by extrapolating from existing runs. So there were a large number of sensitivities that we did to get a sense of the impact of individual factors on the result, on the
temperature and pressure.
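The one-factor-at-a-time procedure just described can be sketched as a simple loop. The `model` function below is a toy stand-in for a RELAP run (the coefficients, parameter names, and values are invented for illustration); the structure -- perturb one input from the base case, record the change in the output -- is the point.

```python
# One-factor-at-a-time sensitivity sketch (toy surrogate, NOT a T-H model).
def model(break_size_in, hpi_temp_f):
    # hypothetical response: mean downcomer temperature, deg F
    return 550.0 - 40.0 * break_size_in + 0.5 * hpi_temp_f

base = {"break_size_in": 2.8, "hpi_temp_f": 60.0}        # base case
variations = {"break_size_in": 4.0, "hpi_temp_f": 80.0}  # perturbed values

t_base = model(**base)
deltas = {}
for name, value in variations.items():
    case = dict(base)
    case[name] = value                    # vary exactly one factor
    deltas[name] = model(**case) - t_base # record the delta-T it produces

print(deltas)
```

With roughly 200 such cases, plus extrapolations from existing runs, one gets a table of per-factor delta-Ts like the `deltas` dictionary here, which is what feeds the later screening of important versus negligible factors.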
Primarily focusing on temperatures, I
mentioned earlier, we looked at the temperature first
and then pressure so one factor at a time. It was a
method of looking at the influence of individual factors.
Now, each of these sources of variability
or uncertainty such as decay heat or season or HPI
state, functional state of the high-pressure pumps,
some of them are continuous random variables and some
are discrete.
Like the state of HPI is off, on, or partially functioning -- four or five states. Some deal with temperatures that go from 20 degrees to 80 degrees, so they are continuous. We couldn't run all these
small increments on the values of the parameters.
We picked representative values from the
range and for each range we had an uncertainty
distribution, epistemic uncertainty distribution so
temperature could range from 20 to 80 and you have a normal distribution centered around maybe 45 or 50. Then
we say let's represent this curve using a discrete set
of points, 3, 4, 5 points. We did that and started
varying the parameters.
MEMBER SHACK: So that's like saying you
took the 5 percent, the median, and the 90.
MR. MOSLEH: Something like that. Or a
DPD which meant we collapsed the total probability
into three discrete points with probability mass so 30
percent of the mass is centered on one point, say 40
percent on another one, and one minus the rest which
would be on the third one. This is called DPD or
discrete probability distribution.
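The DPD collapse just described can be sketched as follows. The specific mass split (0.3/0.4/0.3) and the percentiles used to place the points are illustrative choices, not the project's actual DPD; the idea is simply that a continuous epistemic distribution is replaced by a few mass points, each of which can drive one representative run.

```python
# Sketch of a three-point discrete probability distribution (DPD):
# collapse a continuous distribution (normal over roughly 20-80 degrees)
# into a few points with probability mass.  Values are illustrative.
from statistics import NormalDist

dist = NormalDist(mu=50.0, sigma=10.0)
masses = [0.3, 0.4, 0.3]                 # probability mass per point (hypothetical)
cum = [0.15, 0.5, 0.85]                  # percentiles at which to place the points
points = [dist.inv_cdf(c) for c in cum]  # representative temperature values

assert abs(sum(masses) - 1.0) < 1e-12    # the masses must sum to one
for p, m in zip(points, masses):
    print(f"T = {p:5.1f} with probability {m}")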
And to get the effect of the simultaneous
impact of the factors we came up with another
computational trick that I will mention later. This
is an example of many, many sensitivity analyses that
were performed where an average temperature change,
10,000 second average on the downcomer temperature is
plotted against the different break sizes going from
2.8 inch to 8 inches for two different states of HPI,
the nominal and the failed state.
We can see easily that after about 6.6
inches or 5.7 inches the state of HPI doesn't matter
whether it's functioning or failed. Similar
sensitivities, tons of those were performed to give us
a sense of how things change.
This was to have a sense of the parametric
variables. On the modeling side I mentioned a kind of
surrogate model. This is a case where we recognize --
the experts recognize that the treatment of break flow
rate in RELAP is a source of uncertainty.
In fact, the current version of RELAP
calculates what we have shown here with the red line
second from top. For different break sizes that's the
flow rate that the code calculates at a certain
upstream pressure, which I think is 71 bars.
The older version of RELAP used the Ransom-Trapp model. That's the green one. The new one is, I think, the Henry-Fauske model we've been using since '98 in RELAP in the alpha -- what is the new version, the gamma version?
MR. BESSETTE: I forget.
MEMBER SHACK: The gamma.
MR. MOSLEH: The gamma version. The red
one is the one that all our thermal hydraulic runs are run with. However, we recognize that because of
the uncertainty in the computation of the break flow
rate, if you go to two extremes, namely to the black line, which is a frozen model, and the HEM line, which is the blue one right there, the homogeneous equilibrium mixture, then just for the model uncertainty associated with the break flow rate you have at least four different variations, four different options -- two, granted, are extreme, and the ones in the middle are more realistic -- nevertheless, there is a range.
Now, we couldn't really change the code
and the computational capabilities of the code so we
came up with a surrogate way of addressing the effect
of the break flow rate computation. Note that we can get the same variation in the mass flow rate by changing the break size. You change the size
horizontally and you get the effect of the mass flow
rate change vertically.
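The area surrogate just described rests on the observation that, at fixed upstream conditions, the critical mass flow scales with the break flow area. A minimal sketch, with an invented flow-ratio value standing in for the spread between the break-flow models:

```python
# Surrogate for break-flow-model uncertainty: scale the input break
# diameter so the area change reproduces a target mass-flow change,
# without modifying RELAP itself.  Numbers are illustrative.
import math

def area(diameter_in):
    return math.pi * (diameter_in / 2.0) ** 2

nominal_d = 2.8     # nominal break diameter, inches (hypothetical case)
flow_ratio = 1.25   # e.g. a flow model predicting 25% more flow (hypothetical)

# mass flow ~ area, so pick the diameter whose area gives the same change
surrogate_d = nominal_d * math.sqrt(flow_ratio)
print(round(surrogate_d, 3))
assert abs(area(surrogate_d) / area(nominal_d) - flow_ratio) < 1e-9
```

Moving "horizontally" in break size thus emulates moving "vertically" in mass flow rate, which is the trick the speaker describes for covering the Ransom-Trapp/Henry-Fauske/frozen/HEM spread.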
There are a number, four or five, of such numerical surrogates or methods that we had to use to represent these modeling uncertainty sources within RELAP.
This is a graph that shows, in an average temperature sense, an average over 10,000 seconds. You can
look at one of the cases, a 2.8 inch surge line LOCA
that if you keep the size at 2.8 inch and vary all
other parameters, ranging from cold leg break location all the way to the outside temperature, winter or summer, whether you have the event at hot zero power or at power, HPI failing or functioning, you can see the net effect of many different parameters in one graph and very quickly decide that
in an average sense only a subset of those are
important, not all.
When you go to the combinatorial kind of question, when you need to combine 15, you can maybe settle for five instead of 15 because the others have negligible impact or effect. That was another step of problem reduction.
Still you have a number of variables to vary, and we didn't want to run them all: if you take a few of these and a few values per variable, sometimes you get 5,000 combinations. We didn't want to run 5,000 cases. We made an assumption that maybe the net effect of these variables, the combined effect of them, would be a linear additive sum of the individual impacts.
Remember that we are focusing on this
particular cell, this particular yellow cell. We said
it would be ideal if we could add the net effect of
the individual temperature changes together and get
the effect of the combinations.
We tested that. What you see as T equals the sum of delta Ts means let's add the delta Ts coming from different sources of uncertainty. The probability of such a temperature situation would be the product of the probabilities of the different variables from your uncertainty distributions.
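The linear additive approximation just stated can be sketched in a few lines. The factor names, delta-Ts, and probabilities below are invented for illustration, and independence of the factors is assumed (as the product rule requires); this is a sketch of the idea, not the project's calculation.

```python
# Sketch of the linear additive model: for each combination of factor
# values, T = T_base + sum of per-factor delta-Ts, and the combination's
# probability is the product of the individual probabilities.
from itertools import product
from math import prod

t_base = 450.0  # base-case average downcomer temperature, deg F (hypothetical)

# per-factor (delta_T, probability) options from one-at-a-time runs (hypothetical)
factors = {
    "break_location": [(0.0, 0.7), (-15.0, 0.3)],
    "season":         [(+10.0, 0.5), (-10.0, 0.5)],
    "hpi_state":      [(0.0, 0.9), (-30.0, 0.1)],
}

combos = []
for choice in product(*factors.values()):
    t = t_base + sum(dt for dt, _ in choice)  # additive temperature
    p = prod(pr for _, pr in choice)          # product probability
    combos.append((t, p))

assert abs(sum(p for _, p in combos) - 1.0) < 1e-12  # probabilities sum to one
print(len(combos), "combinations from", len(factors), "factors")
```

Even this toy case shows the combinatorial growth (2 options per factor gives 2^n combinations), which is why the resulting (T, p) population was used only to pick a few representative curves, as the speaker explains next.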
As a way of testing this assumption -- in this limited scope of applicability, with no claim of generality -- for the cells we looked at, the cell that was the focus of our uncertainty analysis, this graph shows that for five cases where we actually ran RELAP, and the cases are listed 1 through 5, for various values of the parameters the RELAP-calculated average downcomer temperature listed in the last column was not very different and, in fact, was very close to the average temperature from the linear additive model.
At least in this case, based on this
sample randomly selected set of runs, we felt a little
bit more comfortable about the idea that maybe the
temperatures can be added.
I must add very quickly something to this,
that we did not use the result of the sum of this
temperature as a substitute for thermal hydraulic run.
Nor did we pass such information to the PFM.
Actually, these were used as indicators, as ways of
identifying a few representative curves.
We said, look, consider this population of
many, many possible combinations, plotted in terms of
average temperature and the frequency we observe on
the left-hand side, with the cumulative distribution
version on the right-hand side.
If I read the total temperature using my
additive model, pick three or four representative
temperatures from this distribution and the
corresponding probability that comes from all possible
combinations of these variables, those indeed are a
reasonable representation of the spectrum of this
massive, nonexistent number of runs that we would have
otherwise needed to represent the variability of these
various parameters.
We used this linear average temperature as
a guideline in identifying representative thermal
hydraulic average temperatures and representative
thermal hydraulic scenarios leading to, if you see,
that equation on the very last line, Tdc (i) and p(i)
which means this temperature and the corresponding
probability is the fraction of probability contained
in this particular bin, this pair of information.
Once we run the corresponding thermal
hydraulic runs using RELAP for the combination that
represents that particular total average temperature,
then we have not only the traces coming from RELAP
run, but we also have the corresponding probabilities
P1, P2, P3 that can be used to modify the frequencies
coming from the PRA.
We have the frequency phi equal to theta,
the sum of the frequencies coming from the PRA,
multiplied by the probability P1, P2, or P3 -- it
could go all the way to P5, I think, five different
cases coming from selecting representative runs from
this type of process.
This process helped us identify the
average behavior of the representative runs and then
we went and identified the representative runs from
the actual thermal hydraulic runs, the ones that came
closest to having a similar behavior.
At the end, this is the same matrix, two
pages, with the yellow highlighting the cases for
which we did uncertainty analysis. It actually lists all the
cases that we identified and put in that particular
cell. That represents the uncertainties addressed
through the process I just outlined.
At the end, what was sent to the PFM
analysis was in this particular format: single sets
of downcomer temperature, pressure, and heat transfer
coefficient and corresponding frequency.
MEMBER SHACK: Okay. I'm up to my
eyeballs here trying to follow this. Let me just come
back one step at a time.
Now, if I go back to the matrix -- no,
wait. I just want to go back one step. Your list of
results generated as input to PFM analysis where
you've broken the runs down now.
MR. MOSLEH: Oh, the very last view graph.
MEMBER SHACK: The very last view graph.
MEMBER SHACK: Now, the frequencies I see
here are the PRA frequencies times your P1, P2, P3?
MEMBER SHACK: And the case that I see is
the actual case where you've given them the
temperature and the number?
MR. MOSLEH: Right.
MEMBER SHACK: So they get the temperature
and the pressure for that and you've taken the
uncertainty over into the frequency space.
MEMBER SHACK: Okay. I've got that part.
Can I go back one more?
MR. MOSLEH: The reason for this
particular format of result was the input requirement.
There are many ways of presenting uncertainty but this
particular one was dictated by the interface with
FAVOR code.
MEMBER SHACK: Now, I take a particular
scenario here and I do -- I get the linear sum by
looking at the parameters that affect that scenario
and I do my linear summation to get my cumulative
damage or my cumulative distribution function. Then
I pick three cases out of that by weighting it.
MR. MOSLEH: Right. And these are just
guidelines for what kind of average temperature
behavior you would expect and you find the thermal
hydraulic run that comes closest from the set that you
have at your disposal.
MEMBER SHACK: Can I go back one more?
When do I start doing this? I have a scenario. I
have some PRA scenario that I'm following here.
MR. MOSLEH: A class. And then you have
typically at least one thermal hydraulic
representative. I don't want to say the best estimate,
but the one that comes closest to representing the
entire class as a single T-H run. You have that.
Kolaczkowski, go back to slide 18 way early in the
presentation package. I don't know if this will
answer Bill's question or not.
MR. MOSLEH: This is 18.
MR. KOLACZKOWSKI: Okay. Thank you. I
put this up to show the PRA process started with a
bunch of information and then we started building the
event tree models and we did some preliminary
quantification. We went back to Oconee as well as
internally did a review. We modified that stuff. At
that point we revised the model. We requantified it.
Now we start refining the binning and so on.
It's really in a way at the start of that
step. It's late in the process that now that we are
beginning to understand what's important; that is,
that 94 percent of the risk is in that one bin and
before we actually get the final quantification, if
you look at the bottom yellow box, event sequence
refinement arising from the uncertainty
considerations. That is where a lot of this
uncertainty process that Ali is explaining is applied.
It's a refinement on the coarse binning
that has been done up to that point in the process and
now we are refining the binning based on the things
that really matter from an uncertainty perspective
and, if necessary, redefining bins, creating new bins,
collapsing old bins, whatever it might be. That's
when that refinement step is taking place. I don't
know if that answers your question.
MEMBER SHACK: Yes, but when I finally get
to a refined bin, does a thermal hydraulic bin define
sort of a median set of boundary conditions that I
used for that?
MR. MOSLEH: Yes, central or median.
MEMBER SHACK: It predicts some sort of
central, median, whatever we want to call it.
MEMBER SHACK: A set of thermal hydraulic
conditions that I'm going to use for that bin.
MR. MOSLEH: Yes. Yes. There is a
thermal hydraulic signature for that bin.
MEMBER SHACK: There is a thermal
hydraulic signature for that bin. Now, how do I
define the parameter range? The epistemic
uncertainties in the model I think I can understand.
Those are kind of constant. It's the aleatory
uncertainties in the boundary conditions for that
central condition. How do I define those?
MR. MOSLEH: Well, for each of the
variables that we identified as being a source of
uncertainty, whether epistemic or aleatory, within the
main characteristics of that central median thermal
hydraulic signature we start varying those uncertainty
parameters, the ranges of which are defined either by
the nature of the uncertainty or by analysis.
For instance, if you are dealing with
outside temperature we had four possible cases;
winter, summer, spring, and fall. For some other
ones, the exact range, the reason for the range, and
the values and distributions assigned were the subject
of analysis, discussion, and expert assessment.
But at the end you end up with the ranges
for each of those uncertainty sources. Those then are
used to vary the relevant characteristics of that
central T-H case. By the way, I must say also that
sometimes a T-H bin was represented by more than one
T-H run.
MEMBER SHACK: But the delta Ti then is
the one-at-a-time variation.
MR. MOSLEH: Absolutely.
MEMBER SHACK: Give me that range.
MR. MOSLEH: One. That's the effect.
MEMBER SHACK: Okay. And then I get that
effect. I get the delta Ti from that. I guess I can
see how that works.
MR. MOSLEH: Then you say what if I change
all of them together and this is a nonlinear --
MEMBER SHACK: Linearize the problem.
MR. MOSLEH: Linearize them and you test
it in that limited context and know that it worked.
MEMBER SHACK: Okay. Then I convert that
into a cumulative distribution.
MR. MOSLEH: Right. Of all possible
combinations you put in the distribution: how many
cases do you see with an average temperature of this
magnitude? What fraction of the time do you see an
average temperature of some other magnitude?
That is the nature of your distribution.
You say from that distribution I'm going to pick three
or four or five values of average temperature. I look
at my table of thermal hydraulic runs and find something --
MEMBER SHACK: Okay. Or, if necessary,
add a new T-H run.
MR. MOSLEH: Right. If necessary. In
several cases we ran new cases.
CHAIRMAN FORD: I have an even more
fundamental question. I can understand roughly all
these questions about uncertainty in the model and the
boundary conditions. Are we satisfied that the RELAP
model, in fact, is accurate?
For instance, I noticed that in one of the
documents we're talking about modifying the Oregon
State experimental facility for PTS conditions. Have
those experiments been done? Has the RELAP5 code been
verified against those data?
MR. MOSLEH: Dave is the right person to answer that.
MR. BESSETTE: Well, yes. Every time you
pick up a code like RELAP and use it, you are faced
with that question. Is it adequate for the purpose
I'm using it. You have to go back and look at the
code assessment, code validation.
Early on we decided that associated with
the current PTS effort that we would run an
experimental program at Oregon State University APEX
facility dedicated to PTS experiments. We ran a
series of 20 experiments, separate effects and
integral, and one of the purposes was to look at
phenomena, look at mixing and things like that.
The other purpose was to provide data on
important PTS transients to assess RELAP. We haven't
run RELAP against all 20 experiments but we have run
it against about five experiments, so we've done that.
At one point we had some view graphs in
here dedicated to that but we took it out because we
probably wouldn't have time to show it.
CHAIRMAN FORD: It would have been really
helpful actually to have shown it because it would
have given a baseline confidence that you are starting
from something solid and then you can start to build
up on, well, what is my uncertainty in conditions,
etc., etc.
Since we don't have it on paper in front
of us, are there good correlations between observation
and theory?
MR. BESSETTE: Yes, particularly with
respect to these fairly global parameters like
pressure and temperature. We got very good agreement.
CHAIRMAN FORD: When you say very good
agreement, that is when you fed in basic uncertainties
rather than all this stuff -- I don't mean that
derogatorily. I mean, this PRA stuff, would it have
changed the results when it comes down to whether you
would have a thru-wall crack, the frequency of that
data? Do you understand what I'm saying?
The basic uncertainty or inaccuracy, if
you like, of code itself as based on experiment at
Oregon State, how much would that have affected the
end result of the frequency of thru-wall crack as a
function of temperature?
MR. HACKETT: I would guess -- let me
venture something out just to sort of frame the thing.
I would guess the short answer is we probably don't
know without having gone through that in an
interactive sense, through completion of even FAVOR
runs with the final impact on thru-wall cracking
frequency.
Dave might be able to say how much -- what
sort of confidence band we had with the data from APEX
being predicted by RELAP. Was that within five
percent or one percent on balance? As Peter said, not
having the curves in front of us here, were they that
close? Was it that good or were there some --
MR. BESSETTE: Specifically in terms that
are RELAP comparisons with Oregon State data, the
results -- we are still looking at -- whenever you
compare the code with the data, there may be
disagreements and we are still sorting out -- I can't
give an exact number because we are still sorting out
the information.
MR. HACKETT: I think maybe what we could
do we could take an action to get that information.
MEMBER BONACA: But it seems to me that,
for example, in the selection that you made going from
view graph 46 to 48, when you are looking at code
model uncertainties you were choosing those parameters
that would affect the correspondence between the
behavior that is predicted by RELAP5 and the sequence
that you have in the PRA.
The code's correspondence with actual
performance. For example, whether or not RPV valves
-- what is the state of those. I have the same
concern about characterizing the RELAP5 prediction, but
I think the bigger issue here is you can always
postulate that you have a transient that will give you
that kind of thermal hydraulic behavior.
The question is does it really correspond
to a certain frequency for a certain scenario of the
PRA that has a characterization of frequency, the
proper corresponding representation. It seems to me
that we are focusing on those elements that would
affect the correspondence between the two of them.
MR. BESSETTE: In this uncertainty
assessment what we tried to do was to identify the
dominant boundary conditions and dominant modeling
aspects of RELAP in terms of the uncertainty and
varying all those parameters.
MEMBER BONACA: I guess I'm trying to
understand myself. It seems to me that when you take
a certain thermal hydraulic sequence that you have
probably a number of ways to get there. One of those
is some bias in RELAP5 that will give you that.
The response shouldn't be that one but
that kind of temperature-pressure profile. The
biggest issue is whether that is the correct
pressure-temperature profile for that specific
sequence in the PRA. That's really what you're
looking for, right?
MR. MOSLEH: There are a number of --
MEMBER BONACA: Is that really what limits
your epistemic uncertainty?
MR. MOSLEH: Yes, that's a contributor to
the correlation between the scenario characteristics
and the thermal hydraulic results coming from RELAP
and, for that matter, with reality. It's an epistemic
source of uncertainty. We think the judgment coming
out of this exercise is that we captured the dominant
sources of uncertainty.
Even within RELAP in terms of its
prediction capability, wherever we find the source
that would result in a different temperature or
pressure trace, that we try to address those.
We think that those are of secondary
importance in the sense that the scenario-to-scenario
variability that gives us a bigger spectrum of
thermal hydraulic response, and the reduction of those
to a few cells for the PFM analysis, was a bigger
source of uncertainty than some other concerns about
the accuracy of the RELAP code. Obviously we did not
close the book on issues regarding RELAP and its
correlation with the --
MR. BESSETTE: The APEX results. Thus far
we have never attempted to define the uncertainty of
RELAP with respect to comparison between the code and
individual integral system experiments. We have
avoided that kind of thing in the past in favor of
doing uncertainty quantification based on the dominant
phenomenological uncertainties in the calculation.
One of the reasons we did that is that the
facilities are scaled, so just because we define a
comparison -- you can define a comparison between
RELAP and an individual experiment, but that, we don't
believe, gives us a real uncertainty quantification
for the code.
CHAIRMAN FORD: But surely if you -- I
mean, when a licensee comes along with a predictive
code of some sort we ask them to verify it and the
same should apply here. If you are going to use a
code like RELAP5, you should be fairly comfortable.
If you have a known set of inputs into the
code, you will get a verifiable output from the code
like temperature of the material. Then you can take
that known code and apply all these uncertainties of
the inputs but you've got to go through that first
step. I have been assuming since it's a fairly old
code and it has been verified sometime before this --
MR. MOSLEH: I referred to a number of
studies here and in other countries. I mentioned
Germany and Italy in comparing the results both at the
input level and the output level of the code: what
parameters we are assuming ranges for inside the code,
how they correspond to measurements and actual
experiment results, and what comes out of the code
compared to the measurements.
There have been a number of studies. The
work in Italy -- you are probably familiar with
Professor D'Auria's work -- and Eduard Hofer at GRS.
There is work, but it
is not really conclusive in the sense that this code
has been verified against a carefully selected set of
experiments run to verify and confirm all its results
and all ranges.
MR. BESSETTE: It doesn't help right now
but we did go over some of that information. We had
a thermal hydraulic subcommittee meeting back in July
of last year at Oregon when we went through some of
those results. Unfortunately it doesn't help you now.
MR. HACKETT: I think --
MEMBER BONACA: I think also the
nodalization. I mean, I didn't see anything in the
selection of epistemic parameters here that would
indicate that you were looking at that.
MR. MOSLEH: We did not vary the
nodalization. We did not vary on errors a code
operator could make. Those were listed as not
addressed. We think that in terms of number of nodes
250 is not bad. Whether node No. 250 or 217 or 156 is
the right node, with the right exact size and location
and all that, and there are 50 different ways of doing
that -- that type of uncertainty we have not addressed.
MEMBER BONACA: You see, yes, but I'm just
struggling with that. Maybe you're right but assume,
for example, that you had a model that consistently
under-predicts your cool-down. Now you are having
this frequency coming out for the sequences and you
have now this look that goes to the probabilistic
fracture mechanics that will tend to give you
optimistic results. I'm just making a hypothetical.
Assume that you have a bias in the code that will give
you results consistent with that.
MR. HACKETT: I'll interject at the risk
that I'm not an expert in this area. Obviously I
think what Dr. Ford is bringing up and what the
committee is discussing is very reasonable. Obviously
we are also in an enviable position here with RELAP
that there are cases you can benchmark, too.
I think what I'm hearing is that is not
the ultimate test of the code but that, at least,
needs to be documented and presented to the committee
again at some point. We'll take an action to come back on that.
MEMBER BONACA: Most of all, I mean, I'm
not at all saying -- you know, I think actually RELAP5
has a lot of credibility as a computer program but
there is so much that goes into the actual modeling
with the program, so on and so forth.
When you go from view graph 46 to 48, I
was left at the beginning with the question of why
only these epistemic parameters. I'm sure you have
plenty of reasons why you haven't included others but
it will be valuable for us to know where they are.
MR. MOSLEH: The specific ones listed, and
the exceptions, are based on analysis we have done.
Obviously there are cases where we say we did not
address this particular source of uncertainty because
it is assumed to be good, or we have reasons to
believe it is good, such as nodalization.
MR. HACKETT: I was going to say at this
point, too, based on discussion with the chairman
previously we were going to try to fit in what was
originally item 3 on the agenda which was to go over
the Oconee results in terms of introducing tomorrow's
example problem. If that's okay with the committee,
we'll go ahead and do that at this point and try and
finish that as expeditiously as we can.
MR. HACKETT: And that will be largely
Mark and Alan, I think.
MR. KIRK: Okay. Let me find where we are.
Okay. Go to view graph 101 and 102 in your slide
pack. Actually, view graph 102 you have seen before.
People pick on me for taking too long to make slides
so, to reduce their unit cost, I need to use them again.
The Oconee I results that we would like to
talk about again. This is sort of a quick view and a
high-level summary, which is not to say that we have a
detailed summary that we haven't shown you, because
we're in the process of putting that together right now.
We did think, after multiple years of talking
about process and procedure and aleatory and epistemic,
that everyone would find it refreshing to
actually see something that sort of looks like
results. We'll talk a little bit about some vessel
specific inputs that we have.
Again, as before, highlight where there
might be some significant differences relative to the
earlier 1980s analysis, the generic inputs of flaw
distribution and toughness distribution and, indeed,
fluence we have addressed earlier. Then we would like
to share with you some outputs and interpretations
that we have developed to date.
As you previously have been warned, but
I'm obligated under contract to tell you again, all of
these are preliminary. In terms of the PRA and
thermal hydraulic inputs to this analysis, we have
sort of gone through this before.
Starting from the 1E-4, 1E-5 number of
scenarios, that got down to approximately 150 total
transients that were analyzed by RELAP, of which 50
were screened, eliminated by inspection based on the
criteria that have been discussed earlier.
Fifty have been assigned to what we call
our base case which is in the process of being
revised, and 50 were used as thermal hydraulic
sensitivity cases to address some of the uncertainty
concerns that Professor Mosleh was just discussing.
It's my understanding that those results are still
being processed.
The initiating event frequencies that
we're looking at -- and this is just, you know, a very
broad brush -- range from E-9 to E-4. The only reason
to point that out is to say -- and, of course, this
reflects all of the current data.
The reason to point that out is simply to
say we are not getting low-vessel failure
probabilities simply because we have low-balled the
initiating event frequencies. That's not the case.
We have tried to take as realistic a view as possible.
Again, these reflect the most recently available
data regarding operator training, procedures, actuarial
data and so on. Just as a point to make relative to
where we've been before, some initiating event
frequencies are considerably lower than their 1980s
brethren. For example, the main steam line break
dropped from around an E-4 to E-6 event.
MEMBER BONACA: Just a question I had on
that. The range of frequencies there includes also
those -- it's just the initiating event, right?
MR. KOLACZKOWSKI: Yeah. There's a little
bit of perhaps confusion in terms. It's an
initiating event from the PFM perspective. It is the
whole scenario frequency. It's the bin frequency. I
apologize for that.
MEMBER BONACA: Also for the main steam
line break?
MR. KOLACZKOWSKI: In terms of the main
steam line break same thing. We are really talking
about a main steam line break that becomes a very
severe cooling event. I'll get to this point in just
a minute but in the early work where they essentially
took no credit for operator action, you are almost
left with just the frequency of the break.
But if you allow for operator credit and
ask how often it still becomes a severe event, you
drop that scenario's frequency by like two orders of
magnitude by giving credit to the operator. I guess that really
kind of gets to the next slide.
Just a couple of things I want to mention
quickly from the PRA perspective in the Oconee
analysis. What you see on this slide is a summary,
and then some comments on that, of what were
the two dominant PTS risk groups of scenarios, if you
will, from the early 1980s work.
The one on the left was, in fact, the most
dominant and it was called the residual group. It
was, as we call it here, the everything else group.
It was made up primarily of a collection of relatively
small frequency sequences, sort of the residuals from
a number of the hundreds of thousands of sequences
that they analyzed.
If it was fairly small frequency, they
kept throwing it into this "residual group." It got
to the point where that residual group's frequency got
to be pretty large. On top of that they then applied
the worst-case conditional probability of vessel
failure to all the sequences in that group, the worst
CPF that they had calculated for the 10 or 12 or
whatever T-H runs that they had made, which was a
conditional probability of failure of 5.4 times 10 to
the -3.
They applied that to all the sequences
that were in this residual group so a very
conservative treatment. This accounted for roughly
half of all the PTS risk in the early '80s work. Also
note that little to no -- there were a few exceptions
-- little to no human actions were credited in the
analysis for the residual group at all.
Comparing that to the current study,
again, we already pointed this out, we're using latest
frequencies and equipment failure probabilities, etc.,
based on experience up through about 1999 in the
industry and Oconee specific where we could.
I pointed out already that the number of
transients we're having today per year is a lot less
than the number of transients we were having in the '70s.
Clearly that has had an impact here.
We've already talked many, many times
about the fact that we didn't have a catch-all group.
Rather than binning everything into 10 bins,
we have binned them into 140 bins or whatever, some of
which were screened.
With more refined binning we didn't have to
do this very gross binning and treat small main steam
line breaks in with the large main steam line breaks,
for instance. Just the process itself obviously is a
lot better and we didn't have a catch-all group per se.
Human actions are credited realistically.
Then for all the bins that we have sent to the PFM
folks, of course, they calculated CPIs and CPFs for
each of those bins rather than throwing everything
into a residual bin and then applying the worst-case
CPF to that entire bin. Very different process.
Because of the changes in process, because
of using the latest frequencies, probabilities, giving
human credit, etc., it really made that residual group
essentially go away and get replaced, because of more
refined binning, by a more accurate assessment of what
the PTS risk really is.
A word about steam generator tube ruptures
because that was also among some of the dominant
scenarios in the early work. The more likely steam
generator tube rupture types of sequences did have low
CPFs in the original work and they still do in our
work. But, again, many of the less likely, that is,
the lower frequency types of steam generator tube
rupture events got dumped into this residual group and
then treated with the worst-case CPF.
Rather than really treating it as a steam
generator tube rupture, it was treated more like a
main steam line break so clearly it overestimated the
risk contribution from those lower frequency steam
generator tube rupture events. Whereas, again, we are
treating those as a bin unto themselves. Clearly our
work shows that you have to remember that if you really
treat it as a true steam generator tube rupture, the
break is fairly small.
One or two tubes is the assumption here.
The amount of cooling you actually get in the scenario
is pretty low and it drops the scenario out of
significance. Now, a few words about the main steam
line break in the remainder of this slide and the next
slide. Then I'll pass it off to somebody else.
The main steam line break represented
nearly all of the remaining PTS risk in the early work
beyond the residual group. Again, looking at
comparisons between the old or older work and this
work, of course, in this study I've pointed out that
we have allowed human credit for rapid isolation of
the feed to the faulted steam generator and also for
throttling high-pressure injection once the throttling
criteria are met.
Why did we do that? Well, again, really
looking at the way today's training and procedures
are, and crediting them appropriately, certainly
suggests that is the appropriate way that we ought to
be treating this
kind of scenario. Whereas in the original Oconee work
there was almost no human credit given to perform
those actions.
Main steam line break happened. It caused
a severe cooling event. There was no credit for the
operator. Not surprising it was a dominant PTS risk
contributor.
You did see on the third sub-bullet there
under the human credit just some typical human error
values for failure to isolate the faulted steam
generator by 10 minutes time into the event. Also the
failure to throttle the injection 10 minutes after
meeting the throttling criteria for injection, you did
see some typical human error values there.
Again, that is why the main steam line
break suddenly goes from a 10 to the -4 event to a 10
to the -6. You're taking a 10 to the -2 credit for
the operator to stop the feed of the faulted steam
generator, which effectively stops the cooling of the
primary system.
So, therefore, the sequence frequencies
are relatively low for a severe cool-down event due to
a main steam line break. Again, what the actions
effectively do is isolating feed limits the cool-down
rate. Throttling the HPI limits any repressurization
issue. The more likely scenarios which involve
successful actions by the operator become essentially
relatively benign cool-down events so the main steam
line break kind of goes away.
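The arithmetic behind that two-orders-of-magnitude drop can be checked directly, using the round figures quoted in the discussion:

```python
# Worked sketch of the frequency reduction described above for the main
# steam line break (MSLB): crediting a roughly 1e-2 human error probability
# for failing to isolate the faulted steam generator within 10 minutes
# drops the severe-cool-down scenario frequency by about two orders of
# magnitude. Numbers are the round figures quoted in the discussion.

mslb_initiating_frequency = 1.0e-4    # per reactor-year, early-1980s figure
p_operator_fails_to_isolate = 1.0e-2  # human error probability, ~10 minutes

# with operator credit, only the failure-to-isolate branch stays severe
severe_cooldown_frequency = (
    mslb_initiating_frequency * p_operator_fails_to_isolate
)
# roughly 1e-6, matching the E-4 to E-6 drop quoted in the discussion
```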
And I guess that's reinforced just a
little bit. I'm sorry. I went the wrong way.
Reinforced just a little bit in the next slide is why
we credited operator actions during the main steam line
break. Again, this is just meant to be an example of
all the actions that we credited where we thought it
was appropriate.
Overcooling prevention and control are an
integral part of the Oconee crew training. Oconee
operators are sensitive to overcooling scenarios,
especially because of the once-through steam generator
design. They are pretty sensitive to knowing that if
they don't control feed properly, that they can get a
serious overcooling event.
They are very sensitive to the fact that
use of the once-through steam generators, etc., they
need to get on things like a main steam line break
scenario fairly quickly, isolate that faulted steam
generator and stop the cooling.
Instrumentation is available and
procedures are written to facilitate the
identification of an excessive steam demand. The way
the procedures are written and the way they jump into
certain EOPs there is a bias towards, for instance,
when you do see a main steam line break or the
indications thereof, the procedures are written such
that they can immediately go to a step in the
procedure that says isolate that faulted steam
generator right now. We don't wait until we get to
step 42. We can do step 42 right now. That's the way
the procedures are written.
Additionally, there are warnings to
throttle the HPI which appear at numerous points
throughout the procedures and, in fact, this is a
so-called continuous action step. They look for those
throttling criteria. Once they are reached they can
begin the throttle regardless of where they actually
are in the EOPs as they are responding to the event.
Then finally I pointed out the importance
of going to Oconee and observing on the simulator some
actual overcooling events. We felt much more
comfortable with our human error estimates because
they were based in part on observations that we saw,
the way the crews actually respond to a main steam
line break event and how quickly, or not quickly, they
can get to certain steps in the process to isolate the
steam generator and so on and so forth.
That's a nutshell of at least some of the
examples on why, for instance, in this case the two
most dominant PTS scenarios in the early work have now
kind of gone away based on differences between the old
work and the present work.
MR. KIRK: Okay. Back to some materials
changes. This gets to the way we model the vessel.
What you see here is an unwrapped view of the Oconee
vessel with various plates and welds indicated. The
point I would like you to take away here is in the
earlier analysis the whole vessel would have been
assumed to have been made out of the most embrittled
material, in this case being the circ weld that's
indicated in green.
That is very different than what we do
today. What we do today is we have documented
information that the licensees have supplied: the
material properties, the copper, the nickel, the RTNDT,
the Charpy shift, and so on, for each of the different
plates and welds and we use those in the analysis.
I haven't indicated the least embrittled
material here but previously the whole vessel would
have been assumed to be at the RTNDT at the end of
Oconee's original license, which was 40 years, of 183
degrees Fahrenheit. That would have been this whole picture,
whereas now it's only this little slice through here.
In other regions the plates may be up to
100 degrees lower so we use the appropriate values.
Obviously a much more realistic model and one that
removes quite a bit of conservatism.
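The modeling change just described can be sketched in a few lines. All region names and RTNDT values below are invented for illustration; they are not Oconee data.

```python
# Sketch of the old bounding model vs. the new region-specific model:
# previously every plate and weld was assigned the most embrittled
# material's RTNDT; now each region uses its own reported value.
# All names and numbers are hypothetical.
regions_rtndt_f = {
    "circ weld":    183.0,  # most embrittled region (old bounding value)
    "axial weld":   151.0,
    "lower plate":   95.0,
    "upper plate":   83.0,  # roughly 100 F below the bounding value
}

bounding = max(regions_rtndt_f.values())
print(f"old model: every region treated as {bounding:.0f} F")
for name, rt in sorted(regions_rtndt_f.items(), key=lambda kv: -kv[1]):
    print(f"new model: {name:12s} {rt:6.1f} F")
```

The point is only that the bounding approach applies the worst value everywhere, while the region-specific approach removes that conservatism for the less embrittled plates.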
This slide in your packet has been
modified -- this slide that is on the screen has been
modified from the one that is in the packet to
hopefully prevent you from asking me a question I have
now been asked two times and it was misleading on the
earlier version.
This is an attempt to summarize both the
PRA input to the PFM analysis, that being the
initiating event frequencies on the horizontal axis.
Just to remind you, call attention to the word "mean"
because each of these initiating events, of course,
has a histogram associated with it.
And then on the vertical axis you see the
output of the PFM code and that should, again, say --
I think that is actually the 95th percentile
conditional thru-wall crack probability.
Be that as it may, a couple of things to
take away. One is that if you go along the horizontal
axis I broke the logarithmic scale so the horizontal
axis is zero. Not a very small number but zero.
There have been quite a few, in fact,
about half of the base case sequences that we analyzed
in PFM had no probability of failure because the
applied K never got up into the KIc distribution so by
definition no risk. About half of the sequences had
some risk associated with them.
Of course, it's the product of the X and
the Y variable that gives you your risk metric, at
least in conceptual terms. What you want to look for
are events up here in the upper right-hand corner and
those will be our dominant events.
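In code, the risk metric just described might look like the following sketch: each sequence's through-wall crack frequency is its mean initiating event frequency (X) times its conditional through-wall crack probability from the PFM code (Y). All sequence names and numbers here are hypothetical, not results from the Oconee analysis.

```python
# Risk metric sketch: frequency (X) times conditional through-wall
# crack probability (Y). Sequences whose applied K never reaches the
# KIc distribution get a conditional probability of exactly zero and
# thus, by definition, no risk. All numbers are hypothetical.
sequences = {
    #                          mean init. freq /yr, cond. P(through-wall)
    "medium LOCA":             (1.0e-4, 3.0e-3),
    "stuck-open primary SRV":  (4.0e-3, 2.0e-4),
    "main steam line break":   (1.0e-4, 1.0e-7),
    "SG tube rupture":         (7.0e-3, 0.0),  # K never reached KIc
}

risk = {name: f * p for name, (f, p) in sequences.items()}
dominant = sorted(risk.items(), key=lambda kv: kv[1], reverse=True)
for name, r in dominant:
    print(f"{name:25s} {r:.1e} per year")
```

Sorting by the product is what picks out the "upper right-hand corner" events described above.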
What you see are a whole bunch of LOCAs
populating that area, some stuck-open valves on the
primary side which is kind of like a LOCA except that
the operator might be able to do something about it,
whereas a LOCA they can't.
Also call your attention to two points
that were just made previously. Steam generator tube
ruptures, the red circle is down here. No
contribution in this case. Main steam line break
which, as was just discussed, used to be a dominant
contributor way down here and so what you'll find out
is on the next graph main steam line break doesn't
even show up.
What this is is again another very high-
level condensation of the results of the preliminary
Oconee analysis. The bar graph on the left-hand side
of your screen shows the contribution of several
different classes of events to the probability of
crack initiation. What you see is all those LOCAs act
together and they make up about 70 percent of the
cracks that have initiated in the vessel.
However, in general, of course, and with
increasing break size the LOCA pressure drops quite
dramatically. It doesn't form -- it is still a
dominant contribution to the probability of failure
but doesn't make up nearly as much as the initiation
frequency.
The single dominant sequence is shown here
in the red checkerboard. That is a stuck-open
pressurizer safety valve for which the operator does
nothing and it finally recloses automatically at the
RTS setpoint. For a single transient it was big but
overall a fairly small contribution to the crack
initiation frequency. Since the pressure stayed so
high while the vessel got cold, it contributed roughly
30 percent and was a single dominant transient to the
failure frequency.
Comparison of the red and the green shows
you the benefit, at least in this one instance, of
operator action in that the green represents two
variants on the red scenario where the operator took
control of throttling the HPI. One scenario was at
one minute. One scenario was at 10 minutes.
Obviously those two scenarios combined
contributed more to the crack initiation frequency
but, happily, it severely mitigated -- severely being
a good word in this case -- it significantly mitigated
the number of those cracks that went through the
vessel wall.
Here we have two scenarios that
contributed a little under 20 percent to the
initiation frequency, but that only increased to
roughly 30 percent of the failure frequency, whereas
that same scenario without operator intervention,
perhaps only five percent of the initiation frequency
turning into almost 30 percent of the failure frequency.
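The split between initiation-frequency share and failure-frequency share can be illustrated with a small sketch: each sequence carries its own conditional probability that an initiated crack goes through the wall, so the two shares differ. All numbers below are invented, chosen only to mimic the rough percentages quoted in the discussion.

```python
# Why a sequence's share of crack initiations differs from its share of
# vessel failures: a severe sequence (high pressure while the vessel is
# cold) drives more of its initiated cracks through the wall.
# All numbers are hypothetical.
seqs = {
    #                                init. freq /yr, P(through-wall | init.)
    "LOCAs (combined)":              (7.0e-7, 0.10),
    "stuck-open SRV, no action":     (5.0e-8, 0.90),  # repressurizes while cold
    "stuck-open SRV, HPI throttled": (2.0e-7, 0.15),  # operator limits severity
}

init_total = sum(f for f, _ in seqs.values())
fail_total = sum(f * p for f, p in seqs.values())
for name, (f, p) in seqs.items():
    print(f"{name:30s} init {f / init_total:5.1%}  fail {f * p / fail_total:5.1%}")
```

With these invented inputs the no-action SRV sequence is a few percent of initiations but roughly 30 percent of failures, mirroring the pattern described above.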
MR. KOLACZKOWSKI: Let me point out that
tomorrow when we go through the example it is focused
on this SRV reclosure and operator actions, etc., etc.
MR. KIRK: Observations which, I think, we
have already made. Dominant scenarios are all primary
system LOCAs. By realistic accounting of operator
action we have been able to mitigate significantly the
influence of secondary system events on the total
probability of failure.
In fact, it's fair to say that secondary
system events haven't played a role at all in Oconee,
at least at this stage. Finally, the time of SRV
closure and, thus, re-pressurization has a significant
influence on event severity. That is a pretty obvious
point.
Ed showed you this at the beginning and
promised it was where we would get to at the end and
it hasn't changed all day. Really great how that
works. We are currently in the process of scrubbing
this and hopefully removing the pink preliminary. We
know now that there are things that we certainly do
need to change.
We found some things that will drive these
results in an upward direction and some in a downward
direction. I was keeping count on the back of a
matchbook and I finally gave up. Of course we're
going to rerun it again but I've given up even trying
to predict. Certainly the news is positive at this
point and I don't think anything that we found is
going to change that overall positive view.
I think -- yes. Just to summarize, they
look promising. This all leads us to the perception
that the risk of vessel failure is lower than we
previously believed it to be, "previously" being
circa the 1980s. Obviously we still have a considerable
amount of work to do.
Specifically we need to establish a new
risk goal so we know where to draw that horizontal
line. We need to assess the contribution of external
events to overall risk. Of course, we need to
complete and finalize the analysis for Oconee and
complete the analysis of the other three plants to
make this a done deal.
That's the end of the slide set on Oconee.
MEMBER BONACA: A question I had was
tomorrow when we have the example, it would be
interesting to have a sense for this curve that you
showed us, you were pointing out that the main steam
line break sequence has been essentially eliminated
because you went from 10 to the -4 to the -6. If I
understand, Oconee has no main steam isolation valves
so it's all manual.
MR. KOLACZKOWSKI: That is correct. I
mean, they have
turbine stop valves, etc., if the break was really
down right by the turbine but it is true if the break
was in essentially a large portion of the steam line
itself, there are no MSIVs that would automatically
close off the event. It also relies on operator
action.
MEMBER BONACA: It would be interesting to
understand the sensitivity of these results on that
kind of range of parameters, how they propagate.
MR. KOLACZKOWSKI: Again, we will do our
best to address that. I mean, the example is really
aimed at the LOCA scenarios because that's where the
dominant results are. I think we can certainly
digress where we need to and talk a little bit about,
therefore, why is the main steam line break still
going away. We can certainly attempt to do that.
MEMBER BONACA: Because, I mean, that's an
important element. If it doesn't go away for
some reason, then, you know, that pointer comes up
somewhat and I would like to understand. Before, I
mean, I get all excited because this is exciting.
MR. KOLACZKOWSKI: We feel confident that
the main steam line break is going to go away.
MEMBER BONACA: But there is an issue of
this kind of uncertainty.
CHAIRMAN FORD: Ed, you said, and I don't
think you said it flippantly, in the beginning this
turns out to be a correct scenario for a majority of
the stations nowadays, "PTS situation goes away."
Could you expand on that?
MR. HACKETT: Yes. In fact, one of the
things I was going to mention, a conversation with Ron
Gambol during the lunch break.
Mark, if you could put up that last slide.
We had a conference call with the industry
before we did this. This is about a week ago now.
Just focusing up there on the one that says, and we
carefully worded this, "Leads to perception that the
risk of vessel failure is lower."
Ron was reminding us on that call, and
reminded me again today, to be careful about
overselling this at this point. Also that this risk
perception is a relative thing. Oconee's risk was low
to start with. If there are Oconee people present or
on the line, their risk was lower to start with. We
think now it's lower still. But whether it's two
orders of magnitude or four, or maybe by the time we
get done with all of this it's different than that to
some extent, we don't know right now.
You asked a difficult question, though.
Maybe to kind of re-pose your question: does this rule
go away if we can show three orders of magnitude
improvement and is there any need for a PTS rule if
that sort of thing can be demonstrated.
We haven't decided that yet and that's a
debate I know we've had. Mike Mayfield challenged us
with that when we dry-ran this presentation. What if
you are -- for instance, what if we haven't wrung
everything out of this and when we do, we are three
orders of magnitude better than what we thought we
were, four or five as the case may be, depending on
what you are comparing to.
Then I think it is incumbent on us as
regulators to look real hard at whether or not the
resources are necessary to go through with the revised
rulemaking. I think it's very premature at this point
to say that. My personal view is I would be surprised
if it goes away to that extent as opposed to being a
modified criterion.
CHAIRMAN FORD: It will probably have to
be a modified criterion surely because of Palisades and
Beaver Valley.
MR. HACKETT: At a minimum it will be --
CHAIRMAN FORD: Fort Calhoun.
MR. HACKETT: Having it go away, I think,
would be probably optimistic.
MEMBER SHACK: What's the difference in
raising it 50 degrees and making it go away? I mean,
for all practical purposes?
MR. HACKETT: Good question there, too,
because we had predicted in advance of this that,
first off, we just about eliminated anyone from major --
MEMBER SHACK: Unless we came back and
said for license renewal to 120 years.
MR. HACKETT: Right. That's where I --
MR. KOLACZKOWSKI: I heard Oconee was
going to ask for 300.
MR. HACKETT: We still had as of the last
go-around on this, and I think NRR -- I don't know if
there are any NRR reps here late today but NRR had
looked at their reactor vessel integrity database and
projected forward to at least the 20-year renewal
period. There were still potentially four or five
plants that could be impacted in that time frame
depending on how this thing goes. Less of a concern
than it was before but it is yet to be determined
exactly where we end up. I know there's a lot of
hedging going on there but that's kind of where we are
at the moment.
MEMBER SHACK: Those will be impacted if
the rule didn't change, if the screening criteria
stayed right where they are, but I don't think you have
to shift it very much.
MR. HACKETT: You wouldn't have to shift
it very much. In fact, I think on the order of 20
degrees or more would probably take care of the 20-
year renewal period is my recollection.
CHAIRMAN FORD: You mentioned also during
the break something about the EPRI verification and
validation exercise.
MR. HACKETT: Yeah. I guess there are
several items in that regard. We are pursuing to the
extent possible, and I'm also not the right person,
not the right QA person, but we have enlisted the
support of folks internal to the NRC and also folks
who have done this type of work for EPRI to do
verification and validation in accordance with, I
believe, we're using an IEEE standard.
MR. MALIK: We are going through the V&V
process. On December 6th and 7th we had a public
meeting involving the industry contractors and we were
doing the validation part of the process. In that we
have taken several aspects of it, PI aspects of it and
PFM aspects of it.
There are four or five different teams
that are each at work. They have come up with a set of
problem statements. They will be using those to come
up with solutions from independent methodologies and
compare them with what the FAVOR code is giving,
comparing the two.
During the months of January, February,
and March all this activity will take place. In April we will meet
again and there we will correlate all the results and
see how they are coming up with the validation
criteria. We will be monitoring the progress.
MR. HACKETT: It's pretty obvious we're at
some risk here because in an ideal sense, we would
have had this completed before we started runs for
Oconee or any of the other plants. The schedule
pressure on the project has precluded that so we are
running it in parallel with a lot of hope and crossed
fingers that we're not going to turn up any major
show stoppers in the QA.
If we do, we're going to have to circle
back on ourselves to see what that means. So far so
good but it remains as sort of a stay-tuned situation
to see how that comes out.
I guess that concludes the staff and
contractor presentation for today. We'll go through
the detailed example tomorrow. Hopefully we have
enough time to do that. It looks like we're not quite
back on schedule but we're close.
CHAIRMAN FORD: Ed, thank you very much
indeed. Are there any other questions for today?
We're in recess until tomorrow at 8:30.
MR. HACKETT: Thank you.
(Whereupon, at 4:33 p.m. the meeting was
recessed, to reconvene the following day at 8:30 a.m.)